id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
247000542 | pes2o/s2orc | v3-fos-license | Calculation of Stability Limit Displacement of Surrounding Rock of Deep-Buried Soft Rock Tunnel Construction Based on Fuzzy Logic Matching Algorithm
With the continuous development of society and the economy, infrastructure construction is expanding on a large scale. Stress concentration after excavation in a deep-buried soft rock mass may cause the stress level of the rock mass to exceed the strength of the surrounding rock and form a plastic zone in which plastic shear slip or plastic flow occurs. Based on a fuzzy logic matching algorithm for the surrounding rock of deep-buried soft rock tunnel construction, this paper analyzes how surrounding rock deformation and supporting force develop over time during actual construction by establishing elementary-function mathematical calculation equations, and constructs a procedure for determining the stability limit displacement value of the surrounding rock of an actual deep-buried soft rock tunnel. Practical results show that the procedure based on the fuzzy logic matching algorithm can effectively meet the requirements of actual deep-buried soft rock tunnel engineering.
INTRODUCTION
With the continuous development of China's society and economy, infrastructure construction has been expanding on a large scale [1][2][3]. However, due to the country's topography, mountainous conditions are frequently encountered. The stress concentration after the excavation of a cavern in a deep-buried weak rock mass may cause the stress level of the rock mass to exceed the strength of the surrounding rock, forming a plastic stress area. Plastic shear slip or plastic flow tends to increase the risk of construction and threaten the quality and safety of the project [4][5][6][7][8]. Therefore, it is extremely necessary to calculate the stability limit displacement of the surrounding rock for deep-buried soft rock tunnel construction [9][10][11][12].
Mathematical models are usually used for the scientific study and analysis of natural phenomena. Especially after the advent of calculus, differential equations have been used to describe natural phenomena. However, the calculus approach has its shortcomings: it can only describe continuous and smooth change and cannot describe discontinuous, abrupt phenomena. In the material world, most existing phenomena are discontinuous, abrupt phenomena: quantitative changes develop to a certain extent and become qualitative changes, which are non-differentiable, such as earthquakes, volcanic eruptions, tsunamis and stratum movement. For these common discontinuous phenomena in the material world, catastrophe theory can better explain their nature [13][14][15]. With a deepening understanding of the material world, people began to realize the need for a non-linear, discontinuous theory to study these phenomena; catastrophe theory was born and developed against this background.
Based on the fuzzy logic matching algorithm, this paper analyzes the development of surrounding rock deformation and supporting force over time by establishing elementary-function mathematical calculation equations, and tries to construct a procedure that can be used to determine the stability limit displacement value of the surrounding rock of an actual deep-buried soft rock tunnel.
CATASTROPHE CRITERION OF SURROUNDING ROCK INSTABILITY OF DEEP-BURIED SOFT ROCK TUNNEL BASED ON CATASTROPHE THEORY
In the simulation calculation of deep soft rock excavation, we define the plastic yield zone volume V(k) of the surrounding rock after k excavation steps as V(k) = Σ_{i=1}^{N} V_i(k) (Eq. (1)), where k is the number of excavation steps, N is the number of rock mass units that yield, and V_i(k) is the volume of the i-th yielded unit. Tunnel excavation is often accompanied by loading or unloading, and V(k) changes under their influence. The yield zone volume can then be represented by a continuous function V(k) = f(t), where t is the moment of loading or unloading.
A Taylor series expansion of the yield volume function V(k) is carried out, generally truncated at the 4-th power term, giving a quartic polynomial of the form V(t) = a4·t^4 + a3·t^3 + a2·t^2 + a1·t + a0. A change of variable turns this into an equivalent quartic with coefficients b0, …, b4, each b_i being determined by the a_i. Dividing both sides of Eq. (4) by b4 and transforming yields the mathematical function of the cusp catastrophe model, Eq. (5), in which c is a constant that can be ignored; this can be further simplified into the standard functional formula of the cusp catastrophe model, Eq. (6), of the form V(x) = x^4 + u·x^2 + v·x. In Eq. (6), Δ = 8u^3 + 27v^2 is taken as the distance between the continuous evolution state and the critical state of the rock mass, and Δ is the mutation (catastrophe) characteristic value. If and only if Δ ≤ 0 does the system undergo a sudden change across the bifurcation set, so the value of Δ is the criterion for instability. The specific criteria are as follows: when Δ > 0, the surrounding rock system is considered to be in a stable state; when Δ = 0, the surrounding rock system is considered to be in critical equilibrium; when Δ < 0, the surrounding rock system is considered to be unstable and damaged.
In the numerical simulation, according to the standard, the conclusion obtained from the analysis of the displacement monitoring data of a measuring point is accurate only for the surrounding rock within a range of about 5 m in front of and behind the current tunnel face. For convenience, the statistics of the plastic yield zone volume in the numerical simulation are therefore taken only for the surrounding rock within 2 meters of the measuring point section. The measuring point is located at the section 15 meters from the tunnel entrance, so the monitored space is the surrounding rock in the range of 14 to 16 meters from the tunnel entrance.
Through monitoring of each excavation step, the volume of the plastic yield zone of the surrounding rock within this two-meter range is measured, and the increase of the plastic yield zone volume in each excavation step is calculated, thereby establishing the measured sequence of plastic zone volume increments of the surrounding rock for each excavation step, {V} = {V(1), V(2), V(3), …, V(m)}, where m represents the m-th excavation step. A fourth-degree polynomial is fitted to this volume increment sequence to solve for the polynomial coefficients a1, a2, a3 and a4; the coefficients are then substituted into the relevant calculation equations to obtain the values of u and v, and the mutation characteristic value is obtained from Δ = 8u^3 + 27v^2. Finally, the catastrophe criterion is used to judge whether the state of the surrounding rock system is stable or not.
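To make this pipeline concrete, the sketch below (Python/NumPy, given as an illustration rather than the paper's own code) fits a quartic to a hypothetical volume-increment sequence and evaluates the catastrophe criterion. np.polyfit plays the role of MATLAB's polyfit, and the reduction to the standard form x^4 + u·x^2 + v·x (shift to remove the cubic term, then divide by the leading coefficient) is an assumed normalization, since the paper's intermediate formulas are not fully reproduced here.

```python
import numpy as np

def cusp_delta(increments):
    """Fit a 4th-degree polynomial to a plastic-zone volume-increment sequence
    and return (u, v, delta) of the cusp catastrophe model.

    Assumes the reduction a4*t^4 + ... + a0  ->  x^4 + u*x^2 + v*x
    (shift t to remove the cubic term, then divide by a4).
    """
    steps = np.arange(1, len(increments) + 1)
    # np.polyfit, like MATLAB's polyfit, returns coefficients from the
    # highest power to the lowest: [a4, a3, a2, a1, a0]
    a4, a3, a2, a1, _a0 = np.polyfit(steps, increments, 4)
    b, c, d = a3 / a4, a2 / a4, a1 / a4
    u = c - 3.0 * b ** 2 / 8.0
    v = d - b * c / 2.0 + b ** 3 / 8.0
    return u, v, 8.0 * u ** 3 + 27.0 * v ** 2

def classify(delta):
    """Apply the catastrophe criterion quoted above."""
    if delta > 0:
        return "stable"
    if delta < 0:
        return "unstable (damaged)"
    return "critical equilibrium"

# Hypothetical volume-increment sequence (m^3) for the first ten excavation steps
example = [89.4, 92.1, 95.8, 101.2, 104.5, 113.0, 118.7, 131.9, 140.2, 155.6]
u, v, delta = cusp_delta(example)
print(classify(delta), u, v, delta)
```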
The tunnel length in the numerical simulation of deep soft rock tunnel excavation is 80 meters. Because the tunnel entrance is strongly affected by the end restraint effect during excavation, this study sets the tunnel section 15 meters from the entrance as the main monitoring cross-section. There are 57 steps in the 3D simulated tunnel excavation sequence. A pre-written "plastic zone volume" command stream is used to obtain the cumulative plastic zone volume of the surrounding rock for the 57 excavation steps, and by comparing the cumulative yield zone volumes of adjacent excavation steps the plastic zone volume increment values are calculated, giving the plastic zone volume increment sequence {V} = {V(1), V(2), V(3), …, V(m)}. MATLAB is used to fit a 4-th degree polynomial to the volume increment sequence over the excavation step sequence and to solve for the corresponding polynomial coefficients. Substituting a1, a2, a3 and a4 into the formula, the values of u and v are calculated, and thus the numerical value of the mutation characteristic value Δ.
After obtaining the mutation characteristic value Δ, this paper uses the change in the value of Δ to determine the stability limit displacement value of the surrounding rock of the deep-buried soft rock tunnel. The limit displacement of the surrounding rock occurs at the excavation step where Δ changes from positive to negative: in simple terms, if Δ > 0 at the n-th excavation step and Δ < 0 at the (n + 1)-th excavation step, then the displacement value of the surrounding rock at the (n + 1)-th step is the limit displacement value sought.
Since there are a total of 18 test groups in this simulation and each simulation has 57 excavation steps, calculating the Δ value for all 57 excavation steps would make the workload very large, so the dichotomy (bisection) method is used to determine the critical point at which the mutation characteristic value Δ changes sign. The specific search process is as follows: (1) First calculate the mutation characteristic value Δ of the last step, the 57-th excavation sequence; (2) If the mutation characteristic value Δ > 0 at step 57, the surrounding rock system is in a stable state, that is, no instability occurs during the excavation process, and the search ends; (3) If the mutation characteristic value Δ < 0 at the 57-th step, the surrounding rock is in a state of system instability, and the mutation characteristic value of the 29-th step is calculated according to the dichotomy; (4) If the mutation characteristic value Δ > 0 at the 29-th step, the dichotomy is continued to calculate the Δ values between steps 29 and 57; if the mutation characteristic value Δ < 0 at the 29-th step, the dichotomy is used to calculate the Δ values between the 1st step and the 29-th step; (5) The calculation and search continue in this cycle until the excavation step at which the mutation characteristic value Δ changes from greater than 0 to less than 0 is found; the cumulative displacement of the measuring point at this step is the stability limit displacement value of the surrounding rock of the deep-buried soft rock tunnel.
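A minimal sketch of this dichotomy search is given below, reusing the cusp_delta helper from the previous sketch; the increments_up_to callback, which returns the volume-increment sequence of the first k steps, is a hypothetical placeholder for the FLAC3D post-processing, and a single change of sign of Δ along the excavation sequence is assumed, as in the paper's search.

```python
def find_limit_step(increments_up_to, n_steps=57):
    """Bisection search for the excavation step at which delta turns negative.

    increments_up_to(k) must return the volume-increment sequence of the first
    k excavation steps (a placeholder here for the FLAC3D statistics).
    Returns None if the surrounding rock stays stable through the last step.
    """
    def delta_at(k):
        return cusp_delta(increments_up_to(k))[2]

    if delta_at(n_steps) > 0:        # stable through the whole excavation
        return None
    lo, hi = 1, n_steps              # delta(hi) < 0: the sign change lies in (lo, hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if delta_at(mid) > 0:
            lo = mid                 # still stable at mid -> the change is later
        else:
            hi = mid                 # already unstable at mid -> the change is earlier
    return hi                        # first unstable step; the cumulative displacement
                                     # of the measuring point at this step is the limit
```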
Taking the 10-th group in the uniform test design as a typical example, this section shows the specific method for determining the limit displacement value of surrounding rock stability using the plastic zone volume mutation criterion.
The compiled plastic zone volume statistics command stream is used to perform plastic zone volume statistics on the FLAC3D files of the 57 excavation steps of the 10-th group, obtaining the cumulative plastic zone volume under each excavation step; the difference in cumulative plastic zone volume between adjacent excavation steps is then calculated to obtain the plastic yield zone volume increment in each excavation sequence.
The volume increment sequence of the yield zone in each excavation step sequence is obtained through these statistics; MATLAB is used to fit a 4th-degree polynomial to the volume increment sequence of the different excavation step sequences, the formula is then used to calculate the volume mutation characteristic value Δ under each excavation step sequence, and the stability limit displacement of the surrounding rock under this factor level is finally obtained. The following data processing uses the polynomial curve fitting function in MATLAB to fit the data in the table. The fitting function used in MATLAB is p = polyfit(x, y, n), where x, y are the data points, n is the polynomial order, and the returned p is the polynomial coefficient vector ordered from the highest power to the lowest.
In the first step, the plastic zone volume increase value sequence of the 57-th excavation sequence is fitted and calculated.
The corresponding coefficients of the fourth-degree polynomial obtained by fitting the volume increment of the plastic zone at the 57-th excavation step are: a4 = −0.0012, a3 = 0.2, a2 = 6, a1 = 32.6. Substituting the above fourth-degree polynomial coefficients into the correlation formula gives the values of u and v, and substituting u and v into Eq. (10) gives Δ < 0. According to the judgment criteria above, the surrounding rock of the deep-buried soft rock is in a systemically unstable state at the 57-th excavation step. Following the dichotomy, the same fitting analysis is then applied to the volume increment of the plastic zone at the 29-th excavation step.
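Under the same assumed normalization as in the earlier sketch, the step-57 coefficients just quoted can be checked directly; with a4 = −0.0012, a3 = 0.2, a2 = 6 and a1 = 32.6 the resulting Δ comes out negative, which agrees in sign with the instability judgment reported for this step (the paper's own intermediate u and v values are not reproduced here, so only the sign of Δ is being compared).

```python
# Quick sign check of the step-57 coefficients under the assumed cusp reduction
a4, a3, a2, a1 = -0.0012, 0.2, 6.0, 32.6
b, c, d = a3 / a4, a2 / a4, a1 / a4
u = c - 3 * b ** 2 / 8
v = d - b * c / 2 + b ** 3 / 8
delta = 8 * u ** 3 + 27 * v ** 2
print(u, v, delta)   # delta < 0 -> "unstable", matching the judgment for step 57
```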
Fit the volume increment value sequence at the 29-th excavation step sequence.
The corresponding coefficients of the fourth-degree polynomial fitted to the volume increment of the plastic zone at the 29-th excavation step are: a4 = 0.004, a3 = −0.1494, a2 = −0.8591, a1 = 71.1651.
Substituting the above fourth-degree polynomial coefficients into the correlation formula gives the values of u and v, and substituting u and v into Eq. (12) gives Δ < 0. According to the judgment criteria above, at the 29-th excavation step the surrounding rock of the deep-buried soft rock is also unstable and damaged. Following the dichotomy, the same fitting analysis is then applied to the volume increment of the plastic zone at the 12-th excavation step.
Fitting the volume increment value sequence at the 12-th excavation step, the corresponding coefficients of the fourth-degree polynomial fitted to the volume increment of the plastic zone at the 12-th excavation step are: a4 = −0.0353, a3 = 0.8647, a2 = −8.6257, a1 = 89.4070.
Substituting the above fourth-degree polynomial coefficients into the correlation formula gives the values of u and v, and substituting u and v into Eq. (14) gives Δ > 0: at the 12-th excavation step, the surrounding rock of the deep-buried soft rock is in a stable state. Following the dichotomy, the same fitting analysis is applied to the volume increment of the plastic zone at the 20-th excavation step. The stress release of the surrounding rock after excavation is achieved by applying a certain proportion of the maximum unbalanced force at the nodes, in the opposite direction, within the range of the surrounding rock loosening zone. The calculation conditions are as follows: (1) Condition 1: bench (step) method excavation, initial support applied after 30% stress release, upper and lower bench lengths of 5 m, excavation stopped at 45 m, with 1 m of excavation per analysis step; (2) Condition 2: bench excavation, initial support applied after 30% stress release, upper and lower bench lengths of 5 m, the secondary lining applied 10 m from the tunnel face, excavation stopped at 45 m, with 1 m of excavation per analysis step.
Analysis of Calculation Results
The study of surrounding rock stability is mainly based on the displacement or stress changes to analyze its deformation laws and mechanical characteristics. This article discusses the deformation law and overall stability of the tunnel surrounding rock by simulating the spatial displacement distribution of the measuring points in different parts of the surrounding rock.
Deformation law of surrounding rock along the tunnel axis
Curve analysis of working condition 1: After excavation, the surrounding rock undergoes relaxation deformation and plastic slip, so that the unexcavated section of surrounding rock within 8 m (about 1 tunnel diameter) ahead of the face already shows convergent displacement, but the displacement value is very small. The side wall has a large convergence rate within 15 m (about 2 tunnel diameters) behind the tunnel face, indicating that with the large release, transfer and redistribution of stress after excavation, the surrounding rock undergoes a large displacement. The closer to the excavation face, the smaller the deformation rate, because the face exerts a certain three-dimensional spatial constraint on the deformation of the surrounding rock; this indicates that timely reinforcement near the excavation face during construction can inhibit the deformation of the surrounding rock within a certain range. Behind the excavation face, as the distance increases, the deformation rate gradually intensifies and the displacement rate reaches a peak at about 1.1 tunnel diameters; beyond that, as the distance increases further, the displacement rate becomes smaller and smaller and the deformation stabilizes at about 3 tunnel diameters, indicating that the farther from the excavation face, the weaker the constraint exerted by the face, and the relationship between the stress and deformation of the surrounding rock gradually transitions to a plane (two-dimensional) problem.
Curve analysis of working condition 2: After the secondary lining is applied, the cumulative convergence of the tunnel side wall is relatively small, only 5.35 cm. In the severely deformed section of the surrounding rock (within 2 tunnel diameters behind the excavation face), the displacement rate and displacement magnitude are also obviously smaller, and the closer to the secondary lining section, the smaller the deformation rate of the surrounding rock. This is because the surrounding rock is subject to the strong rigid constraint of the secondary lining, which effectively restricts the deformation of the surrounding rock and its development and provides a structural safety reserve to withstand the rheological pressure of the surrounding rock and the supporting load in the later stage. This emphasizes the timeliness of secondary lining support in the construction of soft rock tunnels and is of great significance for controlling the deformation and mechanical behavior of the surrounding rock.
Deformation law of surrounding rock on the tunnel section
The cumulative settlement of the vault is 7.93 cm and the cumulative convergence of the side wall is 11.7 cm; that is, in a high ground stress field dominated by the vertical direction, the horizontal displacement of the cavern is greater than the vertical displacement, which is consistent with the previous conclusions. After the upper bench excavation, the deformation rate of the surrounding rock is fast and the deformation value is large. When the calculation and analysis step reaches 1300, the construction of the lower bench is carried out. After the excavation of the lower bench, the deformation shows a sudden change, but its magnitude is not large (the sudden change in vault settlement is 0.3 cm and the sudden change in side wall convergence is 1.4 cm), indicating that the upper bench excavation is the main stage of surrounding rock deformation and stress release, and effective support should be provided in time after the upper bench excavation to control the deformation. After the excavation of the lower bench, the sudden change in side wall deformation is significantly greater than that in vault deformation, indicating that the excavation of the lower bench disturbs the horizontal displacement of the surrounding rock more strongly. It is therefore recommended to strengthen the support of the side wall and the arch foot during construction, for example with measures such as lock-foot anchor bolts (rods).
FUZZY LOGIC MATCHING ALGORITHM
4.1 Fuzzy logic matching algorithm calculation process
The fuzzy logic matching algorithm is used to obtain a mathematical model whose generalization performance is determined by three parameters: the penalty parameter C, the kernel parameter σ of the RBF kernel function, and the error value ε. The selection of these three parameters determines the generalization performance of the network, so optimizing their values is very important. In this paper, a decimal (real-coded) genetic algorithm is used to determine the optimal network parameters, improve computational efficiency, and solve the mathematical calculation equations.
The flow chart of using the decimal genetic algorithm to determine the optimal SVR network parameters is shown in Fig. 1.
Figure 1 Flow chart of genetic algorithm
The specific process of the genetic algorithm is: (1) Group the 18 sets of data obtained from the uniform experimental design in Chapter 3 and the limit displacements determined by the catastrophe theory in Chapter 4, setting 14 of them as evolutionary support vector regression learning samples, selecting 2 groups as validation samples, and setting the remaining 2 groups as test samples; (2) Initialize the genetic algorithm by randomly generating an initial population of support vector machine network parameters, with the generation counter set to g = 0 and the population size denoted Np; (3) Input the training samples and test samples into the support vector regression algorithm to complete the network training and prediction; (4) Pass the prediction results to the genetic algorithm one by one, and use the fitness function of the genetic algorithm to compute the fitness of each individual, so as to complete the evaluation of parameter fitness; (5) Determine whether the preset number of evolutionary generations has been reached. If it has, the algorithm ends, the individual with the highest fitness in the current population is returned, and decoding it gives the optimal support vector machine network parameters; if the required number of generations has not been reached, execute (6); (6) Use the selection operator to select individuals with higher fitness in the current population, and perform replication, crossover and mutation operations on them to generate an offspring population of SVR network parameters with Np individuals; the counter is set to g = g + 1 and the algorithm returns to (3); (7) Repeat steps (3)-(6) until the specified number of generations is reached, at which point the algorithm ends and the optimal support vector regression network parameters are returned.
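The loop described above can be sketched as follows, with Python and scikit-learn's SVR standing in for the MATLAB SVM toolbox used in the paper; the fitness definition 1/(1 + mean absolute error), the rank-selection parameter q = 0.1, the mapping gamma = 1/(2σ²), and the arrays X_train, y_train, X_val, y_val are all assumptions of the illustration rather than details taken from the paper.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
BOUNDS = np.array([[1e-3, 1000.0],   # penalty parameter C
                   [1e-3, 1000.0],   # RBF kernel parameter sigma
                   [1e-3, 1.0]])     # tube width epsilon

def fitness(ind, X_train, y_train, X_val, y_val):
    """Smaller validation error -> larger fitness (one simple choice)."""
    C, sigma, eps = ind
    model = SVR(kernel="rbf", C=C, gamma=1.0 / (2.0 * sigma ** 2), epsilon=eps)
    model.fit(X_train, y_train)
    err = np.mean(np.abs(model.predict(X_val) - y_val))
    return 1.0 / (1.0 + err)

def evolve(X_train, y_train, X_val, y_val, pop_size=20, generations=1000,
           p_cross=0.9, p_mut=0.05, q=0.1):
    """Evolve (C, sigma, eps) for an RBF SVR with a simple real-coded GA."""
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, 3))
    for _ in range(generations):
        fit = np.array([fitness(ind, X_train, y_train, X_val, y_val) for ind in pop])
        order = np.argsort(-fit)                      # rank 0 = fittest individual
        probs = q * (1.0 - q) ** np.arange(pop_size)  # rank-based (queue) selection
        probs /= probs.sum()
        parents = pop[order][rng.choice(pop_size, size=pop_size, p=probs)]
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):           # arithmetic crossover
            if rng.random() < p_cross:
                r = rng.random()
                children[i] = r * parents[i] + (1.0 - r) * parents[i + 1]
                children[i + 1] = (1.0 - r) * parents[i] + r * parents[i + 1]
        mask = rng.random(children.shape) < p_mut     # simple uniform mutation here
        fresh = rng.uniform(np.broadcast_to(BOUNDS[:, 0], children.shape),
                            np.broadcast_to(BOUNDS[:, 1], children.shape))
        children[mask] = fresh[mask]
        children[0] = pop[order[0]]                   # elitism: keep the best so far
        pop = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])
    fit = np.array([fitness(ind, X_train, y_train, X_val, y_val) for ind in pop])
    return pop[np.argmax(fit)]                        # best (C, sigma, eps) found
```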
Overview of Genetic Algorithm
(1) Genetic operator.
1) Selection operator:
This article uses the queue (rank-based) selection method. Its steps are as follows: first, the fitness of each individual is computed and the individuals are sorted in order of fitness; a probability is then assigned to each individual according to its rank and taken as its selection probability. The selection probability can be expressed as P_r = q'·(1 − q)^(r−1), where q is the probability of selecting the best individual, r is the rank of the individual (the best individual has r = 1), p is the population size, and q' is calculated as q' = q / (1 − (1 − q)^p). 2) Crossover operator: This article uses arithmetic crossover (hybridization). p_c is the probability of the crossover operation, defined such that an expected number of p_c·N chromosomes in the population undergo crossover. To find the parents of the crossover operation, the following is repeated for i = 1 to N: a uniform random number r is generated from the interval [0, 1]; if r < p_c, then X_i is selected as a parent. Denoting the two parents by X and Y, the offspring are X' = c·X + (1 − c)·Y and Y' = (1 − c)·X + c·Y, with c a random number in [0, 1]. The feasibility of the newly produced offspring is tested: if they are feasible, they replace the parents; if not, the feasible part is kept, new random numbers are generated and the crossover is repeated until two feasible offspring are produced.
3) Mutation operator:
This article uses non-uniform mutation. The essential difference between non-uniform mutation and uniform mutation is that the original component x_j is replaced by a non-uniform random number, where r_1 and r_2 are uniform random numbers in the interval [0, 1), G is the current generation, G_max is the maximum number of generations, and b is the shape parameter.
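The operator referred to above appears to correspond to the standard non-uniform mutation of real-coded genetic algorithms; a sketch of that standard form is given below, where the variable bounds x_min and x_max are assumptions of the illustration.

```python
import random

def nonuniform_mutate(x, x_min, x_max, G, G_max, b=2.0):
    """Standard non-uniform mutation of one component x within [x_min, x_max].

    Early generations (small G) allow large perturbations; as G approaches
    G_max the perturbation shrinks, so the search is refined locally.
    """
    r1, r2 = random.random(), random.random()
    step = 1.0 - r2 ** ((1.0 - G / G_max) ** b)
    if r1 < 0.5:
        return x + (x_max - x) * step   # perturb toward the upper bound
    return x - (x - x_min) * step       # perturb toward the lower bound
```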
(2) Fitness function. This article uses a fitness function based on the error between the predicted and actual values, where SVR(x_i) represents the SVR prediction of the stability limit displacement of the surrounding rock for the i-th sample during training, and y_i is the actual stability limit displacement of the surrounding rock for the i-th sample during training.
FUZZY LOGIC MATCHING ALGORITHM TO SOLVE THE MATHEMATICAL MODEL OF SURROUNDING ROCK STABILITY LIMIT DISPLACEMENT
This chapter establishes a mathematical prediction model of elementary functions on the basis of fuzzy logic matching algorithm theory.
Considering that the calculation equations solved by the evolutionary support vector regression algorithm are in the form of one-dimensional (single-output) variables, the mathematical calculation equations of the limit vault subsidence displacement and the limit horizontal convergence displacement are solved separately. MATLAB software and its toolbox are used to complete the calculation and analysis. The kernel function adopted is the RBF kernel; the value search range of the penalty parameter C is [0, 1000], the value search range of the kernel parameter σ is [0, 1000], and the search range of the error value ε is [0, 1]. The number of evolutionary generations is 1000 and the population size is 20. Queue selection, arithmetic crossover and non-uniform mutation are used, with a crossover probability of 0.9 and a mutation probability of 0.05. The fitness function of the genetic algorithm is the one described above. After the search by the genetic algorithm is completed, the optimal SVR model parameters of the limit vault subsidence displacement and the limit horizontal convergence displacement are obtained respectively.
Mathematical Model for Prediction of Limit Vault Subsidence Displacement
Using MATLAB and the SVM toolbox, the SVM type of Svmtrain (training modeling) is epsilon-SVR and the kernel function type is RBF by default; from this, the β values required to establish the mathematical model of the ultimate vault subsidence displacement are obtained.
From the radial basis function (RBF) kernel, it can be seen that if the explicit expression of the optimal SVR mathematical model is required, only the form of the kernel function, the optimized kernel parameters, and the values of β_i and b are needed. Substituting the β values into the formula gives the mathematical calculation equation for predicting the ultimate vault subsidence displacement for the stability of the surrounding rock of the deep-buried soft rock tunnel, Eq. (23), in which σ = 813.1 from the optimal SVR model of the ultimate vault subsidence displacement. Substituting the influencing factor data x_1, x_2, x_3, …, x_13, x_14 of the inspection groups into the formula, the predicted values of the limit vault subsidence displacement of the surrounding rock are obtained and compared with the numerical simulation values to check their reliability. It can be seen from the reliability results of the prediction model that the relative error between the predicted and calculated values of the vault subsidence limit displacement for the 17-th test group is 13.2%, with an actual worst value of 123 mm; for the 18-th test group, the relative error between the predicted and calculated values of the vault subsidence limit displacement is 21.2%, with an actual worst value of 34.9 mm.
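For reference, the explicit prediction function of an RBF-kernel SVR model of this kind has the standard general form

f(x) = Σ_{i=1}^{m} β_i · exp(−‖x − x_i‖² / (2σ²)) + b,

where x = (x_1, x_2, …, x_14) is the vector of influencing factors, the x_i are the support vectors and m is their number. This is the generic expression rather than the paper's fitted Eq. (23); σ = 813.1 is the kernel parameter reported above for the vault-subsidence model, while the specific β_i and b values of Eq. (23) are not reproduced here.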
The average relative error between the predicted and calculated values of the two inspection groups is 17.2%, which can fully meet the requirements of actual deep-buried soft rock tunnel engineering.
Limit Horizontal Convergence Displacement Prediction Mathematical Model
Using MATLAB and SVM toolbox, the SVM type of Svmtrain (training modeling) is epsilon-SVR, and the kernel function type is RBF by default, and the β value required for the mathematical model of the limit horizontal convergence displacement is established.
From the radial basis function (RBF) kernel, it is known that if the explicit expression of the optimal SVR mathematical model is required, only the form of the kernel function, the optimal kernel parameters, and the values of β_i and b are needed. The mathematical calculation equation of the limit horizontal convergence displacement prediction for the stability of the surrounding rock of the deep-buried soft rock tunnel can then be obtained as Eq. (24). Substituting the influencing factor data of the test group into Eq. (24), the predicted value of the limit horizontal convergence displacement of the surrounding rock is solved, and the value is compared with the numerical simulation calculation to check its reliability.
It can be seen from the reliability results of the prediction model that the relative error between the predicted value of the horizontal convergence limit displacement of the 17-th group in the 2 test groups and the calculated value is 17.7%, and the actual worst value is 210.5 mm; The relative error between the predicted value and the calculated value of the 18-th group of dome subsidence limit displacement is 19.9%, and the actual worst value is 23.3 mm. The average relative error between the predicted value of the two inspection groups and the calculated value is 18.25%, and the maximum relative error is 18.8%, which can fully meet the requirements of actual deep-buried soft rock tunnel engineering.
CONCLUSION
Based on the fuzzy logic matching algorithm, by establishing elementary-function mathematical calculation equations, this paper constructs a procedure that can be used to determine the stability limit displacement value of the surrounding rock of an actual deep-buried soft rock tunnel, aiming to provide theoretical support for the calculation of the stability limit displacement of the surrounding rock of deep-buried soft rock tunnels. The criterion given in the current code is not directly related to some engineering construction conditions or to the evolution process of surrounding rock deformation in some sections. For soft surrounding rock or soft rock under high ground stress, surrounding rock instability or large deformation may occur, but an effective assessment method is still lacking. The research and analysis in this paper will help to build a displacement prediction system for tunnel surrounding rock stability analysis to solve this complex problem. | 2022-02-21T16:16:14.632Z | 2022-04-15T00:00:00.000 | {
"year": 2022,
"sha1": "c006c58d553ed5a7382c641097d8bf9ac4cfd93b",
"oa_license": "CCBY",
"oa_url": "https://hrcak.srce.hr/file/395130",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "7f3d831be754a7c0050f9ec2124989586538b495",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": []
} |
234406840 | pes2o/s2orc | v3-fos-license | Drama-in-Education for Critical Historical Thinkers: A Case Study in the Greek Context
The case study presented in this article concerns the connection between drama-in-education and critical thinking in history, in order to highlight the importance of drama for the teaching of history in primary schools in Greece. The research plan adopted is quantitative and qualitative, and the research strategy applied is that of case study. For the purposes of this study, four scenarios based on drama-in-education techniques were designed and applied on a sample of forty-three primary students. The analysis of the findings shows that the students' understanding of historical contexts and of the objectives of historically active subjects was encouraged by drama-based instruction.
Introduction
In recent decades, there has been a growing trend among teachers, educators, and historians in teaching and producing new teaching materials, with an emphasis on content selection, interdisciplinary approaches, and active forms of teaching (Levstik & Barton 2011). These views treat the student as an active recipient and bearer of historical thought and are based on constructivist learning theories, with the aim of emphasizing the relationship between the learning of historical events and the construction of metacognitive skills for cultivating students' critical thinking (Kosti 2016, Saye & Brush 2002: 78, Davis 2001: 6, Foster 2001). The intention to revise history teaching methods is long-standing; visits to libraries and archives, museums and archeological sites, the use of oral testimonies, historical novels and cinema in the classroom, and Information and Communications Technology and the Internet are some of the new learning environments proposed for the development of students' historical thinking (Levstik & Barton 2011). Activities such as role playing, simulations, etc. are also suggested for delving into the meaning of historical terms and concepts such as historical time, historical interpretation and rational thinking in history (Shemilt 1984: 66-78).
Drama in Education and Historical Critical Thinking
In the context of critical historical thought, a special role could be played by Drama in Education, an exploratory and experimental approach to learning which includes interaction of thinking and knowledge, aesthetic and kinesthetic experiences, decision making, and understanding of multiple historical events (Kosti 2016, Booth 2012, Rantala 2011, O'Toole et al. 2009: 185, Heathcote 2008). It is a teaching approach that is based on imaginary "as-if" worlds and promotes learning through improvisation; it is closely linked to the curriculum of History (O'Toole et al. 2009: 107-108).
In this teaching context, students as active learners structure their knowledge based on their experiences (O'Toole et al. 2009: 105, Wagner 1985). Students have the opportunity to develop abstract thinking, to use their imagination and body through role creating, and to experience creative expression of feelings and ideas. Key elements of learning in Drama in Education contexts are mutual agreement between teacher and students, creative pretense and the pleasure of learning (Kosti 2016, Kondoyianni 2012, O'Toole et al. 2009: 11, Fairclough 1994, Fleming 1992).
The cooperative procedures of learning and the teacher's role of a cognitive guide or facilitator are key elements of Drama in the classroom; that is why the link between Drama and Historical Empathy seems to be very strong, as Fines and Verrier (1974) pointed out and drama teachers have persistently reported (Kempe 2013). Fines and Verrier (1974) argue that there is a close relationship between History and Drama, with specific results in making historical meaning by using respective sources of the past.
The use of Drama in Education as a teaching approach to critical historical thought has been embraced by historians themselves; Thompson (1983: 21-13) argues that the development of this skill can be achieved through Drama and role play, while Little (1983: 12-16) considers Drama in the history classroom to be invaluable, as it convinces students of the reality of the past and offers them opportunities to reflect on it through guided use of their imagination and sessions of action and discussion. Moving along the same lines, Goalen and Hendy (1993, 1992; Goalen 1995) have experimentally proven that the historical thinking of average-performance students develops when they approach the historical material through drama-in-education techniques. Drama activates the symbolic thinking of students, resulting in the formation of new meanings and the development of a holistic learning concerning declarative knowledge, as well as creative imagination, social skills and self-confidence, which are focal points for the construction of critical thinking (O'Toole et al. 2009: 81-89). All of these forms of learning are related to the concept of historical critical thinking or, as many theorists of history call it, historical empathy (Kosti 2016, Kosti et al. 2015, Nichol 2013, Dodwell 2013, Kondoyanni & Kosti 2011, Levstik & Barton 2011, Kempe 2011, Williams 2009, Belliveau et al. 2008).
The concept of historical empathy as an element of historical thought has exerted an important influence on the theory and teaching of history in school, which aims at understanding and interpretation of historical reality. And despite some existing opinions that empathy is hard to achieve in the context of the classroom, research has shown that students can indeed achieve satisfactory levels of empathy if provided with a wide range of activities (Kosti et al. 2015).
In fact, Lee, Dickinson and Ashby (Lee et al. 1997; see also Lee 1987, Lee and Ashby 2001) have developed a descriptive system that encompasses the stages of empathy primary and secondary education students go through; the present analysis will be based on these five stages, because this system has been most noted and widely used (Kosti et al. 2015, Barton & Levstik 2008).
• "The Divi Past". At this initial stage, students see the past as incomprehensible and essentially view people from the past as 'stupid' or 'thick' because they did things much differently from people nowadays.
• "Generalized Stereotypes". Here past actions are evaluated generically in terms of conventional stereotyped roles.
• "Everyday Empathy". Past actions are set against the cultural contexts of today's world, without consistently distinguishing between the older and the modern views and values.
• "Restricted Historical Empathy". The recognition is established that people in the past had a different level of knowledge than ours, different views and values, but no depth is reached in their representation and interpretation.
• "Contextual Historical Empathy". Past actions are eventually viewed as integrated in a wider context of views and values, and it is recognized that they form a network of goals reaching far beyond any appearances.
Drama in Education as a student-centered pedagogical approach seems to have a lot to offer in teaching history. In this context, the present study proposes a framework of history didactics which is based on the principles of drama in education and focused on critical historical thinking. The framework in question focuses on students of the primary level of education, where there has been no extended research, and students of the 5th grade, providing a fertile ground for research experimentations.
Research Restrictions, Aim and Method
A mixed approach, of quantitative and qualitative orientation, according to the principles of case study (Charmaz 1995, Greene 2007, Cohen 2011), was considered the most appropriate method for this research.
The research questions were: • Can critical historical thinking of primary school pupils be impacted by drama?
• If so, how can drama enhance the historical thinking of primary school pupils?
The qualitative approach responded to both of the research questions raised above, while the quantitative approach complemented and reinforced the qualitative in relation to the first question.
Four interventions with a group of pupils of the 5th grade of Primary School were conducted to collect qualitative and quantitative data. The interventions were implemented over a period of three months (February - April 2019). Each intervention lasted 40 minutes and took place in the History classroom. The educational program in the context of which this research was implemented was included in the curriculum and approved by the Primary Education Office of Larissa. The pupils were asked to fill in two questionnaires, one before and one after the interventions. Hence, the research design was a one-group pretest-posttest quasi-experimental design.
In this particular research project teacher and researcher functions overlapped. This means that the teacher took on different roles, such as the role of facilitator, supporter and mentor.
Collecting her own data as researcher helped speed up the process of assimilation in the classroom environment and allowed her to make better and quicker decisions (Mertler 2012: 27).
The sample members were forty-three (43): twenty-five (25) boys and eighteen (18) girls. Since the research was exclusively conducted in only one primary school in Larissa, the capital of the Thessaly region in Greece, it is restricted, without claims to generalization. Its significance lies in the fact that it attempts to investigate the relationship between drama and history in the Greek context, highlighting possible influences of drama on the critical historical thinking of primary school pupils, specifically while monitoring and describing the process of formulating causal relationships.
Quantitative Analysis
For the purpose of the research a questionnaire was composed to measure the critical historical thinking of the sample pupils. The questionnaire consisted of four open-ended questions related to historical sources and investigated whether students understood why things happened in history (Dickinson & Lee 1978). The first question simply asked what Constantine was seeking by the construction of Constantinople. The second question allowed subjects to show how far their first answer really represented their understanding; for this purpose students were asked why Justinian undertook such a large building project throughout the empire and especially the Hagia Sophia in Constantinople. The third question pressed harder by asking how it was possible to explain the fact that while the goal of the Crusaders was the liberation of the Holy Land from the pagans, the Fourth Crusade ended in the occupation and looting of Constantinople, a city with a Christian population. The fourth question was intended to overcome the limitations of the other questions by asking about the reasons that led to the Fall of Constantinople.
The category system used to evaluate the responses was that of the above-mentioned five stages of historical empathy. Each stage was graded with the corresponding grade. Some sample answers are cited below: • Stage 1 -Grade 1 (The Divi Past): "Crusaders were bad people; they only knew how to kill". Answer to question 3 -Student with lower academic profile (pre-test).
• Stage 2 -Grade 2 (Generalized Stereotypes): "Hagia Sophia in Constantinople was built to be a miracle". Answer to question 2 -Student with upper academic profile (pretest).
• Stage 3 -Grade 3 (Everyday Empathy): "Everyone would like to conquer Istanbul, because people love money and grandeur -and Constantinople was the richest city of its time. Whoever owned that city was the ruler of the world". Answer to question 3 -Student with middle academic profile (post-test).
• Stage 4 -Grade 4 (Restricted Historical Empathy): "Nothing remains the same in history. Everything has its ups and downs. This was done with Istanbul". Answer to question 4 -Student with upper academic profile (post-test).
The questionnaire was piloted and tested for its validity using the Cronbach alpha statistical criterion, with acceptable values (see Table 1), and was given to the sample pupils before and after the experimental manipulation. Data were processed using SPSS v.20 and the procedure followed the principles of descriptive and inferential statistics. At the same time, the randomness of the sample was checked and confirmed, while normality was checked but not confirmed by the Shapiro-Wilk test. This was the reason for using the nonparametric Wilcoxon Signed Ranks Test for related samples and the Mann-Whitney U Test for independent samples for the conclusions, according to the principles of inferential statistics.
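As an illustration of this workflow, the following minimal sketch (Python/SciPy standing in for the SPSS procedure; all score arrays are hypothetical placeholders rather than the study's data) computes Cronbach's alpha and runs the Shapiro-Wilk, Wilcoxon and Mann-Whitney tests:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores has one row per question, one column per pupil."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[0]
    item_var = item_scores.var(axis=1, ddof=1).sum()
    total_var = item_scores.sum(axis=0).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical empathy-stage scores (1-5) on the four questions for five pupils
q_scores = np.array([[2, 1, 3, 2, 2],
                     [3, 2, 3, 3, 2],
                     [2, 2, 4, 3, 1],
                     [3, 1, 3, 4, 2]])
print(cronbach_alpha(q_scores))

# Hypothetical total pre- and post-test scores for the same ten pupils
pre = np.array([8, 6, 9, 11, 7, 8, 5, 12, 9, 7])
post = np.array([11, 8, 12, 14, 10, 10, 7, 15, 11, 10])

print(stats.shapiro(post - pre))             # normality check (Shapiro-Wilk)
print(stats.wilcoxon(pre, post))             # related samples: pre vs post
# Independent samples, e.g. lower- vs upper-performance pupils at post-test:
print(stats.mannwhitneyu(post[:5], post[5:]))
```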
The effect of drama was clear on the variable of "Historical thinking", as the sample pupils showed statistically significant improvements after the manipulation compared to the beginning (see Table 2). Looking more closely at the broad bands of pupils' ability (as defined by school grades), statistically significant improvements were shown in all academic performances: lower, middle and upper (see Table 3).
After the narration, the students created a "Still-Image" concerning the roles of citizens they had taken on, without movement and speech (Avdi & Hatzigeorgiou 2007: 86; Papadopoulos 2010: 247). Then a student sat on the "throne" of Constantine the Great and read from a piece of paper in the form of a parchment concerning the transfer of the capital to Byzantium. The search for the reasons why it was considered necessary to move the capital was carried out through the "Hot-Seating" activity.
The topic of the third intervention was the Fall of Constantinople. The students, in the role of the crusaders, walked to Venice and prepared themselves for the fourth crusade to gain the spoils and wealth they had been promised. The development of the drama took place through the following techniques: "Teacher in Role", in which the teacher as Alexios Angelos spoke to the crusaders (students) about his desire to occupy the city to regain his throne and about the spoils and the wealth he would offer them if they helped him regain power; "Team Dialogue" (Alkistis 2008: 262), featuring the meeting of the crusaders to arrive at a relevant decision; and "Consciousness Alley" (Avdi & Hatzigeorgiou 2007: 92), during which a student in the role of admiral of the crusaders went through the corridor of consciousness and made the final decision: "Let's conquer Constantinople". The evaluation was carried out through "Writing in Role", in which the students, in the role of crusaders after the Fall, reconsidered the destruction, looting and massacres they had committed and wrote in their diary a note on whether their decision was correct or not.
As to the fourth intervention, the Fall of Constantinople by the Ottoman Turks, a movement activity was initially carried out with the free walking of the students, in the role of Serbs, in the appropriately arranged classroom area. During the activity, the teacher in the role of "messenger" announced to the Serbs (students) the news of the Fall of Constantinople so that they could think about and express their views on the event. The students then took on the roles of neighboring peoples of the Byzantine Empire, formed the respective groups and presented, through the "Still-Image" technique, the impact of the Fall of Constantinople on other peoples, giving a title to the "image" they created. Finally, a reflection took place through the "Writing in Role" technique, as students, as citizens of neighboring peoples, wrote a verse about Constantinople.
Qualitative analysis
The findings of the quantitative analysis were further explored through the qualitative approach, in which the interventions performed on the sample group were analyzed. These interventions focused on the founding of Constantinople, the Church of Hagia Sophia (Holy Wisdom Church), the Fall of Constantinople by the Crusaders, and the Fall of Constantinople by the Ottoman Turks. Qualitative analysis was performed through thematic content analysis in order to analyze in detail the data that emerged from the interventions and to structure it into categories and thematic fields (Robson 2007: 416-418, Sarafidou 2011).
The findings of the qualitative analysis are consistent with the results of the quantitative analysis, as the data analysis showed that the techniques of Drama in Education contributed to the participating pupils' critical historical thinking, understanding and empathy.
In particular, in the first intervention, about the establishment of Constantinople, it turned out that the techniques of Drama aroused the interest of students and positively activated their involvement in the learning process. Based on the data, it emerged that the students were at the second stage of empathy ("Generalized Stereotypes"). Specifically, two categories of thematic analysis emerged from the analysis of the texts produced by the students in "Writing in Role". As for historical understanding, it was found that students understood the aims and objectives of historically active subjects (Constantine the Great) who acted in the Byzantine years and also understood that their actions were influenced by many factors and were the result of complex processes: • Student 5 (middle academic performance): "I moved the capital from Rome to Byzantium because Rome was very far from the outermost areas; it was for many years the worship center of the 'Roman pantheon', a place where conflicts took place between hierarchs and persecutions of Christians occurred." Sophia, a Domed-Basilica. It is built in such a way that it will last for many centuries, it will be a symbol of Christianity and many people will admire it".
The third intervention dealt with the Fall of Constantinople. Based on the data, it emerged that the students were in the third stage of empathy ("Everyday Empathy"). Specifically, analysis of the documents produced by the students showed that historical empathy was achieved, as they recorded their views and feelings as crusaders after the fall of the city and it seemed that they understood the aims of the historically active subjects and of their actions: • Student 15 (middle academic performance): "My dear diary, our decision to invade the city was not right. We wanted glory and money, but many human lives were lost and, worst of all, we plundered Hagia Sophia." • Student 16 (upper academic performance): "My dear diary, I realized that the decision I had made was not the right one. But I did it for the money and gifts that Alexios Angelos would give us; but the only thing he wanted was to take the throne again. We did not even respect churches and sacred icons and holy vessels." • "Fear, pain, sorrow fell on the queen city.
I'm scared, I'm crying, I'm leaving, I cannot stand it.
Our city fell into foreign hands, into a foreign embrace." • Student 24 (upper academic performance): "Oh! My city, my beloved Queen!!! You fought hard, but slavery came from the back door.
Tuesday, May 29, 1453 -day of evil." • Student 25 (lower academic performance): "The Queen City is lost and does not return and if she returns she will not be like before." • Student 26 (upper academic performance): "Tuesday, May 29, 1453 / A city is lost forever and will never return. / I felt sad because so many thousands of people were lost. The Queen City is lost FOREVER !!!"
Concluding note
The aim of this article is to argue that drama-in-education can be an effective approach to enhancing the historical thinking of primary school students in the Greek context. For this purpose, a mixed-method quantitative and qualitative research approach was used according to the principles of case study. As previously reported, the findings presented here are limited, as the study is a small-scale piece of research in only one school. Hence the conclusions listed below can be seen as merely indicative and cannot be generalized.
As the quantitative and qualitative analyses have highlighted, students demonstrated a development in their critical thinking about the past. The Drama in Education approach facilitated this achievement, since it offered a learner-centered environment in which all students, regardless of their individual academic performance, were able to develop critical historical thinking and felt free to express their opinions without the fear of being inaccurate in their analyses of the past. This is because drama has the potential to awaken students to the real dimension of the past.
Perhaps the most important conclusion drawn from these four instructional interventions is that the students embraced Drama with enthusiasm and responded satisfactorily to the challenges they were presented with. These findings could be regarded as a very important starting point for educational research in Greece, with a view to possible proposals for the inclusion of Drama-in-Education techniques in the reshaping of the primary school history curriculum. Bibliography | 2021-05-13T00:03:13.124Z | 2020-12-31T00:00:00.000 | {
"year": 2020,
"sha1": "1685be8eed26bad0a558b473325f14c10b59afe3",
"oa_license": "CCBYNC",
"oa_url": "https://journals.ucc.ie/index.php/scenario/article/download/scenario-14-2-2/pdf-en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8c34d8503f7b5f09819bdcb27865930347426a7d",
"s2fieldsofstudy": [
"Education",
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
220335796 | pes2o/s2orc | v3-fos-license | A rare complication related to pulmonary vein isolation: intramural atrial hematoma
CLINICAL IMAGE
A rare complication related to pulmonary vein isolation: intramural atrial hematoma
Catheter ablation (CA) is a routinely performed procedure in patients with paroxysmal and persistent atrial fibrillation (AF). It has a high rate of success and a relatively low risk of dangerous complications, which are reported to occur in 5% to 7% of patients.1 A rarely observed complication is an intramural atrial hematoma (IAH); however, its exact prevalence, risk factors, and recommended treatment schemes have not been determined. Here, we present our approach to a case of CA-related IAH.
A 52-year-old female underwent pulmonary vein isolation (PVI) for highly symptomatic (European Heart Rhythm Association score III) paroxysmal AF, which was diagnosed 5 months prior to the procedure. Her medical history included a quadricuspid aortic valve, moderate aortic regurgitation, hypertension, and osteoarthritis. Standard preprocedural transesophageal echocardiography ruled out thrombus within the heart chambers. The left atrium was slightly enlarged, with a diameter of 4.5 cm.
Circumferential point-by-point radiofrequency ablation with the use of a contact force control cool-tip catheter at 35 W (ThermoCool SF, Biosense Webster, Irvine, California, United States) was performed. The maximal ablation index in the region where the hematoma occurred was 420, while the maximal contact force was 32 g. Neither sudden impedance spikes nor steam pops were observed during the procedure. Heparin (11 500 IU) was administered during the procedure. In total, 77 radiofrequency applications were performed with a total application time of 32 minutes and 28 seconds. Postprocedural transthoracic echocardiography showed no significant pericardial effusion. The patient was discharged home in generally good condition.
Eight days after the procedure, the patient was readmitted due to nonspecific chest pain radiating to the interscapular region, dyspnea, nonproductive cough, and decreased exercise tolerance. Transthoracic echocardiography showed IAH on the inferior part of the lateral and posterior wall of the left atrium (FIGURE 1A) with pericardial effusion of a maximum thickness of 1 cm. Computed tomography (CT) confirmed this diagnosis and showed narrowing of the left inferior pulmonary vein, which was not occluded (FIGURE 1C). IAH should be differentiated from atrial thrombus; however, thrombus was ruled out due to the presence of atrial wall rupture, a typical finding in IAH. The department's Heart Team was consulted and a conservative approach was agreed upon due to the patient's stable condition. Control transthoracic echocardiography (FIGURE 1B) and CT (FIGURE 1D) were performed after a month and showed IAH with a maximal diameter of 26 mm (short heart axis). The subsequent 2-month follow-up CT showed IAH with a maximal diameter of 17 mm (short heart axis).
IAH is an extremely rare event, but it is a possibly fatal complication of CA.2 Currently, there are no guidelines regarding the treatment of this condition. Previous case reports present management depending on the patient's clinical condition.3,4 The 2 main approaches are surgical intervention or close clinical surveillance. Fukuhara et al2 suggested that most patients require surgery, even if no hemodynamic collapse occurred. However, in another publication, Fukuhara et al5 described IAH resolutions in patients who did not undergo a surgical intervention. Similarly, the presented case demonstrates a successful conservative approach. Therefore, careful consideration should be made before selecting the treatment approach in patients with CA-related IAH.
CONFLICT OF INTEREST None declared.
OPEN ACCESS This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material, provided the original work is properly cited, distributed under the same license, and used for noncommercial purposes only. For commercial use, please contact the journal office at pamw@mp.pl. | 2020-07-05T13:05:35.472Z | 2020-07-04T00:00:00.000 | {
"year": 2020,
"sha1": "da90d5d60927cfa47247dbdc616177a7824f6a3e",
"oa_license": null,
"oa_url": "https://www.mp.pl/paim/en/node/15476/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9c588c6ca4696ad856b65e45b349885acc131d74",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118564508 | pes2o/s2orc | v3-fos-license | Dipolar Drag in Bilayer Harmonically Trapped Gases
We consider two separated pancake-shaped trapped gases interacting with a dipolar (either magnetic or electric) force. We study how the center of mass motion propagates from one cloud to the other as a consequence of the long-range nature of the interaction. The corresponding dynamics is fixed by the frequency difference between the in-phase and the out-of-phase center of mass modes of the two clouds, whose dependence on the dipolar interaction strength and the cloud separation is explicitly investigated. We discuss Fermi gases in the degenerate as well as in the classical limit and comment on the case of Bose-Einstein condensed gases.
I. INTRODUCTION
In recent years atomic and molecular dipolar gases have attracted a lot of interest since the long-range and the anisotropic nature of the interaction is expected to give rise to new important features at both the microscopic and macroscopic level. These include, among others, novel effects in the mechanism of the expansion and of the collective modes [1] and in the structure of the superfluid [2] and normal [3] phase, new exotic phases of crystalline nature (see e.g., the recent work [4]) and new schemes for quantum computation in the presence of optical lattices [5]. Some of these effects, in particular those concerning the expansion and the collective modes, have been already experimentally observed in magnetic dipolar atomic gases [6]. The recent progress in the realization of gases of electric polar molecules, where the effect of the dipolar force is particularly strong, is expected to open new challenging frontiers in this area of research (see [7][8][9] and references therein).
The aim of the present work is to propose a drag experiment induced by the long-range nature of the dipolar interaction. We consider an atomic or molecular gas harmonically trapped in a double-well configuration such that the overlap between the two clouds and the corresponding tunneling effect can be neglected (see Fig. 1). The only force acting between the two gases is of long-range nature (here and in the following we assume that the dipoles are oriented in the direction orthogonal to the discs, i.e., along the z axis of Figure 1), and we study how the out-of-phase transverse dipole mode is affected by the long-range interaction. Displacing one of the two clouds from its equilibrium position and releasing it will excite both the in-phase (center-of-mass) and the out-of-phase dipole modes. On a time scale fixed by the inverse of the frequency difference between the two modes, the center-of-mass motion of the first cloud will be transferred to the second one. We call this effect "dipolar drag" in analogy to the well-known Coulomb drag (see, e.g., [10]) exhibited by electrons in uniform bilayer systems [11].
II. DIPOLAR DRAG OF THE CENTER-OF-MASS MOTION
We consider a gas confined by the cylindrically symmetric harmonic potential

V_i(r) = (1/2) m ω_⊥² [x² + y² + λ² (z ∓ z_0)²],  i = 1, 2,  (1)

where 2z_0 is the distance between the minima of the potential along z, λ = ω_z/ω_⊥ is the ratio between the axial and transverse trapping frequencies, and we consider pancake configurations, i.e., λ ≫ 1. Let x_i be the center-of-mass coordinate along x of the i-th cloud. The equations of motion can be written as

ẍ_i = −ω_⊥² x_i − α (x_i − x_j),  j ≠ i,

where α is the coupling between the two bare center-of-mass modes. The eigenfrequencies of the previous equation are simply ω_in = ω_⊥ for the in-phase sloshing mode and ω_out = ω_⊥ √(1 + 2α/ω_⊥²) for the out-of-phase sloshing mode. Thus, in order to determine α, we just need to determine the splitting ω_out − ω_⊥ for the dipolar coupled system. Once the frequency ω_out is known, we can determine quantitatively the evolution of the system as described by these equations of motion. In Fig. 2 the motion of the coupled clouds for a value of ω_out = 1.1 ω_⊥ (see Sec. III below) is shown. The beating of the motion is a direct measurement of the out-of-phase mode frequency, since the time at which the initially displaced cloud stops in the center is simply t̄ = π/(ω_out − ω_⊥).
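To make the beating picture concrete, the following minimal Python sketch (added here for illustration; it is not part of the original paper) integrates two linearly coupled center-of-mass oscillators consistent with the mode frequencies quoted above, with the coupling α chosen so that ω_out = 1.1 ω_⊥ as in Fig. 2. All numerical values are illustrative placeholders.

import numpy as np

# Illustrative parameters (not taken from the paper's tables)
omega_perp = 2 * np.pi * 1.0                  # transverse trap frequency (arbitrary units)
omega_out = 1.1 * omega_perp                  # out-of-phase mode frequency, as in Fig. 2
alpha = 0.5 * (omega_out**2 - omega_perp**2)  # from omega_out^2 = omega_perp^2 + 2*alpha

def rhs(t, y):
    # y = [x1, v1, x2, v2]: coupled center-of-mass equations of motion
    x1, v1, x2, v2 = y
    a1 = -omega_perp**2 * x1 - alpha * (x1 - x2)
    a2 = -omega_perp**2 * x2 - alpha * (x2 - x1)
    return np.array([v1, a1, v2, a2])

# RK4 integration: cloud 1 displaced, cloud 2 initially at rest
y = np.array([1.0, 0.0, 0.0, 0.0])
t, dt = 0.0, 1e-3
t_bar = np.pi / (omega_out - omega_perp)      # predicted transfer (beat) time
history = []
while t < 1.2 * t_bar:
    history.append((t, y[0], y[2]))
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

hist = np.array(history)
early = np.abs(hist[hist[:, 0] < 0.1 * t_bar, 1]).max()
late = np.abs(hist[np.abs(hist[:, 0] - t_bar) < 0.1 * t_bar, 1]).max()
print("predicted transfer time t_bar =", t_bar)
print("max |x1| near t = 0:    ", early)
print("max |x1| near t = t_bar:", late)

Running the sketch shows the amplitude of the initially displaced cloud collapsing near t_bar while the second cloud takes over the oscillation, which is the drag signature discussed above.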
In the following we calculate the frequency ω out as a function of the dipolar interaction strength and the distance between the two clouds. We will also discuss how the equation of state of the gas affects such a frequency. The frequency ω out was recently calculated by Huang and Wu [12] in the case of a magnetic dipolar Bose gas, using a technique very similar to the one employed in the present work. For this reason we mainly focus on the case of Fermi gas. Moreover the Fermi statistics allows for an easier realization of cold gases of hethero-nuclear molecules carrying an electric dipole moment (e.g., the recent experiment [13]) so that the strength of the dipolar force can be much larger.
III. TWO COUPLED FERMI GASES WITH DIPOLAR INTERACTION
The system we want to study is made of two clouds, i = 1, 2, of dipolar Fermi gas in the normal phase confined by the potential Eq. (1). Moreover we assume that the dipoles are oriented along the z axis by an electric field.
The interaction potential between two dipoles (d_1 = d_2 = d) has the standard form

V_dd(r_1 − r_2) = d² (1 − 3 cos²θ) / |r_1 − r_2|³,  (4)

where θ is the angle between d, i.e., the z direction, and r_1 − r_2. We study the system by means of the following energy functional of the cloud densities n_i, i = 1, 2,

E[n_1, n_2] = Σ_{i=1,2} (E^i_kin + E^i_trap + E^i_dip) + E^{12}_dip,  (5)

where we introduce the kinetic energy E^i_kin = ħ²/(20mπ²) ∫ dr [6π² n_i(r)]^{5/3}, calculated within the local density approximation, the potential energy E^i_trap = ∫ dr [n_i(r) V^i_trap], and the intra- and inter-cloud dipolar energies

E^i_dip = (1/2) ∫ dr dr′ n_i(r) V_dd(r − r′) n_i(r′),  E^{12}_dip = ∫ dr dr′ n_1(r) V_dd(r − r′) n_2(r′).  (6)

In the energy functional Eq. (5) we have not included the intra-cloud exchange energy. We safely neglect it, since for the pancake-like configurations, which we are mainly interested in, the direct term is the dominant effect (see, e.g., [14]). In order to study the center-of-mass oscillations in the transverse direction (see Fig. 1), we consider a scaling transformation for the densities of the type n_i(x, y, z) → n_i(x + ε_i, y, z). From the variation of the energy functional (5) we get, as expected, two modes. The in-phase mode is not affected by the dipolar interaction and has a frequency ω = ω_⊥ equal to the harmonic trapping one. Conversely, the out-of-phase mode is affected by the dipole interaction and is characterized by the frequency ω_out given by Eq. (7), where N is the total atom/molecule number.
Assuming the two clouds are identical, we can obtain their densities by means of a simple variational Gaussian ansatz, Eq. (8), where W_⊥ and W_z = W_⊥/κ are the radial and axial widths of a single cloud and 2L is the distance between the clouds, which, for our parameters, namely a strong confinement in the z direction, is very close to the distance 2z_0.
Inserting the Gaussian ansatz Eq. (8) into the single-cloud energy and minimizing it, using the standard notation (see, e.g., [15]), yields the widths of the clouds. Once the densities are known, we are in a position to calculate the frequency ω_out. The problem has many parameters and, to be concrete, we consider a gas of diatomic 40K87Rb molecules and reasonable experimental values for the number of molecules and for the trapping potentials (see Table I). The results for the out-of-phase mode frequency as a function of the clouds' distance and for different values of λ are reported in Fig. 3.
We see that for small enough distances, the larger the cloud (larger λ for fixed N), the smaller the effect. This can be easily understood in terms of the potential exerted by a single disk of radius W_⊥ on a probe dipole: at a distance z ≪ W_⊥ the potential decays with W_⊥, which is a general result independent of statistics. On the other hand, the asymptotic behavior of the frequency shift at large distance is ∝ 1 + C/L⁵, with C a constant, which is the result one immediately obtains by considering just two trapped dipoles. It can also be easily shown that, considering spherical clouds with W_⊥ = W_z = W in Eq. (7), the frequency of the out-of-phase dipole mode can be written in terms of the function h(y) = e^(−2y²) (4y⁵ + 6y³ + (9/2)y) − (9/2)√(π/2) Erf(√2 y), which approaches a constant for large values of y.
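As a consistency check on the quoted 1 + C/L⁵ behavior, the large-distance limit can be recovered by treating each cloud as a single trapped dipole; the short calculation below is a sketch added here (it works out the text's statement about two trapped dipoles and is not an equation of the paper), written for two dipoles separated by R along z and displaced relative to each other by x in the transverse plane.

\[
U(x)=d^{2}\,\frac{1-3\cos^{2}\theta}{r^{3}}
      =d^{2}\,\frac{x^{2}-2R^{2}}{\left(R^{2}+x^{2}\right)^{5/2}},
\qquad r=\sqrt{R^{2}+x^{2}},\quad \cos\theta=\frac{R}{r}.
\]
\[
U''(0)=\frac{12\,d^{2}}{R^{5}},
\qquad
\omega_{\mathrm{out}}^{2}
   =\omega_{\perp}^{2}+\frac{2\,U''(0)}{m}
   =\omega_{\perp}^{2}\left(1+\frac{24\,d^{2}}{m\,\omega_{\perp}^{2}R^{5}}\right),
\]

so that, with R of the order of the cloud separation 2L, the frequency shift indeed scales as 1 + C/L⁵ at large distance.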
The results reported in Fig. 3 show that the frequency shift can be large enough, for the chosen parameters, to be experimentally measurable. Moreover, we checked that the results are essentially the same using Thomas-Fermi profiles instead of the Gaussian ansatz Eq. (8).
It is useful to compare the above predictions for the frequency shifts, calculated for a zero-temperature Fermi gas, with the ones holding for a Bose-Einstein condensed gas or for a classical thermal configuration. For this purpose one can still use Eq. (7) with the proper density profiles (an inverted parabola for a BEC and a Boltzmann distribution for a classical gas). As emerges from Eq. (7), the effect is amplified for smaller radial sizes, where the gradient of the density is larger. One then understands that a Bose gas interacting with a moderate value of the scattering length will provide larger shifts with respect to both a Fermi gas and a thermal configuration. The shifts for a Bose gas were investigated in [12] where, however, magnetic dipolar atomic gases were considered, which are characterized by a significantly smaller value of the dipolar coupling constant d² in Eq. (4). At present the most promising perspectives for realizing electric dipolar molecules, where the effect of the dipolar interaction is particularly strong, concern the fermionic species, which ensure better stability conditions and which are already available in the thermal regime.
In Fig. 4 we report the predictions for the frequency shifts exhibited by a thermal configuration calculated at the temperature T = T_F, where T_F ≡ ħω_⊥(6Nλ)^{1/3}/k_B is the Fermi temperature and k_B is Boltzmann's constant. We simply used the Gaussian density profiles of Eq. (8), but with the radii given by the Boltzmann expression W_⊥² = 2k_B T/(mω_⊥²) and κ = λ. The corresponding parameters are given in Table II. Comparison with the predictions reported in Fig. 3 shows that the effect, for the same trapping conditions and number of particles, is indeed smaller than for a degenerate Fermi gas, since the thermal radii are larger and the densities smaller than the ones of the degenerate configuration.
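For orientation, the degeneracy and thermal scales entering the comparison above can be evaluated with a short script; a minimal sketch is given below, assuming placeholder values for the molecule number and trap frequencies (the actual values are those of Tables I and II, which are not reproduced in this text).

import numpy as np

hbar = 1.054571817e-34               # J s
kB = 1.380649e-23                    # J/K
m = (40 + 87) * 1.66053906660e-27    # approximate mass of a 40K87Rb molecule, kg

# Placeholder parameters (illustrative only, not the paper's Table I/II values)
N = 1.0e4                            # number of molecules
omega_perp = 2 * np.pi * 100.0       # transverse trap frequency, rad/s
lam = 10.0                           # lambda = omega_z / omega_perp (pancake limit)

# Fermi temperature of the trapped gas: T_F = hbar*omega_perp*(6*N*lam)**(1/3)/kB
T_F = hbar * omega_perp * (6 * N * lam) ** (1.0 / 3.0) / kB

# Thermal (Boltzmann) radial width at T = T_F: W_perp**2 = 2*kB*T/(m*omega_perp**2)
W_perp_thermal = np.sqrt(2 * kB * T_F / (m * omega_perp**2))

print("T_F ~ %.0f nK" % (T_F * 1e9))
print("thermal radial width at T = T_F ~ %.1f micrometers" % (W_perp_thermal * 1e6))

With these placeholder numbers the gas sits in the sub-microkelvin regime and the thermal cloud is roughly ten micrometers wide, illustrating why the thermal configuration, being more dilute, yields smaller frequency shifts than the degenerate one.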
IV. CONCLUSIONS
We have proposed a drag experiment between two nonoverlapping atomic/molecular clouds (see Fig. 1 and Fig. 2) to test the long-range nature of the dipolar potential. The method is independent of quantum statistics and holds for both degenerate and thermal gases. This effect corresponds to the trapped version of the famous Coulomb drag exhibited by electrons in uniform bilayer systems. The realization of such a drag experiment would provide a direct and easy signature of the long-range nature of the dipole interaction. | 2011-05-02T15:18:50.000Z | 2011-05-02T00:00:00.000 | {
"year": 2011,
"sha1": "1be0da5c5454022119cb3719d51bf8b63ef0a4c9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1105.0353",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1be0da5c5454022119cb3719d51bf8b63ef0a4c9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255261900 | pes2o/s2orc | v3-fos-license | Short-Chain Fatty Acids as Bacterial Enterocytes and Therapeutic Target in Diabetes Mellitus Type 2
Diabetes mellitus is a disease with multiple gastrointestinal symptoms (diarrhea or constipation, abdominal pain, bloating) whose pathogenesis is multifactorial. The most important of these factors is the enteric nervous system, also known as the "second brain", a part of the peripheral nervous system capable of functioning independently of the central nervous system. The enteric nervous system can be modulated by short-chain fatty acids (SCFAs), which are bacterial metabolites of the intestinal microbiota. In addition, these acids provide multiple benefits in diabetes, particularly by stimulating glucagon-like peptide 1 and insulin secretion. However, it is not clear what type of nutraceuticals (probiotics, prebiotics, and alimentary supplements) can be used to increase the amount of short-chain fatty acids and achieve the beneficial effects in diabetes. Although several studies demonstrate that the gut microbiota modulates the activity of the ENS and may thus have a positive effect in diabetes, further studies are needed to confirm this effect. This review outlines the most recent data regarding the involvement of SCFAs as a disease-modifying agent in diabetes mellitus type 2. For an in-depth understanding of the modulation of gut dysbiosis with SCFAs in diabetes, we provide an overview of the interplay between gut microbiota and ENS.
Introduction
Diabetes mellitus is a disease with multiple gastrointestinal symptoms (diarrhea or constipation, abdominal pain, bloating) whose pathogenesis is multifactorial [1]. The most important of these factors is the enteric nervous system (ENS), also known as the "second brain"; a part of the peripheral nervous system capable of functioning independently of the central nervous system [2,3].
ENS is a part of the peripheral nervous system contained in the intestinal wall that is able to function independently of the central nervous system (CNS), being organized into two plexuses: myenteric and submucosal; Auerbach's and Meissner's plexus, respectively [2,4,5]. ENS modulates multifold functions of the gastrointestinal tract to maintain intestinal homeostasis, such as intestinal permeability, immune function, gastrointestinal motility, or the maintenance of mucosal integrity [5].
Extrinsic and intrinsic factors may influence gene expression and ENS development and activity. ENS development may be affected by maternal nutrition, deficiency in vitamin A or folate, or foetal exposure to neuromodulatory drugs (antidepressants, antipsychotics, anti-epileptics, anti-cholinergics) that change neurons' activity and postnatal subtype and function [4]. Factors involved in guiding the progenitor cells, promoting the proliferation and differentiation of enteric neural crest-derived cells (ENCCs), and establishing the number of neurons in adulthood have also been described [2].
The Organization of ENS
The gut wall comprises four layers, the mucosal layer with a mucosal barrier, submucosal, muscular, and serosa layers. The mucosal barrier is composed of the mucin layer, the epithelial cell layer, and the apical junction complex [3]. ENS consists of the submucosal plexus situated beyond the epithelial cell layer and the myenteric plexus localized between the longitudinal and circular muscle layers. These two layers are connected to each other through neuronal projections that also connect to non-neuronal targets, such as immune cells and epithelial cells. The first ENS components that are close to the gut lumen are glial cells [4]. A short overview of gut wall layers is provided in Figure 1.
IPANs are key components of the ENS that modulate enteric activity and assure adequate bowel function. IPANs consist of Dogiel type I and type II morphology neurons, cholinergic myenteric neurons, nitrergic myenteric neurons, and intestinofugal neurons. Dogiel type I morphology neurons were described as responsive to mechanical stimuli, but with rare ramifications to the mucosa. Dogiel type II morphology neurons consist of Enteric glial cells (EGCs) are defined by all peripheral neuroglia that are associated with the enteric neurons. Their main function is to assure local homeostasis using bidirectional communication. Based on their location in the intestinal wall, six main types of EGCs have been described. Intraganglionic glial cells are associated with neuronal bodies in the myenteric and submucosal plexus; the submucosal glia cells mediate the secretomotor function of the submucosal neurons, whereas the myenteric glial cells are involved in myenteric neurons trophic support, modulation of oxidative stress and neuroinflammation, neurogenesis and gliogenesis. Interganglionic glial cells surround the nerve bundles and mediate the signal propagation between myenteric ganglia. Extraganglionic glial cells include glia associated with nerve fibers in the two plexuses, but outside the ganglia, glia localized in the intestinal mucosa, and glia associated with nerve fibers in the muscle layer. The mucosal glial cells are involved in epithelial cell maturation and have potential roles in the modulation of immune reaction and neuroendocrine signalling [5].
Intrinsic primary afferent neurons (IPANs) are key components of the ENS that modulate enteric activity and assure adequate bowel function. IPANs include Dogiel type I and type II morphology neurons, cholinergic myenteric neurons, nitrergic myenteric neurons, and intestinofugal neurons. Dogiel type I morphology neurons were described as responsive to mechanical stimuli, but with rare ramifications to the mucosa. Dogiel type II morphology neurons have multipolar processes that ramify extensively in the mucosa and circumferentially, and they have both mechanosensory and chemosensory properties. Cholinergic and nitrergic myenteric neurons were proven to be rapidly adapting excitatory neurons. Intestinofugal neurons connect with sympathetic neurons from the prevertebral ganglia and are stimulated by mechanical compression; they have been implicated in increasing the sympathetic inhibition of gut motility [19].
The Functions of ENS
The myenteric plexus is responsible for intestinal muscle movements related to gut content propulsion, whereas the submucosal plexus coordinates the secretion and absorption of biomolecules. ENS modulates multifold functions of the gastrointestinal tract to maintain intestinal homeostasis, such as intestinal permeability, immune function, gastrointestinal motility, or the maintenance of mucosal integrity [5].
The ENS, or intrinsic nervous system, represents an important mechanism in the communication between the gut and the brain, along with neurotransmitters such as acetylcholine, serotonin, and dopamine produced by the gut microbiota and the enteroendocrine cells of the gut [19,20]. The ENS performs key regulatory functions to maintain intestinal homeostasis, ranging from motor functions, enteric transport and secretion, and local blood flow regulation to immune and endocrine responses [21]. The enteric neural circuits that form the ENS are located in two types of ganglia: Auerbach's (myenteric) plexus, responsible for motor control of the circular and longitudinal muscle layers, and Meissner's (submucosal) plexus, consisting of enteric neurons that innervate the epithelium and the smooth muscle layer of the muscularis mucosae [21,22].
The propulsion of content along the gut requires ENS activity, whereas phasic contractions are not dependent on enteric neurons. ICCs, the gut pacemaker cells, are involved in the generation of electrical oscillations in the smooth muscle cells from the intestinal wall that cause phasic contractions [19]. In a genetic study based on loss of function of ICCs, the authors showed that ICCs integrate excitatory and inhibitory signals from enteric neurons, with slow-wave activity of the smooth muscle cells, indicating that ICCs have an important role in the coordination and control of gut motility beyond their role of pacemaker cells [20].
The myenteric neurons connect with SIP syncytium, an essential key factor for bowel motility. The SIP syncytium is a multicellular syncytium formed of smooth muscle cells (SMCs), ICCs, and platelet-derived growth factor receptor (PDGFR) α cells connected through gap junctions. The SIP name is derived from the first letter of each component. Bowel motility is based on motility patterns, like peristalsis, segmentation, and migrating motor complex in the small intestine and high amplitude propagating contractions (HAPCs) in the colon. Activation of the SIP syncytium through diverse neural signalling generates and maintains these motor patterns [21]. ENS motility patterns include chemotransduction that relies on endogenous 5-HT and chemoreceptor activation of Dogiel type II morphology neurons, and neurotransmitters mediation, which involve acetylcholine action over nicotinic receptors, glutamate signalling, ATP/P2X pathway and vasoactive intestinal peptide signalling [19].
Spencer et al. showed that rhythmic electrical depolarizations in smooth muscle cells are based on rhythmic firing in ENS that are responsible for colonic migrating motor complexes (CMMCs). Temporally synchronized neurons, both excitatory and inhibitory, were activated at the onset of neurogenic contractions. ENS was activated before smooth muscle cell depolarization, with a rhythmic and temporally synchronized firing pattern at a frequency around 2 Hz, indicating the discovery of a unique motility pattern outside the CNS [22].
The enteric circuit relies on inputs represented by the intrinsic sensory pathways and extrinsic mediation. Intrinsic sensory pathways include the detection of chemical and mechanical stimuli and the feedback generated by the contracting muscles. Although intrinsic sensory neurons form an extensive network, the nerve endings are not in direct contact with gut content. Enteroendocrine cells (EECs) found in the epithelium lining are stimulated by the nutrient composition of the luminal content, and secrete many peptides and hormones involved in paracrine or endocrine mechanisms and in communication with ENS. The communication between EECs and ENS includes the release of 5-HT from EECs and the activation of 5-HT3 receptors from the intrinsic sensory nerve endings [23]. Sympathetic and vagal afferent nerves are responsible for extrinsic mediation based on cholinergic and adrenergic signalling, defining the complex relationship between CNS and ENS [23].
ENS-mediated vasodilation relies on the stimulation of IPANs by 5-HT released upon small distortions of the mucosa, and on the activation of mechanosensitive enteric neurons in response to mechanical deformations of the gut [24]. The common pathway is the activation of muscarinic M3 receptors on endothelial cells by acetylcholine, which leads to NO secretion and vasodilation. Other mediators involved in ENS-mediated vasodilation include substance P, VIP, and histamine [21].
Fluid and electrolytes secretion into the lumen also relies on IPANs activation by 5-HT and further release of acetylcholine or VIP. These mediators act via their corresponding receptors (muscarinic, respectively, VIPR1) and increase intracellular calcium and cyclic AMP, that activate chloride channels, inducing the transport of chloride and subsequently accompanying sodium and water [21].
Other functions of the ENS are the regulation of epithelial proliferation, differentiation, and repair, and the modulation of the epithelial barrier [21]. An overview of the main functions of the ENS and their influencing factors is provided in Table 1.
Table 1. Main functions of the ENS, the ENS components involved, and influencing factors.
Factors influencing ENS activity: diet composition, vitamin D supplementation, antibiotics, CNS disorders [25][26][27][28][29]
Secretion and absorption of nutrients: submucosal plexus, IPANs [19]
Integration of intrinsic and extrinsic mediation: intrinsic sensory neurons (modulated by molecules secreted by the EECs), sympathetic and vagal afferent nerves [23]
Vasodilation of submucosal arterioles and consecutive hyperemia: IPANs, mechanosensitive enteric neurons [24]
Epithelial proliferation, differentiation, and repair: enteric neurons [21]
Modulation of the epithelial barrier: enteric neurons, enteric glial cells [21]
Intestinal immune modulation: cholinergic anti-inflammatory pathway (CAIP) [21,30]
Metabolism mediation: enteric neurons via enterosynes [29]
Factors That Influence ENS Activity
Extrinsic and intrinsic factors may influence gene expression and ENS development and activity. ENS development may be affected by maternal nutrition and deficiency in vitamin A or folate or foetal exposure to neuromodulatory drugs (antidepressants, antipsychotics, anti-epileptics, anti-cholinergics) that change neurons activity and postnatal subtype and function [4,26].
After birth, the composition of the diet may directly impact ENS activity or may change gut microbiota composition and secondarily affect the ENS. The nutrients proposed to be involved in ENS mediation include the SCFA butyrate, which can alter neuron activity and gene expression, and N-3 polyunsaturated fatty acids, which can affect enteric neuron subtype ratios. Breast milk, enriched in nutrients and bioactive molecules, including neurotrophic factors and cytokines, has been involved in the modulation of enteric neurons and glial cells, particularly during the first postnatal stages [4]. Nezami et al. showed that mice fed a diet with 60% of calories from fat had delayed intestinal transit, associated with cytoplasmic lipid accumulation and apoptosis in myenteric neurons, mitochondrial dysfunction, and endoplasmic reticulum stress [25]. A low-palmitate high-fat diet did not cause similar changes, suggesting that palmitic acid was responsible for the enteric neuronal cell dysfunction. They also showed that the effects were mediated by miR375 overexpression in association with reduced levels of 3-phosphoinositide-dependent protein kinase-1 (Pdk1), both regulators of cell survival, differentiation, and apoptosis [25].
Treatment with antibiotics can also influence ENS activity. The administration of vancomycin in the first post-natal 10 days in mice changed the level of activity of enteric neurons in young adult life depending on the sex and decreased the level of mucosal 5-HT on the long term. In male mice, vancomycin caused a reduction of neurons and glial cells in the myenteric plexus and altered the subtype proportions of neurons. The excitability of myenteric neurons was also decreased in male mice after vancomycin treatment. In female mice, there were no alterations regarding myenteric neurons, but the submucosal neurons were more responsive to stimuli. This study suggests that neonatal exposure to antibiotics can change the function of ENS later in life, leading to gastrointestinal motility disorders, depending on the sex [27].
CNS disorders can also have an impact on ENS function. In a mouse model of permanent middle cerebral artery occlusion (pMCAO), the authors concluded that in focal ischemic stroke, enteric neuronal apoptosis is caused by a mechanism involving galectin-3 release, triggered both centrally and peripherally, and further activation of the toll-like receptor 4 (TLR-4) and transforming growth factor-β-activated kinase 1 (TAK1)/AMP-activated kinase (AMPK) pathway, suggesting a neuroinflammatory reaction transmitted from the CNS to the ENS [28].
The Interplay between ENS and Gut Microbiota
The intestinal microbiota represents the main site of the body where the largest number of microorganisms live (bacteria, viruses, fungi, etc.) [6]. There are five main bacterial phyla in healthy adults: Firmicutes, Bacteroidetes, Actinobacteria, Proteobacteria, and Verrucomicrobia [6,7]. The intestinal microbiota plays a decisive role in the homeostasis of the body and the maintenance of human health [31]. Depletion of compositional phyla of the gut microbiota has been evidenced in a large spectrum of neurological diseases, i.e., stroke [8], neurodegenerative disorders, autism spectrum disorder [9], multiple sclerosis [10], Parkinson's disease, gliomas [11], cognitive disorders [12,13], and nervous system tumors [11]. Evolving experimental studies have explored the relation between antibiotic-induced dysbiosis, changes in the microbiota, and gut-brain axis signaling [14,32]. In mouse models, treatment with high doses of antibiotics (AB) alters the gut microbiota, with enrichment in Enterobacteria and decreases in Bacteroides and Enterococci populations [32]. Regarding enteric neuron circuits within the intestinal layers, long-term AB usage in mice increases myeloperoxidase (MPO) and substance P (SP) immunoreactivity in the area of the submucous and myenteric plexuses, resulting in visceral hypersensitivity to colorectal distension [32]. This further suggests the involvement of the AB-induced decrease in gut bacterial species in the disruption of ENS homeostasis. Targeting gut microbiota disruption with pro/prebiotics could restore the gut microflora and therefore reestablish ENS homeostasis in AB-treated mice. Compositional microflora changes in AB-treated mice were represented by enrichment in Bacteroides spp., Clostridium coccoides, and Lactobacillus spp. and underrepresentation of Bifidobacterium spp. [32]. After AB deprivation of beneficial species of the gut microbiota, mice treated with Lactobacillus paracasei showed reduced visceral hypersensitivity and SP immunostaining compared with controls [14]. Local epithelial and immune cells express Toll-like receptors (TLRs), which recognize bacterial ligands, called microbial-associated molecular patterns (MAMPs), and bacterial-derived metabolites, inducing specific local and systemic immune responses [33,34]. Recognition of MAMPs by TLRs represents the basis of the interaction of the gut microbiota with the ENS.
Crosstalk between Enteric Neurons and Gut Microbiota
Gut dysbiosis causes alterations in host-microbial interactions in patients with irritable bowel syndrome (IBS), leading to a dysregulated local immune response, which might contribute to intestinal sensorial and secretomotor disruption in these patients [14]. Neuronally dependent intestinal changes have been shown in animal studies with disrupted gut microbiota, affecting colonic motility, pain-related visceral sensitivity, and secretomotor function [14]. Multiple immunological and neuronal mechanisms of the ENS might explain the intestinal changes during gut dysbiosis.
Evolving experimental data evidenced the role of the TLR-signalling pathway in regulating neuronal survival and neurogenesis of neural plexuses of intestines [35][36][37][38]. Immunostaining studies showed differential expression of TLR on nitrergic, cholinergic and calretinin neuron populations of myenteric and submucosal neurons [15,35,36]. In mice with TLR2 deficiency and gut dysbiosis, ENS exhibited changes in architecture and neuromodulator profile, with alteration of secreto-motor function, and reduced glial cell line-derived neurotrophic factor (GDNF) levels in smooth muscle cells. These changes were reversed by administrating TLR2 agonist [37].
Under normal conditions, TLRs are constitutively expressed on the surface of microglia and glial cells, with activation of the TLR pathway in disease conditions [39,40], whereas changes in TLR expression in neuronal cells have been observed under pathological conditions [38]. However, in C57BL/6N specific pathogen-free (SPF) mice, the expression of TLR3, TLR4, and TLR7 has been noted in nerve fibers and ganglia of the myenteric and submucous plexuses of the small and large intestine [38]. Moreover, dorsal root ganglia of the thoracic and lumbosacral areas in SPF mice exhibited differential expression of TLR3, TLR4, and TLR7, suggesting a role of TLR signaling in ENS regulation [38].
In mouse studies, AB treatment affects intestinal integrity, leading to IBS-related mechanisms such as disrupted gastrointestinal motility, hypersensitivity of visceral pain-related responses, and enhanced secretomotor function. Using chemical analysis methods in animal models with impaired gut microbiota, intestinal samples revealed immune changes, including an increase of secretory IgA and upregulation of the lectin RegIIIγ, TLR-4 and TLR-7, cannabinoid receptors 1 and 2, mu-opioid receptors, and nerve growth factor [14].
Vicentini et al. showed a loss of nitrergic, cholinergic, and calretinin enteric neurons in the myenteric and submucosal plexuses of AB-treated mice, along with a decrease in neuronal fiber density [15]. Notably, enteric neurogenesis, suggested by the upregulation of Sox2 in neurons, accompanied the AB-induced dysregulation of the gut microbiota in mice, suggesting a compensatory response of the ENS to gut microbial depletion after AB treatment [15]. Moreover, the recovery of the gut microbiota was associated with restoration of intestinal epithelial barrier function and an increase in enteric glia and neurons through stimulation of enteric neurogenesis [15].
In mice treated with AB, dysregulated gut microbiota triggered enteric neuron changes within submucosal and myenteric plexuses. The enteric neuron changes exhibited a loss of neuronal population in nitrergic and cholinergic subpopulation, with a decrease of neuronal fiber density [15].
These findings outline that disruption in gut microbiota species triggers complex immune responses, which induces neurosensorial and neuroimmune modulators responsible for the loss of enteric neuron structure and chemical signals of ENS neurotransmitters [15].
SCFA serve as the primary energy source for colonocytes (especially butyrate) and contribute to the maintenance of intestinal barrier integrity through mucus production and increased expression of tight junction proteins (occludins, claudins, and junctional adhesion molecules) [43]. The highest concentration of SCFA was determined in the colon and the ratio of C2, C3, and C4 was 60:20:20, but lower concentrations of SCFA were identified in the liver and blood. In this regard, it can be mentioned that the main role of SCFAs begins in the intestine [41].
The link between SCFAs and the ENS is based on the interaction of SCFAs with their receptors, which are highly expressed in the ENS [44]. The first receptors identified as SCFA receptors were FFAR2 (GPR43) and FFAR3 (GPR41). The activation of these receptors influences several physiological processes (inflammatory, immune, hormonal secretion, etc.) contributing to maintaining the body's homeostasis [45]. Another receptor responsible for SCFA effects is GPR109a. The interaction between this receptor and SCFA (C4) is important for intestinal homeostasis [46].
Through its primary mediators, SCFAs including acetate, propionate, and butyrate, the gut microbiota exerts immunomodulatory and anti-inflammatory functions and interacts in a bidirectional manner with numerous organ systems, forming the so called specific gut-microbiota-related axis, between microbiota and other organs [47,48]. Recent studies described so far, the Gut-Liver axis [49], Gut-Brain axis [50], Gut-Liver-Brain axis [51].
The role of microbiota-derived metabolites, such as lipopolysaccharides (LPS) and SC-FAs in neuron survival, and neurogenesis has been evidenced in experimental AB-induced gut dysbiosis [15]. LPS supplementation during treatment with AB-attenuated nitrergic neuron loss in submucous and myenteric plexuses in mice [15]. SCFAs supplementation in AB-induced microbial depletion in mice dampens neuronal deficit in ileal and colonic myenteric plexus, by regulating S100B+ expression of neurons and enteric glial cells [15]. However, LPS and SCFAs did not succeed to restore impaired motor function and disrupted permeability of colon after AB treatment, suggesting long-term structural and microbial changes within intestine during AB treatment.
5-hydroxytryptamine (5-HT) represents a key factor that regulates secreto-motor function of the GI tract by acting on specific receptors localized on enterocytes, enteric neurons, and immune cells [52,53].
According to multiple studies, disruption in intestinal motility has been correlated with reduced 5-HT values within the gut [52,53].
SCFAs could regulate intestinal dysmotility by increasing tryptophan hydroxylase 1 expression in mice colonized with human gut microbiota, compared to SPF mice [53]. SCFAs modulate colonic serotonin production by acting on enterochromaffin cells [53]. Research data suggested that SCFAs might promote neurogenesis in gut dysbiosis by activating enteric serotonin networks via 5-HT4 receptors [53][54][55]. The amount of SCFA lowers the pH in the colon. Consequently, the microbiota at this level may be affected and subsequently SCFA production. These changes can influence the aggregation of proteins that then play an important role in blood sugar regulation. One of these proteins is IAPP secreted by pancreatic β cells [56,57].
An overview of the main mechanisms shared between gut microbiota and ENS is provided in Figure 2.
The Link between SCFA and Diabetes Mellitus
It is estimated that 1 in 10 adults has diabetes mellitus (DM), and the estimated number of people living with diabetes in 2021 was around 537 million [58]. A healthy diet, with increased consumption of prebiotics, soluble fiber (such as pectin, inulin, and hemicellulose, contained in fruits: apples, apricots, oranges, peaches, figs, and pears; vegetables: peas, turnips, sweet potatoes, and brussels sprouts; legumes: lima, navy, pinto, kidney, and black beans; and in bran cereals, oatmeal, and barley) and resistant starch (from whole grains including oats and barley, brown rice, green bananas, white beans, and lentils), added to the other components of a healthy lifestyle, is well known to have a beneficial effect in all types of DM, being the basis of clinical management [59].
The link between gut microbiome-SCFAs-complex diseases as DM is confirmed by microbiome-wide association studies (as association) and also by bidirectional Mendelianrandomization study (as causation).
Meta-analyses of gut microbiome studies showed that obesity and other obesity-related diseases (such as T2DM) are associated with a loss of bacteria that produce C4 [60]. The causal relationship between the gut microbiome and glucose control was demonstrated by Sanna et al. [61], using data collected from 952 normoglycemic individuals (LifeLines-DEEP cohort), including genome-wide genotyping, gut metagenomic sequencing, and fecal SCFA levels, combined with seventeen metabolic and anthropometric traits, and then performing Mendelian randomization of the effects of three genetic predictors in the discovery dataset (DIAGRAM: 26,676 T2DM cases and 132,532 controls) and in the replication cohort (UK Biobank: 19,119 T2DM cases and 423,698 controls). They found that genetic predisposition to an increased fecal concentration of propionate (C3) (meaning altered intestinal absorption or altered production?) is causally related to an increased risk of T2DM. On the other hand, increased (genetically driven) production of intestinal butyrate (C4) was associated with a better insulin response following an oral glucose tolerance test.
Recent data showed that SCFAs could also play an important role in enhancing gut barrier function, preventing inflammatory clinical conditions associated with invading bacteria from the intestine. SCFAs deficiency may affect the gut-neuro-immune crosstalk, with important contributions to early stages of impaired autoimmunity response in type 1 diabetes mellitus (T1DM) [62].
The link between human microbiome-SCFAs (as potential therapeutic agents)-gestational diabetes mellitus (GDM) is also investigated. A review that included 128 studies concluded that gut microbiota, via SCFAs, may have a role in the regulation of blood pressure, glycemia, coagulability, and lipid profile during pregnancy, and may also impact on newborn health via vertical transfer of microbiota and (or) its metabolites. A major limitation of this review is that most of the observations were from animal models and the translation of the results to humans is difficult to relate [17].
A recently published study that enrolled 60 pregnant women (40 controls and 20 GDM cases) showed that the levels of the three dominant SCFAs (C2, C3, and C4) and total SCFAs were decreased in GDM compared to controls [63]. The same study also observed that in pregnancies complicated by GDM, the placental content of specific SCFA receptors was decreased, in association with increased inflammatory responses. The study participants were further divided into four subgroups: first, normal pregnancies; second, GDM with isolated basal hyperglycemia; third, GDM with normal fasting plasma glucose but impaired 1-h and/or 2-h glycemia after a glucose load; fourth, GDM with elevated fasting plasma glucose and/or 1-h and/or 2-h glycemia after a glucose load. C3 levels were significantly decreased in the third group. The circulating levels of C2, C4, and total SCFAs were significantly reduced in the fourth group. After stratification according to BMI, only C2 was significantly lower in the overweight/obese GDM cases from the fourth group compared to the normal-weight control group [64].
Gut homeostasis represents a state of microbial balance maintained by local and systemic host immune functions and influenced by intestinal environmental factors. Complex interactions among multiple cell types concur to maintain gut homeostasis, including innate immune cells (dendritic cells, macrophages, and innate lymphoid cells), adaptive immune cells (T cells, B cells, and plasma cells), and intestinal epithelial cells.
SCFAs produced by fermentation in the intestinal lumen play an important role in gut homeostasis, but they are also absorbed and reach blood circulation, with effect on glucose storage in the liver, muscle, and fat tissues, and direct effect on beta-cells of the pancreas. Being small molecules, they also reach the brain, with beneficial effect on appetite control and decrease food consumption, with consecutive weight loss and decrease of insulin resistance, having the potential to improve glycemic control of patients with diabetes, especially T2DM. There is evidence that SCFAs can have multiple beneficial effects in DM by stimulating glucagonlike peptide-1 (GLP-1) and insulin secretion, anti-inflammatory properties, anti-obesogenic effect, and improvement of insulin-sensitivity [18].
Recent findings indicate that SCFAs can also play a role in modulating immune response, with a potential beneficial effect for patients with T1DM. There are also few data regarding the role of SCFAs in gestational diabetes.
Human cells express specific receptors for SCFAs, such as free fatty acid receptor 2 (FFAR2, GPR43), which shows the highest selectivity for C2 (but can also be activated by other SCFAs), and free fatty acid receptor 3 (FFAR3, GPR41), which can also be activated by SCFAs (pentanoate being the most potent agonist). Regarding the ligand preference of C2, C3, and C4, data showed for FFAR2 that C2 = C3 > C4 and for FFAR3 that C3 > C4 > C2 [64]. Adipose tissue had the highest expression of FFAR3 (on the endothelial cells of the blood vessels), but FFAR3 was also present in immune cells and endothelial cells in other tissues, while FFAR2 was most abundant in immune cells [65].
One potential SCFA as therapeutic agent and with evidence for beneficial results on human health is acetate (C2). However, there are different effects for oral administration, colonic or parenteral C2 infusion, acetogenic probiotic administration, or increasing acetogenic fiber consumption [66].
Early suggestions about the potential therapeutic effects of C2 come from folk medicine, where vinegar (which contains mainly C2) is used for its antihyperglycemic activity. In 2004, Johnston CS et al. published in Diabetes Care a study entitled "Vinegar Improves Insulin Sensitivity to a High-Carbohydrate Meal in Subjects With Insulin Resistance or Type 2 Diabetes" [67]. In this study, eight non-diabetic, insulin-sensitive control subjects, 11 non-diabetic insulin-resistant subjects, and 12 subjects with T2DM were enrolled. Fasting administration of a solution (containing 20 g of apple cider vinegar, 40 g of water and 1 tsp saccharine) compared to placebo, 2 min before the ingestion of a test meal (containing 87 g of carbohydrates), raised whole-body insulin sensitivity in the first hour of postprandial condition by 34% in insulin-resistant subjects and by 19% in T2DM subjects. The potential mechanism may be explained by the suppressing effect of C2 on disaccharidase activity [68] and by the rising effect on glucose-6-phosphate concentrations in skeletal muscle [69].
In another study, published by Halima et al., 46 T2DM patients were randomly assigned to receive either 15 mL of apple cider vinegar before the middle meal (active group, n = 24) or water (placebo group, n = 20) for 30 days. At the end of the study, significant reductions of the fasting plasma glucose, weight and body mass index (BMI), triglycerides and VLDL-cholesterol particles were observed in the active group, with no significant change of the parameters in the placebo group [70].
Another study, with the same design (15 mL of vinegar before the middle meal versus placebo, for one month) with 30 T2DM subjects in the active group versus 30 T2DM subjects in the placebo group, confirmed a beneficial effect of apple cider vinegar in the reduction of FPG and HbA1c, with no change in the placebo group. After one month of intervention, mean FPG levels decreased from 174.67 ± 63.52 mg/dL to 156.23 ± 60.04 and HbA1c from 7.56 ± 3.01% to 7.03 ± 2.97% [71].
In a group of 55 T2DM patients with poor glycemic control (mean HbA1c at the beginning of the trial was 9.32 ± 1.74%) the consumption of 15 mL of apple cider vinegar in 200 mL of water during dinner for three months significantly improved the glycemic parameters: FPG decreased from 170.14 ± 62.42 mg/dL to 157.32 ± 58.16 mg/dL and HbA1c at the end of the trial was 8.65 ± 1.81% [72].
However, as a conclusion about the effect of apple cider vinegar on glucose parameters in T2DM, a recent published systematic review and meta-analysis including nine randomized, placebo-controlled clinical trials published until January 2020, showed that the reduction of FPG was non-significant. After stratification based on the duration of intervention, the use of apple cider vinegar has a lowering effect on FPG in studies that lasted more than 8 weeks, but no dose-dependent effects were observed. The lowering effect of apple cider vinegar on HbA1c was non-significant in all groups [73].
A study comparing the acute effects of intravenous or intrarectal administration of C2 in six females with hyperinsulinemia showed that rectal administration was followed by increased peptide YY and GLP-1 release compared to intravenous administration or placebo. In this study, C2 administration increased plasma levels of peptide YY, GLP-1 and decreased the levels of TNF-alpha compared to placebo, with no change on plasma adiponectin [74].
Acute oral (but not intravenous) C4 administration in mice decreased caloric intake. Chronic C4 administration also prevented diet-induced obesity and activates brown adipose tissue (effects that are lost after subdiaphragmatic vagotomy), so C4 may play a role in preventing/treating obesity and obesity-related diseases, such as T2DM, in humans [75]. The role of C4 supplementation in modulating inflammation was studied in 37 subjects (18 healthy controls and 19 T2DM patients). There was significant difference between controls and T2DM patients regarding TNF-alpha and IL-10 circulating levels. C4 appears to suppress TNF-alpha production and increase IL10 levels. It also decreases the migration of monocytes in T2DM, with potential beneficial effects in the modulation of inflammation in this clinical condition [76]. In a 90-days controlled, open-label study that enrolled 16 T2DM patients randomly allocated 1:1 to diet alone or diet plus fecal microbiota transplantation, both methods showed improvement in glycemic (measured as HbA1c) and blood pressure control and weight loss. It was observed a change in the structure of microbiota that can suggest increase production of C4, but blood or colonic levels of SCFAs were not directly measured [77].
Both FFAR2 and FFAR3 are expressed in enteroendocrine cells and in vitro studies showed that C2 can increase, in a dose dependent manner (3.0 to 30 mM), the expression of proglucagon (a GLP-1 precursor) [78]. SCFAs can increase the GLP-1 release from the intestinal L-cells and the mechanism of action seems to be mediated through specific receptors-FFAR2 and FFAR3, followed by a rise of intracellular calcium, as previously demonstrated on mice colonic cultures. The release of GLP-1 is higher after incubation for 2 h with 1 mmol/L C3 or C2, with the smallest effect after incubation with 1 mmol/L of C4 [79].
The administration of 24 g of inulin increased the SCFA areas under the curve (AUC) in the interval 4-6 h post administration, compared with glucose. However, the increase in SCFAs did not affect the plasma concentrations of GLP-1 or peptide YY, although a decrease in ghrelin concentration at 6 h after inulin administration was observed [80].
Another study included 60 patients with T2DM that were randomly allocated to four groups: the first group received C4 (as sodium butyrate capsules), the second group received inulin (supplemented as powder), the third group received both C4 and inulin, and the fourth group consumed a placebo for 45 consecutive days. The fasting plasma glucose decreased significantly in the third group and GLP-1 levels were higher in the first and third group compared to the placebo group (suggesting that C4 supplementation can have favorable effect on GLP-1 secretion) [81,82].
The effect of a colonic infusion of SCFAs mixtures (200 mmol/L) high in C2, C3, or C4 versus placebo was studied in a randomized, double-blind, crossover study with the participation of 12 normoglycemic men with overweight or obesity. All SCFAs infusions increased fasting fat oxidation, increased fasting and postprandial plasmatic levels of peptide YY, and decreased fasting free glycerol concentration. Colonic administration of C2 and C4 also significantly increased resting energy expenditure compared to placebo [40]. All these effects have the potential to improve insulin resistance and promote weight loss in T2DM patients with obesity, but to this day, we do not have long-term human studies.
FFAR2 and FFAR3 are also present in beta-cells, with a potential direct effect of SCFAs on insulin secretion [83]. An in vitro study performed on isolated islets of Langerhans from non-diabetic human donors showed that the insulin-secretion-stimulating effect of SCFAs (C2 and C3) is mediated through FFAR2. The same study reported that, through the same receptor, C2 and C3 have a protective effect against islet apoptosis [84]. However, there are also conflicting results regarding the effect of SCFAs on beta-cells. The expression of FFAR2 and the plasma concentration of C2 are increased by a high-fat diet. FFAR2-/- mice fed a high-fat diet displayed reduced beta-cell mass, and in vitro treatment of isolated human islets with a specific FFAR2 agonist increased insulin secretion, making FFAR2 a potential therapeutic target for T2DM [85]. On the other hand, double deletion of FFAR2 and FFAR3 (whole body or at the pancreatic level) leads to greater insulin secretion and improved glucose tolerance in obese, T2DM mice fed a high-fat diet compared to controls, but with no effect on glucose control if the receptors were deleted in intestinal cells. The authors concluded that under diabetic conditions, the effect of C2 mediated through FFAR2 and FFAR3 is to inhibit insulin secretion stimulated by hyperglycemia [86].
Due to the presence of FFAR2 and FFAR3 in immune cells, agonists such as C2, C3, and C4 can reach plasma levels sufficient to stimulate these receptors and play a role in the immunogenic response. C2 is rapidly extracted from plasma by tissues and rarely reaches very high levels; therefore, it probably has little pathological effect on the activity of immune cells. However, C3 can accumulate in the blood, especially in extreme pathological conditions such as propionic acidemia, with severe impairment of immune function [66]. An interesting experiment conducted by Marino E et al. [87] showed that in mice of the non-obese diabetic strain (NOD mice), a high-amylose maize starch diet enriched with C2, C4, or both decreased the occurrence of T1DM compared with a plain high-amylose maize starch diet, a normal diet, or a high-amylose maize starch diet enriched with C3. The beneficial effects of the enriched diets were to modulate T-cell-mediated autoreactivity, enhance gut integrity, and decrease the plasma concentration of IL-21 (a known diabetogenic cytokine). A recently published study investigating a high-amylose maize-resistant starch diet enriched with C2 and C4, administered for 6 weeks in patients with long-standing T1DM, showed improved immune function up to 12 weeks, together with increased stool and plasma concentrations of SCFAs. There was no change in HbA1c, glycemic parameters recorded on continuous glucose monitoring systems, insulin doses, food intake, or weight from baseline to week 6 or 12 [88].
In GDM, gut dysbiosis with decreased SCFA-producing bacteria has been observed, but the direction of the alterations is inconsistent for some bacteria. For example, although some bacteria are known to have a beneficial effect through the production of SCFAs, they may also produce other proinflammatory molecules with a negative impact on glycemic metabolism (such as Bacteroidetes, which also produce gram-negative lipopolysaccharide) [89].
Effect of Clinical Management of Diabetes Mellitus on SCFAs Production
The first step in the clinical management of DM is lifestyle optimization: healthy diet, daily physical activity, non-smoking status, no or moderate alcohol consumption, good quality and duration of sleep, and stress management. Dietary recommendations focus on providing the right amount of macro- and micronutrients to fulfill daily needs, but also on avoiding postprandial hyperglycemia and decreasing cardiovascular risk. In recent years, the role of DFs has been studied, and their effects on physiology and pathophysiology seem to be mediated by SCFAs produced by bacterial fermentation [90].
Pharmacotherapy is prescribed according to the type of DM. Insulin therapy is indicated for survival in T1DM and in some cases of specific types of diabetes (e.g., pancreatectomy for pancreatic tumors), and for better glycemic control in T2DM, GDM, and other specific types of diabetes. In the pharmacotherapy of T2DM, besides insulin, many other drug classes have received approval for use: biguanides, sulphonylureas, glinides, thiazolidindiones, alpha-glucosidase inhibitors, dipeptidyl peptidase-4 (DPP-4) inhibitors, glucagon-like peptide 1 (GLP-1) analogs/GLP-1 receptor agonists, sodium-glucose-linked transporter 2 (SGLT-2) inhibitors [91], and the recently approved dual gastric inhibitory polypeptide (GIP)/GLP-1 receptor agonists [92].
Metformin is commonly used in the pharmacotherapy of T2DM patients. Its beneficial effects can be explained by three major mechanisms of action: first, it inhibits gluconeogenesis and glycogenolysis in the liver; second, it improves insulin sensitivity and glucose uptake in muscles and peripheral tissues; and third, it delays the intestinal absorption of glucose [93]. In more than 10% of users, at the beginning of treatment, metformin can cause adverse gastrointestinal effects such as diarrhea, nausea, vomiting, abdominal pain, and loss of appetite [93]. Part of these gastrointestinal side effects can be prevented by a gradual increase in dose, and there is some evidence that the use of a gastrointestinal microbiome modulator (composed of inulin from agave, beta-glucan from oats, and polyphenols from blueberry pomace) can have beneficial effects on tolerability [94]. Metformin can affect SCFA production through its interaction with the host microbiota. In a comparison between nine non-users of metformin and fifteen users (all T2DM patients), SCFA levels in metformin users were significantly higher, especially for C3 [95].
The impact of dapagliflozin (an SGLT-2 inhibitor) or gliclazide (a sulphonylurea) as an add-on to metformin on the microbiota of 44 T2DM patients was studied over 12 weeks of therapy. The results showed no changes in the fecal microbiome, and the conclusion was that the beneficial glycemic effects of these two drugs are not mediated through the microbiota [96].
Although there is some evidence from animal studies that thiazolidindiones can have a mild effect on gut microbiota, there are no available data from human studies regarding the effect of thiazolidindiones on the human microbiome and SCFA production [97].
Studies on animal models showed that alpha-glucosidase inhibitors have the potential to modify the gut microbiota in a reversible and diet-dependent manner, with increased SCFA production (especially C4) [98]. The role of alpha-glucosidase inhibitors in modulating the human microbiota is under evaluation. Beyond their beneficial therapeutic effect in T2DM through inhibition of human alpha-glucosidase and reduced absorption of dietary carbohydrates, these drugs can also inhibit bacterial alpha-glucosidases, affecting the bacteria's ability to metabolize carbohydrates and potentially causing fluctuations in the structure of the human microbiota [99]. On the other hand, it was recently reported that human gut bacteria encode resistance to acarbose [100].
GLP-1 is an incretin hormone secreted by intestinal L-cells, with beneficial effects in DM through several mechanisms of action, such as lowering plasma glucose, increasing glucose-dependent insulin secretion, and preserving beta-cell mass. GLP-1 analogs are used to treat T2DM, with proven benefits for glycemic control, weight loss, and cardiovascular protection [91]. Although there are data on the relation between SCFA production and increased endogenous secretion of GLP-1 (discussed in a previous section), there is no clear evidence that pharmacotherapy with GLP-1 analogs in humans with T2DM modifies SCFA production. A change in the microbiome of mice with diet-induced obesity treated with the GLP-1 receptor agonist liraglutide (0.2 mg/kg, BID) and a dual GLP-1/GLP-2 receptor agonist (GUB09-145, 0.04 mg/kg, BID) was observed, with the potential to influence SCFA production [101]. A recently published study enrolled 51 T2DM patients treated with metformin and/or sulphonylureas. They were randomized to receive either 1.8 mg s.c. of liraglutide (a GLP-1 analog), sitagliptin (a DPP-4 inhibitor), or placebo, daily, for 12 weeks. At the end of the study, no change in gut microbiota was detected [101].
Conclusions
The role of SCFAs in DM remains to be elucidated, because there are conflicting results regarding the effects of different SCFAs (C2, C3, or C4), with current data suggesting potential effects via metabolic paths other than specific receptors, and with different responses under pathological versus physiological conditions. It is not well established how to supplement SCFAs: via modulation of gut SCFA-producers through special dietary interventions, administration of prebiotics, or direct administration of specific SCFAs. Additionally, it is not yet clear which route of administration and what doses should be recommended in clinical practice. The potential role of SCFAs in T1DM is based on evidence suggesting a beneficial immunomodulatory effect for C2 and C4, and a neutral (or even negative) effect of C3. In GDM, a modification of circulating SCFA levels has also been observed, but it remains to be established which SCFAs, which route of administration, and what concentrations can have a therapeutic effect in humans.
"year": 2022,
"sha1": "a8cb4bd32db7930cde2c6f73716a1cd1a4b80ac3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/11/1/72/pdf?version=1672152383",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a234704fbfc3f6c037f564961330ef5461fd523d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
AMAZONE: prevention of persistent pain after breast cancer treatment by online cognitive behavioral therapy—study protocol of a randomized controlled multicenter trial
Background Surviving breast cancer does not necessarily mean complete recovery to a premorbid state of health. Among the multiple psychological and somatic symptoms that reduce the quality of life of breast cancer survivors, persistent pain after breast cancer treatment (PPBCT), with a prevalence of 15–65%, is probably the most disabling. Once chronic, PPBCT is difficult to treat and requires an individualized multidisciplinary approach. In the past decades, several somatic and psychological risk factors for PPBCT have been identified. Studies aiming to prevent PPBCT by reducing perioperative pain intensity have not yet shown a significant reduction of PPBCT prevalence. Only a few studies have been performed to modify psychological distress around breast cancer surgery. The AMAZONE study aims to investigate the effect of online cognitive behavioral therapy (e-CBT) on the prevalence of PPBCT. Methods The AMAZONE study is a multicenter randomized controlled trial, with an additional control arm. Patients (n=138) scheduled for unilateral breast cancer surgery scoring high for surgical or cancer-related fears, general anxiety or pain catastrophizing are randomized to receive either five sessions of e-CBT or online education consisting of information about surgery and a healthy lifestyle (EDU). The first session is scheduled before surgery. In addition to the online sessions, patients have three online appointments with a psychotherapist. Patients with low anxiety or catastrophizing scores (n=322) receive treatment as usual (TAU, additional control arm). The primary endpoint is PPBCT prevalence 6 months after surgery. Secondary endpoints are PPBCT intensity, the intensity of acute postoperative pain during the first week after surgery, cessation of postoperative opioid use, PPBCT prevalence at 12 months, pain interference, the sensitivity of the nociceptive and non-nociceptive somatosensory system as measured by quantitative sensory testing (QST), the efficiency of endogenous pain modulation assessed by conditioned pain modulation (CPM), and quality of life, anxiety, depression, catastrophizing, and fear of recurrence until 12 months post-surgery. Discussion With perioperative e-CBT targeting preoperative anxiety and pain catastrophizing, we expect to reduce the prevalence and intensity of PPBCT. By means of QST and CPM, we aim to unravel underlying pathophysiological mechanisms. The online application facilitates accessibility and feasibility during a treatment period that is emotionally and physically burdensome for breast cancer patients. Trial registration NTR NL9132, registered December 16 2020.
Background and rationale {6a}
Persistent pain after breast cancer treatment (PPBCT) is highly prevalent [1]. In the acute phase following surgery, reconstructive surgery, radiotherapy, and chemotherapy, up to 50% of patients suffer from pain [2,3]. In around 30% (15-65%) of patients, this pain persists for 1 year or longer. Previously, it was thought that pain after mastectomy was mainly neuropathic, caused by damage to the intercostal brachial nerve (ICBN). However, pain may also originate from other sources such as lymphedema, scarring of tissues of the chest wall [4], or muscle guarding [5]. Once chronic, PPBCT and its secondary problems are difficult to treat and require a complex, individualized, multidisciplinary approach [6,7]. The standard pharmacological treatment of PPBCT consists of antidepressants [8,9] and capsaicin [10], but even when effective, intolerable side effects often limit (long-term) treatment. Interventional treatment options for PPBCT target the thoracic dorsal root ganglia, intercostal nerves [11], stellate ganglion [12], thoracic sympathetic chain [13], and fascia of the thoracic wall [14].
In the past decade, research has identified treatment- and patient-related factors increasing the risk for PPBCT. Treatment-related risk factors such as axillary lymph node dissection, radiotherapy, and intercosto-brachial nerve handling may directly increase tissue damage. Patient-related risk factors such as genetic haplotypes, young age, high BMI, pre-existing pain, high postoperative pain intensity, and psychological distress suggest a greater vulnerability of the nociceptive system that is prone to sensitization [15][16][17][18]. Quantitative sensory testing (QST) demonstrated the presence of central sensitization in patients with PPBCT [19][20][21][22]. Moreover, there is accumulating evidence that patients with persistent postsurgical pain have less efficient inhibitory modulation of afferent nociceptive signals [23][24][25][26].
Many interventions aiming at a reduction of nociceptive input and concomitant sensitization of the central nociceptive system have been studied, e.g., reducing the extent of surgery, aggressively treating perioperative pain, and minimizing the radiation dose and field [2,27]. So far, only perioperative ketamine and lidocaine have been found to significantly reduce chronic pain after various kinds of surgery [28]. Perioperative venlafaxine reduced the prevalence of PPBCT in one study [29]. The utility of paravertebral blocks during mastectomy in preventing long-term pain has been studied repeatedly, with no convincing preventive effect on PPBCT prevalence despite good acute postoperative pain relief [30][31][32][33][34][35][36].
Anxiety and pain catastrophizing have emerged as the most robust psychological predictors of persisting postoperative pain [3,15,37,38], including PPBCT [16]. Moreover, these psychological states have been shown to interfere with pain processing within the CNS [39][40][41][42]. Stress may alter the endogenous pain inhibition-facilitation balance, resulting in reduced net endogenous pain inhibition [43,44]. Less endogenous pain inhibition increases the risk for central sensitization and leads to chronification of acute pain.
In contrast to most other identified risk factors for PPBCT, psychological variables are modifiable and can therefore be a target for intervention. Cognitive behavioral therapy (CBT) is the leading psychological treatment for chronic pain. It aims to reduce maladaptive cognitions and behaviors and replace these with more adaptive ones. Reducing anxiety and catastrophizing around the time of surgery may reduce the prevalence of persistent pain [45,46]. A recent meta-analysis showed that perioperative psychological interventions significantly reduced persistent pain and disability after different types of surgery [47]. The search identified only two RCTs examining the effects on long-term pain after breast cancer surgery. Hadlandsmyth et al. found a 2-h postoperative psychological intervention to be feasible and acceptable, yet it did not significantly affect pain at three months [48]. A one-session preoperative internet-based intervention targeting pain catastrophizing before breast cancer surgery reduced the duration of postoperative opioid consumption, yet no effect on pain intensity was found [49]. It should be noted that both studies used a single-session intervention, which may not be sufficient to have a long-term impact on PPBCT. Moreover, neither study examined effects beyond the 3-month period.
The AMAZONE study is the first to examine the long-term effects of a more intensive perioperative CBT program on the development of PPBCT in breast cancer surgery patients with high levels of anxiety and/or catastrophizing. Face-to-face CBT is challenging in the context of cancer treatment because of the demands it places on patients, especially during treatment. Therefore, an online CBT (e-CBT) program was developed. e-CBT increases feasibility because it can be administered in the home environment and at a time that is convenient for patients. It has been demonstrated that (therapist-guided) e-CBT is as effective as face-to-face CBT [50]. A review supported the feasibility and acceptability of internet-based interventions for breast cancer patients [51].
In addition to studying the effectiveness of the e-CBT intervention on reducing the prevalence of PPBCT, its effect on sensory (nociceptive) signal transmission and endogenous pain suppressing pathways is examined by means of quantitative sensory testing (QST) and conditioned pain modulation (CPM) to unravel potential underlying mechanisms.
Objectives {7}
The primary objective is to investigate the effect of online cognitive behavioral therapy (e-CBT) on the prevalence of PPBCT 6 months after breast cancer surgery in patients with high levels of preoperative anxiety or pain catastrophizing. The e-CBT intervention will be compared with an educational intervention consisting of information about surgery and a healthy lifestyle.
The secondary objectives are as follows: to examine the effects of e-CBT on nociceptive sensory signal transmission and central pain-inhibiting mechanisms; to examine the effects of e-CBT on the intensity of acute postoperative pain in the week after surgery, on cessation of postoperative opioid use, and on PPBCT prevalence, intensity, and interference after 12 months; and to evaluate the impact of e-CBT on anxiety, depression, catastrophizing, fear of cancer recurrence, and quality of life until 12 months post-surgery.
Trial design {8}
This superiority trial is designed as a multi-center randomized controlled trial, with an additional non-randomized control arm. Breast cancer patients with high anxiety or catastrophizing levels will be allocated to either the e-CBT intervention or active control arm (EDU). Patients with low to normal levels of anxiety and catastrophizing will receive treatment as usual (TAU). The TAU group serves as an observational cohort and is followed parallel to the randomized trial ( Fig. 1).
Study setting {9}
Patients are recruited in several academic and specialized hospitals in the Netherlands all dedicated to the treatment of breast cancer.
Eligibility criteria {10}
The eligibility criteria are as follows: women scheduled for primary breast cancer surgery with or without primary reconstructive surgery.
Inclusion criteria all patients
- Unilateral primary breast cancer surgery
- Age ≥ 18 years old
Inclusion criteria for the RCT (e-CBT&EDU)
- Women scoring either ≥ 8 on the anxiety subscale of the Hospital Anxiety and Depression Scale (HADS-A) [52], ≥ 3 on the Surgical Fear item (i.e., "quite a bit" or "very much") [16], ≥ 5 on the Concerns about Recurrence Scale (CARS) [53], or ≥ 18 on the Pain Catastrophizing Scale (PCS) [54]

Patients are asked for participation by the local breast cancer teams, study nurses, and anesthesiologists involved in the perioperative breast cancer surgery procedure.
If patients are interested, they are offered verbal and written information, followed by an appropriate reflection time of 48 h to 1 week, depending on the time span between preoperative screening and surgery. In adaptation to the COVID-19 pandemic, the informed consent procedure is also possible by telephone, video call, mail, and email.
Additional consent provisions for collection and use of participant data and biological specimens {26b}
Separate informed consent is asked for the use of participant data for future research on breast cancer.
No biological specimens are taken.
Explanation for the choice of comparators {6b}
To provide the control group with intervention conditions comparable to the e-CBT group, an online educational (sham) intervention was composed. Women randomized to this EDU group receive an intervention based on information about surgery, communication skills, and a healthy lifestyle. Content referring to psychosocial support is avoided. The number of online sessions and appointments with a therapist is equal to that of the e-CBT intervention.
Women with low to normal levels of anxiety and catastrophizing are allocated to the non-randomized control group receiving TAU. This group serves as a reference group for the primary outcome PPBCT and for the QST/CPM measurements.
Intervention description {11a}
AMAZONE e-CBT intervention
The AMAZONE e-CBT intervention was developed together with an experienced onco-psychologist and patient representatives of the Dutch Breast Cancer Society and the MUMC+ patient panel. In addition, existing protocols for decreasing pre-operative anxiety and pain catastrophizing [48,[55][56][57] and fear of cancer recurrence (FCR) [58] were taken into account.
The intervention consists of education and skills training to better cope with pain after surgery and challenge and replace anxious and catastrophizing thoughts by more helpful cognitions. The key elements of the e-CBT are cognitive restructuring, relaxation exercises, coping with anxiety, activity-rest balance, and pleasant activity scheduling. As such, the intervention targets cognitive, emotional, behavioral, and physiological aspects of psychological distress.
The e-CBT intervention consists of five sessions, with one being delivered pre- and four post-operatively. Patients can follow the sessions at their own pace, but are recommended to follow the timeline as presented in Table 1. In addition to the five online sessions, three appointments with a therapist are scheduled. The appointments take place via video call (secured platform). The content is manualized and the duration is limited to 30 min. The purpose of the appointments is to monitor the intervention, increase motivation, and answer questions and concerns that occur during the intervention. Session 1 starts with discussing the reactions that can be expected when confronted with breast cancer and the upcoming surgery. The education part is devoted to pain, the different factors contributing to the pain experience, and the stress response. Relaxation techniques are introduced. Patients are asked to try out the different relaxation exercises (progressive muscle relaxation, breathing exercises and visualization exercise). As a recurring aspect of the intervention, patients choose one exercise and continue to practice this exercise on a daily basis.
Session 2 consists of three parts, the first being the relation between thoughts-feelings-bodily sensations-behavior. With the aid of a thoughts diary, patients learn to notice their own thoughts and reactions. Later in the intervention, this will serve to practice cognitive restructuring of the catastrophizing thoughts. The second part focuses on coping with anxiety and contains tips and exercises. The last part of the session concentrates on finding a good activity-rest balance after the surgery. Patients are encouraged to engage in pleasant or valued activities. After session 2, patients are asked to continue the relaxation exercises, write down their thoughts about cancer, pain, and other complaints in the thoughts diary, and document their pleasant and valued activities.
Session 3 starts with getting insight in one's own coping strategies. The education part focuses on cognitive restructuring. Patients receive exercises to change their unhelpful (catastrophizing) thoughts into more helpful thoughts. After session 3, patients continue the homework assignments, complemented by the cognitive restructuring part in the thoughts diary.
Session 4 is devoted to coping strategies and valued activities. The role of avoidance in the persistence of complaints is discussed. It is explained how valued activities can counterbalance difficult and stressful situations. Therefore, patients are asked to think about things that are genuinely important to them during this period. Using the acronym SMART (Specific-Meaningful-Acceptable-Realistic-Time framed), they describe activities that give them energy and help them endure the difficult treatment. Homework assignments are continued.
Session 5 is devoted to rehearsal, continued practice, and maintenance. After a recap of the first four sessions, patients are guided into the construction of their own action plan. They write down how to recognize their own alarm signals and which of the exercises and techniques from the intervention are most helpful to them.
The intervention is hosted on a specialized eHealth platform (Karify ® , Utrecht, Netherlands), which allows secured communication with a therapist.
AMAZONE active control intervention (EDU)
The active control intervention (EDU) consists of five sessions that present information that is taken from publicly available sources. It has the same outline as the e-CBT intervention: one online session pre-operatively and four post-operatively and three appointments with a therapist to monitor the intervention, increase motivation, and address possible questions concerning the intervention. The control intervention is hosted on Qualtrics (Qualtrics, Provo, Utah, USA).
Session 1 gives information about the different types of surgery and pain treatment. Patients receive tips on how to prepare for surgery and handle possible complaints the days after surgery. This comprises a series of physical exercises that are recommended after breast cancer surgery. For the patients who want to gain additional information, a list of reliable sources is presented. Session 2 is devoted to good communication about the disease and treatment.
Session 3 gives information about fatigue during cancer treatment. Different causes are discussed as well as tips on how to cope with fatigue.
Session 4 focuses on a healthy weight and food pattern. Patients can choose the topic(s) they want to read more about: unwanted weight gain, diminished appetite, changed taste, and information on food and medication.
Session 5 consists mainly of rehearsal of the main points of previous sessions.
Criteria for discontinuing or modifying allocated interventions {11b}
Subjects can leave the study at any time for any reason if they wish to do so without any consequences. The responsible physician or the investigator can decide to withdraw a subject from the study for urgent medical reasons.
Strategies to improve adherence to interventions {11c}
After enrollment and before the first online session, the randomized patients have an online appointment (via secured platform) to explain the aim of the program and address concerns. Also, after sessions two and five, patients have appointments with their therapists to discuss questions regarding the online sessions and motivate them. This appointment is also meant to create therapeutic alliance and protocol adherence.
When patients forget to fill in the online questionnaires or open the online sessions, they receive reminders via mail or SMS. Patients can also contact the research team if needed.
Relevant concomitant care permitted or prohibited during the trial {11d}
AMAZONE does not interfere with the planned oncological treatment nor prohibits any other necessary medical, psychological, or psychiatric treatment. In case additional psychological or psychiatric treatment is initiated during the 2 months after surgery in the randomized groups, a sensitivity analysis will be applied to assess potential confounding by this subgroup.
Provisions for post-trial care {30}
The AMAZONE trial is qualified as a low risk study.
Harm from e-CBT or EDU is not expected.
Main study parameter/endpoint
The main outcome of the study is the prevalence of significant PPBCT in the operated area at six months, defined as a score ≥ 3 on an 11-point numeric rating scale (NRS).
Secondary study parameters/endpoints
Secondary outcomes of the study are pain intensity scores (intercept and slope) during the first postoperative week (NRS, pain diary), cessation of postoperative opioid use (no. of days), and PPBCT prevalence after 12 months. Mean PPBCT intensity in the operated area (NRS), the presence of neuropathic pain (Douleur Neuropathique 4, DN4) [59], and pain interference (Brief Pain Inventory, BPI) [60] are measured at 2, 6, and 12 months.
(Pain) sensitivity is assessed with quantitative sensory testing (QST) before and at 6 months post-surgery. QST is measured at the bilateral pre-axillary dermatomes Th 3. In patients with persistent postoperative pain in the operated area, the 6-month QST measurement is performed in the painful and the corresponding contralateral location. QST measurements are performed according to the protocol developed by the DFNS [61]. Conditioned pain modulation (CPM) is measured before surgery, at 1 week postoperatively, and at 6 months post-surgery, according to the suggestions of Yarnitsky et al. [62]. The chosen CPM algorithm compares three repetitions of pressure pain threshold measurements in the thenar contralateral to the operated side, before and immediately after a cold pressor test delivered to the other hand with ice water.
In addition, the psychological parameters anxiety, fear of recurrence, catastrophizing, and depression are assessed, together with cancer-related quality of life, before and up until 12 months after surgery.
Other study parameters
Other parameters including clinical and psychosocial patient characteristics, breast-cancer treatment related variables, and compliance with the psychological intervention are assessed at the time points shown in Table 2.
Participant timeline {13}
The participant timeline is presented in Fig. 1.
Sample size {14}
The sample size calculation has been performed for the primary outcome, the prevalence of PPBCT (NRS ≥ 3) at 6 months after surgery for breast cancer. The expected overall prevalence of PPBCT is 30%, but the high anxiety/high catastrophizing group, which we will recruit for the randomized trial, has been shown to have a 2.2 times higher prevalence [15,16,69], and we expect that about 30% of all patients will screen positive for anxiety/catastrophizing [16,55,70]. To be on the conservative side for our calculation, and in accordance with Burns and Moric, we estimate that the prevalence of PPBCT will be 50% in the high anxiety/catastrophizing group [71] and also that perioperative CBT decreases PPBCT prevalence by 50%.
For the current study, this would mean a reduction of the prevalence in the e-CBT group from 50 to 25%. We need to include 55 patients per group to obtain 80% power to detect this difference, with an alpha of 0.05. To account for a potential drop-out rate of 20%, we will recruit a total of 138 high anxiety/catastrophizing patients. All patients that are considered to have low anxiety/catastrophizing and thus are not randomized will be asked to participate in the cohort study. We expect to include around 322 low anxiety/catastrophizing patients based on an expected 30/70% ratio of high vs low anxiety/catastrophizing.
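As a rough cross-check of this calculation, the sketch below reproduces the power analysis with Python's statsmodels (the protocol does not state which software was actually used, so this is purely illustrative). Depending on the approximation chosen, it returns roughly 55-58 evaluable patients per group, in line with the 55 per group and 138 total reported above.

```python
# Approximate reproduction of the AMAZONE sample-size calculation.
# Assumptions: two-sided alpha = 0.05, power = 0.80, expected PPBCT
# prevalence 50% (EDU) vs 25% (e-CBT), 20% drop-out. The software used
# for the original calculation is not stated; statsmodels is illustrative.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_edu, p_ecbt = 0.50, 0.25                    # expected PPBCT prevalence per arm
h = proportion_effectsize(p_edu, p_ecbt)      # Cohen's h (arcsine effect size)

n_per_group = NormalIndPower().solve_power(effect_size=h, alpha=0.05,
                                           power=0.80, ratio=1.0)
n_randomized = 2 * n_per_group / (1 - 0.20)   # inflate for 20% drop-out

print(f"Cohen's h = {h:.3f}")
print(f"Evaluable patients per group: ~{n_per_group:.0f}")
print(f"Patients to randomize (with drop-out): ~{n_randomized:.0f}")
```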
Recruitment {15}
Patients of participating hospitals with medium to high volumes of breast cancer treatment are offered participation by their treating physicians. In addition, information about AMAZONE is shared on a website [64], on websites offering information about breast cancer treatment in the Netherlands, i.e., the Dutch Breast Cancer Society (BVN), the Dutch Cancer Society (KWF), and the Integraal Kankercentrum Nederland (IKNL), and also via social media. Hereby we aim to improve recruitment through patient empowerment.
Assignment of interventions: allocation
Sequence generation {16a}
Allocation is performed using electronic stratified randomization with random permuted block sizes. Stratification factors are axillary dissection and center of inclusion.
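For illustration only, the sketch below generates a permuted-block allocation sequence for a single stratum (for example, one center with axillary dissection planned). The block sizes, the seed, and the use of Python are assumptions; in the trial itself the sequence is produced by Castor EDC.

```python
# Illustrative permuted-block randomization within one stratum (e.g., one
# center, axillary dissection planned). Block sizes of 2 and 4 and the seed
# are assumptions; the trial's real sequence is generated by Castor EDC.
import random

def permuted_block_sequence(n_patients, arms=("e-CBT", "EDU"),
                            block_sizes=(2, 4), seed=None):
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_patients:
        block_size = rng.choice(block_sizes)            # random block size
        block = list(arms) * (block_size // len(arms))  # balanced block
        rng.shuffle(block)                              # permute within block
        sequence.extend(block)
    return sequence[:n_patients]

print(permuted_block_sequence(10, seed=1))
```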
Concealment mechanism {16b}
Patients in the e-CBT and EDU groups are blinded to the type of intervention. Patients in the TAU group are not blinded, as they knowingly receive treatment as usual.
Implementation {16c}
Patients are enrolled by the treating physician at the participating centers. The allocation sequence is computercontrolled and generated by Castor ® EDC.
Assignment of interventions: blinding
Who will be blinded {17a}
Patients are only informed about allocation to either treatment (randomization) or control (TAU). Patients who are randomized will not be informed about the type of intervention (e-CBT or EDU). The local AMAZONE teams can view the treatment allocation in the online patient administration program. Outcome analysis is performed by an independent statistician who is blinded to the type of intervention.
Procedure for unblinding if needed {17b}
The need for unblinding of participants is very unlikely, as side effects of CBT are currently not described and EDU does not contain any information or advice that cannot be found elsewhere.
Data collection and management
Plans for assessment and collection of outcomes {18a}
Detailed information on outcomes is presented in the "Outcomes {12}" section. All investigators are working in accordance with GCP guidelines and are trained in the execution of clinical studies. The teams of the centers assessing pain sensitivity by QST/CPM were trained to perform the standardized QST protocol according to the regulations of the German Research Network Neuropathic pain (DFNS) by the accredited location at the Center for Biomedicine and Medical Technology Mannheim (CBTM), Ruprecht-Karls-University Heidelberg, Medical Faculty Mannheim.
To maintain the quality of the measurements, refresher trainings are organized on a regular basis.
Preoperative risk profiling and all patient-reported outcomes (questionnaires), as well as medical information and SAE reporting, are collected online with the cloud-based clinical data management platform Castor ® EDC (www.castoredc.com).
The members of the local AMAZONE study teams are trained to use the program for study flow logistics (Ldot©), Castor©, screening forms and other questionnaires that have to be completed at the respective time points.
Patients are instructed to use the online software by the local investigators during the inclusion procedure.
Plans to promote participant retention and complete follow-up {18b}
At the assessment time points, participants automatically receive reminders by email and text messages to fill in the online questionnaires. Members of the local study team can contact the participant if questionnaires are missing or incomplete. In case online questionnaires were not filled in at one time point, patients still receive notifications at the subsequent follow-up assessment times. All collected data will be used for final analyses, even if a participant does not complete follow-up.
Data management {19}
Data handling is according to the Dutch General Data Protection Regulation (AVG) and the Dutch Act on Implementation of the General Data Protection Regulation (UAVG). Data are retrieved and stored according to GCP guidelines in a coded fashion in a protected database (Castor ® EDC). Subjects will receive a unique sequential study code that does not include any personal information. The coding key will be password protected and kept in each participating hospital, only accessible by the local study team.
An independent quality officer will monitor the study data according to GCP practice. Monitoring encompasses the verification of informed consent, inclusion and exclusion criteria, source data of the clinical parameters and (S)AE reporting, and will take place at the initiation of the study, after inclusion of 10 patients at a site and after inclusion of the last patient.
Confidentiality {27}
Information about potential and enrolled participants is documented in a local trial master file (TMF) that is accessible only to the local AMAZONE team. Signed consent forms are collected in a local file. The forms are saved for 15 years after the end of the trial at the study centers.
Plans for collection, laboratory evaluation and storage of biological specimens for genetic or molecular analysis in this trial/future use {33}
See the "Additional consent provisions for collection and use of participant data and biological specimens {26b}"; there will be no biological specimens collected.
Statistical methods for primary and secondary outcomes {20a}
All analyses will be performed according to the intention-to-treat principle. The main analyses focus on the comparison of the e-CBT and education (EDU) group. Exploratory comparisons concerning prevalence of PPBCT, acute postoperative pain, and pain-sensitivity with the no-intervention (low fear) group will also be made, using similar statistical techniques as described below for the two randomized groups.
Primary study parameter(s)
Group differences concerning the prevalence of PPBCT at 6 months follow-up will be reported stratified by treatment allocation. Comparisons between e-CBT and EDU will be made using logistic regression analysis, adjusted for the variables used to stratify randomization. The odds ratio (OR) including 95% CI and p-value will be reported.
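A minimal sketch of this primary analysis, assuming the data are exported as one row per randomized patient (all column and file names below are hypothetical), could look as follows in Python with statsmodels:

```python
# Hypothetical analysis data set: one row per randomized patient with
# columns 'ppbct_6m' (1 = NRS >= 3 at 6 months), 'ecbt' (1 = e-CBT,
# 0 = EDU), and the stratification variables used at randomization.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("amazone_primary.csv")                 # hypothetical file

model = smf.logit("ppbct_6m ~ ecbt + axillary_dissection + C(center)",
                  data=df).fit()

odds_ratio = np.exp(model.params["ecbt"])
ci_low, ci_high = np.exp(model.conf_int().loc["ecbt"])
print(f"OR (e-CBT vs EDU) = {odds_ratio:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f}), p = {model.pvalues['ecbt']:.3f}")
```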
Secondary study parameter(s)
The mean daily postoperative pain scores over the 1-week period will be compared between the groups using linear mixed-effects regression with a random intercept and slope to account for multiple measurements over time within patients. Covariance between random effects will be estimated using an unstructured covariance matrix. Correlation between measurements over time will be modeled using a first order autoregressive structure. Both differences in average pain level over time and differences in trend across time will be assessed.
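The sketch below illustrates this model with statsmodels' MixedLM, using hypothetical column names. Note that MixedLM covers the random intercept and slope but does not provide the first-order autoregressive residual structure described here; that component would typically be fitted in software such as R's nlme.

```python
# Sketch of the daily pain-score comparison over the first postoperative
# week. Column and file names are hypothetical. statsmodels' MixedLM fits
# the random intercept and slope; the AR(1) residual correlation described
# in the protocol is not available here and would typically be fitted in R
# (e.g., nlme::lme with correlation = corAR1()).
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("acute_pain_diary_long.csv")      # hypothetical file
# columns: patient_id, day (0-7), nrs, ecbt (1 = e-CBT, 0 = EDU)

fit = smf.mixedlm("nrs ~ ecbt * day", data=long_df,
                  groups=long_df["patient_id"],
                  re_formula="~day").fit()
print(fit.summary())  # 'ecbt' = difference in level, 'ecbt:day' = difference in slope
```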
Time to opioid cessation will be analyzed with the Kaplan-Meier method and compared using the log-rank test.
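A short illustration of this analysis with the Python lifelines package (the actual analysis software is not specified in the protocol; column and file names are hypothetical):

```python
# Sketch of the time-to-opioid-cessation analysis with the lifelines
# package (an assumption; the protocol does not name the software).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("opioid_cessation.csv")   # days_to_cessation, ceased, ecbt

ecbt = df[df["ecbt"] == 1]
edu = df[df["ecbt"] == 0]

kmf = KaplanMeierFitter()
for label, grp in (("e-CBT", ecbt), ("EDU", edu)):
    kmf.fit(grp["days_to_cessation"], event_observed=grp["ceased"], label=label)
    print(label, "median days to cessation:", kmf.median_survival_time_)

result = logrank_test(ecbt["days_to_cessation"], edu["days_to_cessation"],
                      event_observed_A=ecbt["ceased"],
                      event_observed_B=edu["ceased"])
print(f"log-rank p = {result.p_value:.3f}")
```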
Mean pain intensity and pain interference at 2, 6, and 12 months will be compared using linear mixed-effects regression with regards to group differences and time course.
Quality of life at 6 and 12 months will be compared between groups using the independent-samples t-test.
The effect of surgery on QST and CPM parameters will be analyzed by paired-samples t-tests comparing the pre-surgical results with the 1-week and 6-month post-surgical results. The intervention effect on QST and CPM will be assessed with linear mixed-effects regression at the three time points.
Intervention effects on pain catastrophizing, anxiety, depression, and fear of cancer recurrence at two months will be evaluated using the independent-samples t-test. With linear mixed-effects regression, differences in level and slope are compared between e-CBT and EDU, taking all longitudinal measurements into account.
Interim analyses {21b}
After randomization of 50 patients, the effect size for the primary outcome will be computed to assess whether the assumptions made for the sample size calculation were correct. As the responsible statistician is not authorized to decide about the progress of the study, but only to advise about the sample size, the interim analysis will have no effect on conclusions about effectiveness, but it may be used to recompute the necessary sample size.
Methods for additional analyses (e.g., subgroup analyses) {20b}
Additional analyses are not planned.
Methods in analysis to handle protocol non-adherence and any statistical methods to handle missing data {20c}
In case of missing data in more than 5% of patients, multiple imputation with fully conditional specification will be used to impute the dataset. The number of imputations will be set to the percentage of incomplete patients, and predictive mean matching will be used to draw values to be imputed from selected donors. The percentage of missing values per variable of interest will be presented as count and percentage.
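The sketch below illustrates a chained-equations (fully conditional specification) imputation with statsmodels' MICE implementation, whose imputer draws imputed values by predictive mean matching. The outcome, covariates, file name, and the choice of 20 imputations are placeholders; per the protocol, the number of imputations would be set to the percentage of incomplete patients.

```python
# Sketch of multiple imputation by chained equations (fully conditional
# specification) with statsmodels. MICEData imputes missing values by
# predictive mean matching; 20 imputations is a placeholder (the protocol
# sets the number of imputations to the percentage of incomplete patients).
# Outcome, covariates, and file name are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

df = pd.read_csv("amazone_analysis_set.csv")
imp = MICEData(df)                                   # chained-equations imputer

mi = MICE("pain_intensity_6m ~ ecbt + axillary_dissection", sm.OLS, imp)
pooled = mi.fit(n_burnin=10, n_imputations=20)       # Rubin's-rules pooling
print(pooled.summary())
```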
Plans to give access to the full protocol, participant level-data and statistical code {31c}
The trial protocol, anonymized trial data, and statistical codes are available upon request from the corresponding author.
Composition of the coordinating center and trial steering committee {5d}
The study is coordinated by the AMAZONE study group of the Maastricht University and the Maastricht University Medical Center+. Members of the study group also provide the coordination of centers, recruitment, and follow-up on a day-to-day basis. The coordinating group meets at least weekly. Monthly meetings of the coordinating group and the recruiting centers are scheduled, and a hotline for practical issues is available. Data management support is provided by the center for data and information management at the Faculty of Health, Medicine and Life Sciences of the Maastricht University (MEMIC).
Composition of the data monitoring committee, its role and reporting structure {21a}
A DMC was not deemed necessary because the AMAZONE e-CBT and EDU interventions are low-risk interventions.
Adverse event reporting and harms {22}
The investigator reports all SAEs for which the assumption can be made that they are related to the behavioral intervention (i.e., e-CBT or EDU) or to the procedure(s) the participating subjects undergo (QST/CPM measurements). SAEs are reported within the legal timelines (7/15 days) to the sponsor without undue delay after obtaining knowledge of the events. Exempt from expedited reporting are SAEs that are known for the indication and treatment of breast cancer, or other SAEs for which the assumption cannot be made that they are related to the study intervention. These non-related SAEs do not have to be reported on an expedited basis but will be fully documented on the SAE eCRF page.
The sponsor reports the SAEs through the web portal ToetsingOnline to the accredited METC that approved the protocol, according to the timelines required by Dutch law.
Frequency and plans for auditing trial conduct {23}
An independent auditor (quality officer) will monitor the trial conduct and the accuracy of data collection according to the regulations described under Good Clinical Practice (GCP). The quality officer is independent of the sponsor and free from competing interests. In particular, the conduct of the informed consent procedure, the application of inclusion and exclusion criteria, and the quality of data collection for the primary endpoints are subject to monitoring. The officer will perform source data verification of the data described in the CRFs to investigate the agreement between source data and study reports. The monitor also evaluates whether (S)AEs are adequately reported within the time frame directed by Dutch law.
Monitoring of the centers will take place at the initiation of the study, after inclusion of 10 patients at a site and after that every 6 months until closing the study.
The AMAZONE trial was rated low risk. Monitoring reports are reviewed by the AMAZONE steering committee and reported to the Ethics Committee if necessary according to the WMO.
Plans for communicating important protocol amendments to relevant parties (e.g., trial participants, ethical committees) {25}
Protocol modifications and amendments are communicated directly to the local PIs and METCs. In addition, recruitment progress, practical information, and news from the trial locations are communicated via the monthly meetings, a monthly newsletter, and the study website [64].
Dissemination plans {31a}
Trial results will be communicated to the scientific community via journals and national and international conferences. As the AMAZONE study intervention might have an immediate impact on the standard of perioperative breast cancer care, special efforts will be made to disseminate the study results to clinicians and health care providers via journals and websites of the national professional associations. As patient empowerment is one of the aims of the AMAZONE study, the study protocol was developed in close cooperation with the Borstkankervereniging Nederland and the Dutch Cancer Society. These organizations also pay special attention to informing patients about the study and its results.
Discussion
In the past decades, breast cancer treatment has evolved into an individualized treatment based on tumor size, receptor status, and mutational status. While cancer-treatment options are discussed extensively, information about persisting pain and psychosocial and physiotherapeutic support is often neglected in the treatment phase. Specialized therapy for these complaints is not structurally offered, and if so, only after patients have developed long-lasting complaints. The same is true for studies investigating the effect of e-health interventions to reduce the physical and psychological burden after cancer treatment [51]. These interventions are usually initiated months or years after the completion of breast cancer treatment.
The AMAZONE study is unique in that the intervention starts before breast cancer surgery with the aim of preventing the development of persisting pain and related disability. Consequently, functioning and quality of life in breast cancer survivors can be improved. A preventive intervention might have a much larger impact than treating physical and psychological symptoms after they have occurred. A preventive intervention may therefore also be more (cost-)effective.
If e-CBT is proven effective by the trial, the AMAZONE application can easily be introduced into daily clinical practice.
"year": 2022,
"sha1": "07ab7100becf124b2884f9284afb97c6a3c91202",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "ebfe2e0be7b69fdd1f1156b2d0fe470b19d4759a",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A tubulin-binding protein that preferentially binds to GDP-tubulin and promotes GTP exchange
α- and β-tubulin form heterodimers, with GTPase activity, that assemble into microtubules. Like other GTPases, the nucleotide-bound state of tubulin heterodimers controls whether the molecules are in a biologically active or inactive state. While α-tubulin in the heterodimer is constitutively bound to GTP, β-tubulin can be bound to either GDP (GDP-tubulin) or GTP (GTP-tubulin). GTP-tubulin hydrolyzes its GTP to GDP following assembly into a microtubule and, upon disassembly, must exchange its bound GDP for GTP to participate in subsequent microtubule polymerization. Tubulin dimers have been shown to exhibit rapid intrinsic nucleotide exchange in vitro, leading to a commonly accepted belief that a tubulin guanine nucleotide exchange factor (GEF) may be unnecessary in cells. Here, we use quantitative binding assays to show that BuGZ, a spindle assembly factor, binds tightly to GDP-tubulin, less tightly to GTP-tubulin, and weakly to microtubules. We further show that BuGZ promotes the incorporation of GTP into tubulin using a nucleotide exchange assay. The discovery of a tubulin GEF suggests a mechanism that may aid rapid microtubule assembly dynamics in cells.
Introduction
Microtubules represent one type of the eukaryotic cytoskeleton required for key cellular functions including transportation of cargos via motor proteins, relaying of mechanical signals to the interphase nucleus, and proper segregation of chromosomes during cell division (Gudimchuk and McIntosh, 2021; Reck-Peterson et al., 2018; Zheng, 2010; Kirby and Lammerding, 2018). Microtubules are assembled from tubulin, a heterodimer of α- and β-tubulin. α-tubulin is constitutively bound to guanosine triphosphate (GTP) at its non-hydrolyzing and non-exchangeable "N-site", whereas β-tubulin binds to and rapidly exchanges both GTP and GDP at its exchangeable "E-site", where GTP can be hydrolyzed into GDP (Jacobs et al., 1974; Brylawski and Caplow, 1983). Tubulin heterodimers with GTP-bound β-tubulin (hereinafter referred to as GTP-tubulin) polymerize into microtubules (Desai and Mitchison, 1997). As the microtubule elongates, longitudinal and lateral interactions of incorporated tubulin dimers activate β-tubulin GTPase activity, stochastically hydrolyzing the bound GTP and forming GDP-tubulin in the microtubule lattice (Shemesh et al., 2023; Nogales et al., 1998a). As a result, the growing ends of microtubules contain GTP-tubulin, called the GTP cap. GTP hydrolysis induces tubulin dimer conformational changes which cause destabilization of the microtubule lattice (Alushin et al., 2014). In the absence of the GTP cap, microtubules transition from growth to shrinkage, termed catastrophe (Horio and Murata, 2014). Microtubule disassembly releases free GDP-tubulin, which must exchange its bound GDP for GTP before it can take part in microtubule polymerization again.
GTPases generally exhibit slow intrinsic rates of nucleotide exchange (t 1/2 > 30 min) and require guanine nucleotide exchange factors (GEF) to accelerate nucleotide dissociation by orders of magnitude for proper biological function (Bischoff and Ponstingl, 1991; Blaise et al., 2021; Marshall et al., 2012; Self and Hall, 1995; Chardin et al., 1993; Killoran and Smith, 2019). Many GEFs have been characterized for different GTPase families, and GEFs within a family share common catalytic domains such as the CDC25 domain for RasGEFs and the Sec7 domain for ArfGEFs (Bos et al., 2007). GEFs between different GTPase families do not share homologous protein sequences, making it difficult to identify novel GEFs from primary structure alone (Koch et al., 2016). Despite the lack of sequence similarity, studies have shown a common mechanism by which GEFs mediate the release of GTPase-bound nucleotides by destabilizing the magnesium ion in the nucleotide binding pocket required for stable nucleotide binding (Vetter and Wittinghofer, 2001; Béraud-Dufour et al., 1998). GEFs bind to GTPases without specificity to their nucleotide states and mediate the exchange of either bound GTP or GDP with nucleotides in solution (Boor et al., 2015; Bos et al., 2007; Bischoff and Ponstingl, 1991). In cells, the 10:1 ratio of intracellular GTP:GDP ensures that the GTP-bound form is the dominant GTPase species when acted upon by a GEF (Traut, 1994). In contrast to the slow measured rates of nucleotide exchange of many GTPases, the in vitro dissociation of GDP from β-tubulin is rapid (t 1/2 = 5 s) (Brylawski and Caplow, 1983). Therefore, it is believed that free tubulin in the cell is able to quickly exchange into the GTP-bound form, leading to the view that a tubulin GEF is not needed. However, there is an 18-fold increase in microtubule turnover in mitosis compared to interphase, representing a considerable increase in demand for GTP-tubulin (Saxton et al., 1984). Additionally, microtubule dynamics have been shown to be diffusion-limited in metaphase Xenopus egg extracts, and energy consumption outpaces energy production during cell division (Galichon et al., 2024; Geisterfer et al., 2020). In this context, the more rapid polymerization and depolymerization of microtubules may necessitate tubulin GEF activity to sustain elevated microtubule turnover during spindle assembly.
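As a brief worked comparison of the half-lives quoted above (included here purely as an illustration), converting them to first-order dissociation rate constants with k_off = ln(2)/t 1/2 shows how much faster tubulin's intrinsic exchange is than that of typical GTPases:

```python
# First-order kinetics: k_off = ln(2) / t_half. A quick comparison of the
# half-lives quoted above illustrates the difference in intrinsic
# nucleotide-release rates (values are the literature figures cited here).
import math

t_half_gtpase_s = 30 * 60      # > 30 min for many GTPases
t_half_tubulin_s = 5           # ~5 s for GDP release from beta-tubulin

k_gtpase = math.log(2) / t_half_gtpase_s      # < ~3.9e-4 s^-1
k_tubulin = math.log(2) / t_half_tubulin_s    # ~0.14 s^-1
print(f"k_off, typical GTPase: < {k_gtpase:.1e} s^-1")
print(f"k_off, GDP-tubulin:    ~ {k_tubulin:.2f} s^-1 "
      f">= {k_tubulin / k_gtpase:.0f}-fold faster")
```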
BuGZ (Bub3-interacting and GLEBS motif-containing protein ZNF207) is a recently discovered protein that binds to tubulin and microtubules (Jiang et al., 2015).In mitosis, BuGZ regulates spindle assembly by promoting microtubule polymerization and proper microtubule-kinetochore attachment by binding and stabilizing Bub3, a spindle assembly checkpoint protein.As a result, reduction of BuGZ results in prometaphase arrest with defects in spindle morphology and chromosome alignment (Jiang et al., 2014;Toledo et al., 2014).BuGZ is evolutionarily conserved in animals and plants, and the plant BuGZ may also regulate microtubule-kinetochore interactions as it contains a predicted Bub3 binding domain (Chin et al., 2022).BuGZ's N-terminus, containing two C2H2 zinc fingers, binds to tubulin, the mitotic kinase Aurora A, and microtubules, while BuGZ undergoes liquid-liquid phase separation through its intrinsically disordered C-terminal region.In combination, the respective N-and C-terminal domain characteristics lead to enrichment of tubulin and Aurora A in BuGZ condensates, which in turn promote microtubule polymerization and Aurora A kinase activity, respectively (Huang et al., 2017;Jiang et al., 2015).The multivalent binding of phase separated BuGZ also leads to microtubule bundling (Jiang et al., 2015).BuGZ's ability to promote microtubule polymerization by concentrating tubulin within BuGZ condensates is similar to a behavior observed in other phase-separating microtubule regulators, such as TPX2 and centrosome components, indicating that phase separation may play a role in mitotic spindle formation (King and Petry, 2020;Woodruff et al., 2017).
Through quantitatively analyzing BuGZ's binding affinity for tubulin and microtubules, we found that BuGZ exhibits 10-fold higher affinity for GDP-tubulin than for GTP-tubulin, and a 210-fold higher affinity for GDP-tubulin than for microtubules. We further show that BuGZ promotes nucleotide exchange, converting GDP-tubulin to GTP-tubulin. We will discuss the implications of our findings for microtubule assembly dynamics.
Results & Discussion
BuGZ exhibits weak binding to microtubules
Previous studies have shown that BuGZ binds to microtubules and tubulin dimers and that BuGZ phase separation leads tubulin to concentrate inside the condensates, which in turn promotes microtubule polymerization. BuGZ droplets formed along microtubules also cause microtubule bundling (Jiang et al., 2015). However, there is an incomplete understanding of the interplay between BuGZ's interactions with tubulin and microtubules given a lack of quantitative binding data. Therefore, we performed quantitative assays to determine the binding affinity between BuGZ and microtubules.
To measure equilibrium binding of BuGZ to microtubules, we used microtubules polymerized with a 1:10 mixture of biotin-labeled:unlabeled GTP-tubulin such that ≥1 biotin-tubulin is present per 12-protofilament turn, facilitating retrieval of the microtubules with paramagnetic streptavidin beads. Taxol was added to stabilize the polymerized microtubules. Given that taxol-stabilized microtubules remain intact at low temperatures, we performed all equilibrium binding experiments at 4 ˚C to suppress BuGZ's tendency to undergo phase separation (Schiff et al., 1979; Jiang et al., 2015). Under these conditions, we mixed varying concentrations of taxol-stabilized microtubules (0 µM, 5.5 µM, 11 µM, 22 µM, 44 µM, 110 µM) with streptavidin-coated paramagnetic beads. Purified X. laevis BuGZ was added to a final concentration of 1.19 µM and the reaction was allowed to reach equilibrium. Then, microtubule-bound BuGZ was pulled down by a magnet. The supernatant was collected and BuGZ depletion was measured by quantitative immunoblotting. The fraction of BuGZ bound to microtubules was plotted against microtubule concentration and then fitted with a rectangular hyperbola (see Materials and Methods). The equilibrium dissociation constant (K D) was determined from the fit. The K D of BuGZ for taxol-stabilized microtubules is 9.45 µM (95% CI: 7.72 µM-11.50 µM) (Figure 1A). Since a similar fraction of BuGZ was bound to 10 µM unsheared or sheared microtubules, BuGZ does not show an obvious preference for microtubule ends or the lattice under our assay conditions (Figure 1A, 1B).
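The rectangular-hyperbola fit used to extract K D can be reproduced in a few lines; the sketch below uses Python's scipy (an assumption, as the fitting software is described only in the Materials and Methods). The microtubule concentrations are those listed above, while the bound fractions are hypothetical placeholders chosen only to illustrate the procedure.

```python
# Nonlinear least-squares fit of fraction bound versus microtubule
# concentration to a rectangular hyperbola, f = [MT] / (K_D + [MT]).
# The concentrations are those listed in the text; the bound fractions
# are hypothetical placeholders used only to illustrate the procedure.
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(mt_uM, kd_uM):
    return mt_uM / (kd_uM + mt_uM)

mt_uM = np.array([0.0, 5.5, 11.0, 22.0, 44.0, 110.0])
frac_bound = np.array([0.0, 0.35, 0.52, 0.68, 0.82, 0.92])   # hypothetical

popt, pcov = curve_fit(hyperbola, mt_uM, frac_bound, p0=[10.0])
kd, kd_se = popt[0], float(np.sqrt(np.diag(pcov))[0])
print(f"K_D ~ {kd:.1f} uM (SE {kd_se:.1f} uM)")
```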
BuGZ preferentially binds GDP-tubulin over GTP-tubulin
The findings above show that the affinity of BuGZ for microtubules is relatively weak compared to other microtubule-associated proteins whose binding affinities have been reported (Folker et al., 2005; Spittle et al., 2000; Butner and Kirschner, 1991; Seeger and Rice, 2010). Since BuGZ also binds to tubulin and BuGZ condensates concentrate tubulin dimers, we next measured how well BuGZ binds free tubulin dimers (Jiang et al., 2015). We first investigated BuGZ's binding affinity for GTP-tubulin. Streptavidin-coated paramagnetic beads were incubated with varying concentrations (0 µM, 0.1 µM, 1 µM, 2 µM, 5 µM, 10 µM, 20 µM) of biotinylated GTP-tubulin, and then BuGZ was added to a final concentration of 1.19 µM. The reaction was performed at 4 °C in the presence of nocodazole to prevent the assembly of microtubules. The beads were retrieved via magnetic separation, and BuGZ equilibrium binding was assessed by measuring BuGZ depletion from the supernatant via quantitative immunoblotting. The K_D for BuGZ binding to GTP-tubulin was determined to be 477 nM (95% CI: 275.1 nM - 757.5 nM) (Figure 2A). The 20-fold stronger binding of BuGZ to GTP-tubulin than to microtubules prompted us to measure the binding affinity of BuGZ for GDP-tubulin using the same assay. We found that the K_D for BuGZ binding to GDP-tubulin was 45.3 nM (95% CI: 21.7 nM - 78.5 nM), representing a 10-fold stronger affinity than for GTP-tubulin and a 210-fold stronger affinity than for taxol-microtubules (Figure 2B). Control experiments using mEGFP in solution with bead-bound tubulin showed no difference between GDP and GTP conditions (Figure 2C).
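The fold differences quoted above follow directly from the dissociation constants reported in the text:

```latex
\frac{K_D^{\text{GTP-tubulin}}}{K_D^{\text{GDP-tubulin}}} = \frac{477\ \text{nM}}{45.3\ \text{nM}} \approx 10,
\qquad
\frac{K_D^{\text{taxol-MT}}}{K_D^{\text{GDP-tubulin}}} = \frac{9450\ \text{nM}}{45.3\ \text{nM}} \approx 210,
\qquad
\frac{K_D^{\text{taxol-MT}}}{K_D^{\text{GTP-tubulin}}} = \frac{9450\ \text{nM}}{477\ \text{nM}} \approx 20
```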
The N-terminal zinc finger-containing domain is responsible for the preferential binding of BuGZ to GDP-tubulin
To further investigate the ability of BuGZ to recognize the nucleotide state of tubulin, we used AlphaFold to generate a predicted protein structure of BuGZ (Fig. 3A) (Jumper et al., 2021). Since most of the sequence following the N-terminal zinc finger motifs of BuGZ is predicted to be intrinsically disordered, the structure prediction beyond the N-terminal region is considered low confidence by AlphaFold (Fig. 3A). Next, we used the multimer model of AlphaFold to predict the structure of BuGZ in complex with a tubulin dimer (Fig. 3B). AlphaFold correctly predicted the tubulin dimer structure as previously determined by cryo-electron microscopy (Nogales et al., 1998b). AlphaFold also predicted that the C-terminal GLEBS motif of BuGZ, which contains a short α-helix, is positioned close to the interface of α-tubulin and β-tubulin, implying that the GLEBS domain could interact with tubulin (Fig. 3B). Consistent with our previously reported findings, AlphaFold also predicted an interaction between the N-terminus of BuGZ and the tubulin dimer (Jiang et al., 2014, 2015). Based on these analyses, we constructed and purified two BuGZ mutants: BuGZ-ΔGLEBS, which lacks the 32 amino acids containing the GLEBS motif in Proline-Rich Region 2 (PRR2), and BuGZ-NTD, comprising the N-terminal 92 amino acids of BuGZ that contain the two zinc fingers (Figure 3C).
Using the same binding assay as described above, we measured the K_D for the binding of each BuGZ mutant to GTP-tubulin and GDP-tubulin. We found that BuGZ-ΔGLEBS binds to GTP-tubulin and GDP-tubulin with K_D values of 408.1 nM (95% CI: 156.3 nM - 834.4 nM) and 54.2 nM (95% CI: 28.6 nM - 90.6 nM), respectively (Figure 3D). Since these affinities are nearly identical to those measured for wild-type BuGZ, we conclude that the GLEBS motif is not involved in BuGZ binding to tubulin. For BuGZ-NTD, the K_D for GTP-tubulin and GDP-tubulin binding was measured as 7.14 µM (95% CI: 5.53 µM - 9.17 µM) and 1.47 µM (95% CI: 1.18 µM - 1.82 µM), respectively (Figure 3F). This indicates that BuGZ's N-terminal 92 amino acids containing the two zinc fingers bind preferentially to GDP-tubulin over GTP-tubulin.
The reduced binding affinity of BuGZ-NTD for tubulin, relative to full-length BuGZ, indicates that BuGZ's C-terminal intrinsically disordered region plays a role in BuGZ-tubulin interactions. To investigate this finding further, we performed binding assays using a previously reported BuGZ mutant, BuGZ-13S (Jiang et al., 2015). This mutant is identical to wild-type BuGZ except that 13 aromatic amino acid residues (phenylalanine or tyrosine) are mutated to serine. While retaining BuGZ's C-terminal intrinsically disordered region, the loss of aromatic residues greatly reduces phase separation of BuGZ (Jiang et al., 2015). We found that the K_D values for BuGZ-13S binding to GTP-tubulin and GDP-tubulin are 2.75 µM (95% CI: 1.97 µM - 3.78 µM) and 710 nM (95% CI: 533 nM - 926 nM), respectively (Figure 3E). The stronger binding of BuGZ-13S to GTP- and GDP-tubulin, relative to BuGZ-NTD, shows that the C-terminal intrinsically disordered region contributes to BuGZ-tubulin binding, with the N-terminal domain of BuGZ contributing the interaction bias toward GDP-tubulin.
Proteins that form condensates exhibit the propensity to form small oligomers even under conditions that disfavor phase separation (Chattaraj et al., 2024; Martin et al., 2021). Although our quantitative measurements of interaction affinities are performed at 4 °C, which suppresses the formation of BuGZ condensates visible under light microscopy, small BuGZ oligomers may still form. These oligomers would lead to an increased depletion of BuGZ by the biotinylated tubulin bound on beads in our assay, which could explain why the phase separation property of BuGZ contributes to increased tubulin binding (Figure 3H-I).
BuGZ promotes GTP exchange into GDP-tubulin
Nucleotide dissociation from tubulin is known to be rapid (Brylawski and Caplow, 1983). Since cells have a 10-fold higher concentration of GTP than GDP, it is thought that most GDP-tubulin produced upon microtubule disassembly is rapidly converted into GTP-tubulin (Traut, 1994). However, in cellular states with increased microtubule polymerization and depolymerization, such as mitosis, where microtubule turnover has been measured to be 18-fold faster than in interphase, tubulin may require a nucleotide exchange factor to sustain the higher levels of microtubule dynamicity, as small perturbations can cause defects during spindle assembly and cell division (Saxton et al., 1984; Vicente and Wordeman, 2019). The ten-fold binding affinity bias of BuGZ toward GDP-tubulin over GTP-tubulin prompted us to investigate whether such preferential binding could facilitate the conversion of GDP-tubulin to GTP-tubulin. To test this possibility, we compared the incorporation of GTP α-32P into GDP-tubulin in the presence of BuGZ at room temperature, a condition in which BuGZ would undergo increased phase separation relative to our binding assays performed at 4 °C. 1 mM GTP containing 3 nM GTP α-32P was added to a solution of 100 µM GDP-tubulin and 1 mM GDP to achieve a 1:1 ratio of GTP:GDP. Then, BuGZ or BSA was added to a final concentration of 1.19 µM. We hypothesized that if BuGZ preferentially promotes nucleotide exchange on GDP-tubulin because of its stronger binding affinity, the presence of BuGZ would result in higher GTP incorporation into tubulin, even in the presence of a 1:1 ratio of available GTP and GDP. At equilibrium, nucleotides were cross-linked to tubulin via ultraviolet radiation and unbound nucleotides were removed by size-exclusion chromatography. The eluate containing tubulin and BuGZ was assessed by scintillation counting (Figure 4A). In the presence of BuGZ, GTP incorporation, measured by the exchange of GDP for 32P-labeled GTP, increased by 63.9% (p = 0.03) relative to the control condition (Figure 4B). This shows that BuGZ acts as a tubulin GEF to promote GTP incorporation into tubulin.
Next, we tested the three BuGZ mutants for nucleotide exchange activity using the same assay. At equilibrium, BuGZ-ΔGLEBS showed a significant increase of 57.4% (p = 0.03) in GTP incorporation relative to controls (Figure 4B). Conversely, BuGZ-13S and BuGZ-NTD showed statistically insignificant increases in GTP incorporation of 24.6% (p > 0.05) and 7.8% (p > 0.05), respectively (Figure 4B). Therefore, the ability of BuGZ to promote GTP exchange on GDP-tubulin relies on both the N-terminal tubulin-binding domain and BuGZ phase separation mediated by the unstructured C-terminal region.
Discussion
Although extensive efforts have led to the discovery of many microtubule-associated proteins (MAPs), there have been limited studies on their interactions with tubulin. There has also been no reported effort to identify a tubulin GEF, possibly because of the observed rapid dissociation of GDP or GTP from tubulin. Here, we show that BuGZ is a tubulin-binding protein with a 10-fold stronger affinity for GDP-tubulin than for GTP-tubulin and that it exhibits GEF activity toward tubulin. Studies have shown that some MAPs exhibit preferential binding to microtubule segments containing either GTP-tubulin, such as those present at the plus end of polymerizing microtubules, or GDP-tubulin found in the microtubule lattice (Goodson and Jonasson, 2018). Although these studies suggest that MAPs may preferentially bind to GTP- or GDP-bound tubulin, nucleotide-specific MAP-tubulin interactions have only been reported for two MAPs, CLIP-170 and XMAP215 (Ayaz et al., 2012; Folker et al., 2005). In contrast to BuGZ, which binds GDP-tubulin at K_D = 45.3 nM and GTP-tubulin at K_D = 477 nM via its zinc finger-containing N-terminal domain (Figure 2A-B, 5C), CLIP-170 and a subdomain of XMAP215 called TOG exhibit similar binding affinities for both GDP- and GTP-tubulin, at K_D ≈ 45 nM and K_D ≈ 235 nM, respectively (Folker et al., 2005; Ayaz et al., 2012). Our finding that BuGZ is the first tubulin-binding protein shown to interact preferentially with GDP-tubulin over GTP-tubulin reveals the need to further quantitatively study the ability of various MAPs to bind tubulin bound to GDP or GTP (Gache et al., 2005; Yu et al., 2016). Further characterization in this manner may uncover an underappreciated regulatory role of MAPs in modulating microtubule assembly and function in different cellular contexts.
Our finding that BuGZ is a tubulin GEF is unexpected because of the prevailing idea that intracellular GTP is abundant and that the GTP:GDP ratio in cells is high (Traut, 1994). Coupled with the rapid intrinsic dissociation of nucleotide from tubulin, this should allow GDP-tubulin to be quickly converted into GTP-tubulin (Brylawski and Caplow, 1983). However, a recent study shows that the availability of GTP in the cell is rate-limiting in nucleocytoplasmic transport, indicating that GTP may become limiting when energy demand is high (Scott et al., 2024). Mitosis is an energy-demanding process, as it involves structural reorganization of the whole cell. Moreover, studies have shown that microtubule growth is diffusion-limited, such that increasing the density of growing microtubule tips decreases microtubule growth, whereas increasing the local concentration of GTP-tubulin dimers increases it (Geisterfer et al., 2020; Odde, 1997; Geel et al., 2020). In mitosis, the rapid growth and shrinkage of microtubule plus and minus ends, respectively, in the spindle could locally deplete GTP-tubulin. By binding the GDP-tubulin generated from spindle microtubule depolymerization, BuGZ could increase the rate of GTP-tubulin production; subsequently, its 10-fold lower affinity for GTP-tubulin would mediate release of the tubulin after exchange, supporting the highly dynamic spindle microtubules. How BuGZ functions as a GEF remains unclear, but its preferential binding to GDP-tubulin suggests that it may not facilitate equal nucleotide exchange on GDP- and GTP-tubulin, as seen for other GEFs, but instead specifically facilitates nucleotide exchange on GDP-tubulin. Since the zinc finger domain of BuGZ can be produced in bacteria in high quantity and purity, it should be possible to solve the structure of the BuGZ-tubulin complex, which should shed light on how BuGZ functions as a tubulin GEF.
Materials and Methods
Cloning, protein expression, and protein purification
Xenopus laevis BuGZ cDNA, codon-optimized for Spodoptera frugiperda Sf9 expression, was cloned into a baculovirus vector (Gibco pFastBac Dual Expression Vector) via Gibson Assembly (NEB Gibson Assembly Master Mix). In the assembled plasmid, BuGZ (NCBI: NM_001086855.1) is tagged on its amino terminus with four leader amino acids, a 6x His tag, a GS linker, and a TEV protease cleavage site (full tag sequence: MSYY-HHHHHH-GSG4SG4S-ENLYFQG). Vectors were then transfected into Sf9 cells (Gibco Sf9 cells in Sf-900 III SFM), using Gibco Cellfectin II Reagent according to the ThermoFisher Bac-to-Bac Baculovirus Expression System instructions, to generate P0 viral stock. After 3 rounds of viral amplification, the resulting P3 virus was used to infect Sf9 cells for protein expression. 72 hrs post-infection, cells were collected into 100 mL pellets via centrifugation at 1,000 g. Pellets were snap-frozen in liquid nitrogen and stored at -80 °C. For purification, pellets were thawed and resuspended in lysis buffer (LyB): 20 mM KH2PO4, 500 mM NaCl, 25 mM Imidazole, 1 mM MgCl2, 1 mM β-mercaptoethanol, 2.5% Glycerol, 0.01% Triton X-100, Roche cOmplete EDTA-free Protease Inhibitor Cocktail, pH 7.4. The cell suspension was sonicated using a Misonix Sonicator 3000 (2 min total process time, 30 s on, 30 s off, power level 2.0) on ice and then clarified by centrifugation at 15,000 × g for 30 min. Clarified lysate was then filtered through a MilliporeSigma Millex-GP 0.22 μm PES membrane filter unit. Lysate was loaded onto a 1 mL HisTrap HP column (Cytiva) pre-equilibrated in Buffer A (same as LyB without protease inhibitor cocktail) and run on an FPLC (Cytiva Äkta pure). The column was then washed with 1X lysate volume of 85% Buffer A/15% Buffer B (20 mM KH2PO4, 150 mM NaCl, 300 mM Imidazole, 1 mM MgCl2, 1 mM β-mercaptoethanol, 2.5% Glycerol, 0.01% Triton X-100, pH 7.4). Bound protein was eluted with a 30 mL linear gradient from 15% to 100% Buffer B. BuGZ-containing fractions were identified by running fractions on SDS-PAGE and staining with Invitrogen SimplyBlue SafeStain (Invitrogen LC6060). BuGZ-positive fractions were pooled and concentrated to < 500 μL using Millipore Amicon Ultra centrifugation units with a 30 kDa MW cutoff (10 kDa MW cutoff for BuGZ-NTD). Contaminating proteins were removed from the concentrated BuGZ protein solution by gel filtration using a Superdex 200 Increase 10/300 GL column equilibrated in Buffer C (80 mM PIPES pH 6.8, 100 mM KCl, 1 mM MgCl2, 50 mM Sucrose, 1 mM EGTA). Fractions of the eluate containing BuGZ were again identified via SDS-PAGE, pooled, concentrated using Millipore Amicon Ultra centrifugation units, and snap-frozen in liquid nitrogen for storage at -80 °C. The purity and concentration of BuGZ were determined by comparing in-gel SimplyBlue SafeStain staining to staining of known amounts of bovine serum albumin.
Measurement of BuGZ binding to Taxol-stabilized microtubules
10 mg/mL tubulin (Cytoskeleton T240) and 1 mg/mL biotin-tubulin (Cytoskeleton T333P) were mixed in equal volumes in a buffer containing BRB80 (80 mM PIPES, 1 mM MgCl2, 1 mM EGTA, pH 6.8) + 1 mM GTP. The tubulin solution was diluted 1:1 in a solution containing BRB80, 2 mM DTT, 2 mM GTP, 20 μM taxol (taxol buffer 1) and incubated for 20 min at 37 °C. Pierce Streptavidin Magnetic Beads (ThermoFisher Scientific 88816) were equilibrated in a buffer containing BRB80, 1 mM DTT, 1 mM GTP, 10 μM taxol (taxol buffer 2). The microtubule assembly mixture was added to 20 μL streptavidin-coated bead slurry and incubated for 20 min at 4 °C, then washed 3 times with taxol buffer 2. After the final wash, the beads were resuspended in a 10 μL solution containing taxol buffer 2 and 1.19 μM BuGZ. After 20 min incubation at 4 °C, the beads were collected with a magnet, and the supernatant was removed and mixed with an equal volume of 2X SDS sample buffer for immunoblot analysis. For experiments with sheared microtubules, the taxol-stabilized microtubules were passed through a 30 G syringe needle 10 times immediately prior to addition to streptavidin-coated beads.
Measurement of BuGZ binding to GTP- and GDP-tubulin
Varying volumes of solution containing 1 mg/mL (~10 μM) biotin-tubulin, 200 μM nocodazole, 1 mM GTP or GDP, and BRB80 were added to Pierce Streptavidin Magnetic Beads equilibrated in the same buffer to decorate them with the desired amount of tubulin. After 20 min incubation at 4 °C, beads were washed 3 times in the same buffer and the supernatant was removed. The magnetic beads, now decorated with varying amounts of biotin-tubulin, were resuspended in equal volumes of a solution containing 1.19 μM BuGZ, 200 μM nocodazole, 1 mM GTP or GDP, and BRB80. After 15 min incubation at 4 °C, beads were collected with a magnet and supernatant was retained (mixed 50% v/v with 2X Laemmli buffer) for immunoblot analysis.
Gel electrophoresis and immunoblot analysis
Supernatants from binding experiments were boiled at 95 °C for 10 min and spun down in a benchtop centrifuge at 10,000 g for 3 min. Samples were loaded into an SDS polyacrylamide gel and separated by electrophoresis in a Tris-Glycine buffer (25 mM Tris, 250 mM glycine, 3.5 mM SDS). Following electrophoresis, proteins were transferred onto nitrocellulose membrane (Cytiva Amersham 10600002) for 2 hrs at 4 °C in transfer buffer (50 mM Tris, 125 mM glycine, 3.5 mM SDS, 20% methanol). After transfer, membranes were incubated in a block buffer (5% w/v skim milk, Tris-buffered saline pH 7.4) for 1 hr at room temperature, then probed with primary antibodies to the 6X His tag (for BuGZ recognition) in 5% w/v skim milk, Tris-buffered saline, 0.02% Tween-20, for 1 hr at room temperature. After washing 3 times for 10 min each in Tris-buffered saline, membranes were incubated in a secondary antibody solution (Tris-buffered saline, 0.02% Tween-20, antibody) for 1 hr at room temperature. After washing 3 times for 10 min in Tris-buffered saline, membranes were imaged in a LI-COR CLx system. Anti-6X His tag antibody (Abcam ab18184) was diluted 1:1000, and the secondary antibody, LI-COR IRDye 680RD Goat anti-Mouse IgG (926-68070), was diluted 1:10,000.
Nucleotide exchange assay
A solution containing 100 μM tubulin, 1 mM GDP, and BRB80 was mixed with a 1/10 volume of 10X nucleotide exchange buffer (10 mM GTP, 33.3 nM GTP α-32P (PerkinElmer BLU506H250UC), 1 mM nocodazole, BRB80) for a final solution containing 91 μM tubulin, 0.9 mM GDP, 0.9 mM GTP, 2.7 nM GTP α-32P, 91 μM nocodazole, and BRB80. Immediately, a BRB80 solution containing either BuGZ or BSA was added to a final protein concentration of 1.19 μM. Corresponding controls were produced in the same manner except without the addition of tubulin. Mixtures were incubated at room temperature for 15 min. Then, samples were treated with UV for 5 min in a Stratagene UV Stratalinker 1800. Free nucleotides were removed with BioRad Micro Bio-Spin 6 gel columns (BioRad 7326222) equilibrated in BRB80 buffer according to the manufacturer's instructions. Flowthrough was mixed with a scintillation cocktail (RPI Bio-Safe II) and GTP α-32P was measured using a PerkinElmer Tri-Carb 2810 TR. The measured signal from the no-tubulin condition for BSA or BuGZ was subtracted from the corresponding tubulin + BuGZ or tubulin + BSA readout. Then, the data were normalized as the ratio of the BuGZ to BSA 32P signals.
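The background subtraction and normalization described in the last two sentences amount to simple arithmetic; the sketch below illustrates it with hypothetical scintillation counts and variable names that are not from the paper.

```python
# Hypothetical scintillation counts (counts per minute) for one replicate
bugz_plus_tubulin = 5200.0   # BuGZ + tubulin + nucleotide mix
bugz_no_tubulin = 400.0      # BuGZ-only control (no tubulin)
bsa_plus_tubulin = 3300.0    # BSA + tubulin
bsa_no_tubulin = 350.0       # BSA-only control (no tubulin)

# Subtract the matching no-tubulin background, then normalize BuGZ to BSA
bugz_signal = bugz_plus_tubulin - bugz_no_tubulin
bsa_signal = bsa_plus_tubulin - bsa_no_tubulin
normalized = bugz_signal / bsa_signal
print(f"normalized GTP incorporation (BuGZ / BSA) = {normalized:.2f}")  # > 1 suggests enhanced exchange
```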
Data analysis
All statistical analyses were performed using GraphPad Prism 10. For conditions where the K_D was near to or far exceeded the total concentration of BuGZ (BuGZ to taxol-MT, BuGZ to GTP-tubulin, BuGZ-ΔGLEBS to GTP-tubulin, BuGZ-13S to GDP- and GTP-tubulin, BuGZ-NTD to GDP- and GTP-tubulin), equilibrium dissociation constants were determined using Equation 1, where X is the microtubule/tubulin concentration, B is the concentration of BuGZ, and Y is the BuGZ fraction bound.
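The formula labeled Equation 1 did not survive extraction. A standard single-site binding model that accounts for depletion of BuGZ when the K_D is comparable to its total concentration takes the quadratic form below; whether this is the exact parameterization used here is an assumption, so treat it as a sketch rather than the authors' equation.

```latex
Y = \frac{(B + X + K_D) - \sqrt{(B + X + K_D)^2 - 4BX}}{2B}
```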
For conditions where the K_D was far less than the total concentration of BuGZ (BuGZ to GDP-tubulin, BuGZ-ΔGLEBS to GDP-tubulin), equilibrium dissociation constants were determined using Prism 10's 'Hyperbola' function (Equation 2), where Bmax = 1, X is the concentration of BuGZ, and Y is the BuGZ fraction bound. Nucleotide exchange assays were analyzed using a Wilcoxon signed-rank test against a hypothetical value of 1.
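For reference, GraphPad Prism's 'Hyperbola' equation is Y = Bmax·X/(K_D + X); with Bmax fixed at 1, as stated above, Equation 2 reduces to:

```latex
Y = \frac{X}{K_D + X}
```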
Figure 1
Figure 1 BuGZ exhibits weak binding to microtubules. (A) BuGZ binding isotherm to taxol-microtubules. K_D = 9.45 µM (95% CI = 7.72 µM - 11.50 µM). Black data points correspond to the BuGZ fraction bound to taxol-MT. Curve fit determined by Equation 1 (Materials and Methods). Data from (B) for the BuGZ fraction bound to 10 µM sheared taxol-MT shown in blue. (B) BuGZ fraction bound to 10 µM taxol-microtubules and sheared taxol-microtubules. Data shown with the mean of three replicates.
Figure 4
Figure 4 BuGZ acts as a guanine nucleotide exchange factor (GEF) for tubulin. (A) Workflow of the BuGZ GEF assay. Tubulin dimers in 1 mM GDP buffer are supplemented with nocodazole, BuGZ (or BSA control), and equimolar GDP and GTP (the GTP includes the spike-in of 32P-GTP). After a 15-min incubation, the mixture was subjected to UV-mediated crosslinking and then passed through a desalting column to remove unincorporated nucleotides. Then, incorporated 32P was measured by a scintillation counter. Light green = α-tubulin. Dark green = β-tubulin. Orange = GDP. Grey = GTP. Cyan = 32P-GTP. (B) Presence of BuGZ in solution with tubulin and equal molar quantities of GDP and GTP increases GTP incorporation by 63.9% relative to the BSA control (p = 0.03). BuGZ-ΔGLEBS increases GTP incorporation by 57.4% (p = 0.03). BuGZ-13S increases GTP incorporation by 24.6% (ns). BuGZ-NTD increases GTP incorporation by 7.8% (ns). Mean and standard deviation shown in the bar graph. Wilcoxon signed-rank test: *p ≤ 0.05; ns, not significant; theoretical value = 1. (C) Summary table listing BuGZ and BuGZ mutant binding affinities for GTP- and GDP-tubulin in micromolar scale.
Functional imaging and quantification of multineuronal olfactory responses in C. elegans
Many animals perceive odorant molecules by collecting information from ensembles of olfactory neurons, where each neuron uses receptors that are tuned to recognize certain odorant molecules with different binding affinities. Olfactory systems are able, in principle, to detect and discriminate diverse odorants using combinatorial coding strategies. We have combined microfluidics and multineuronal imaging to study ensemble-level olfactory representations at the sensory periphery of the nematode Caenorhabditis elegans. The collective activity of C. elegans chemosensory neurons reveals high-dimensional representations of olfactory information across a broad space of odorant molecules. We reveal diverse tuning properties and dose-response curves across chemosensory neurons and across odorants. We describe the unique contribution of each sensory neuron to an ensemble-level code for volatile odorants. We show that natural stimuli, a set of nematode pheromones, are also encoded by the sensory ensemble. The integrated activity of the C. elegans chemosensory neurons contains sufficient information to robustly encode the intensity and identity of diverse chemical stimuli.
INTRODUCTION
Many animals exhibit diverse behaviors-navigating the world, finding food, avoiding dangers-in response to olfactory cues. To do this, their olfactory systems distinguish the identity and intensity of numerous odorant molecules.
Insect and mammalian olfactory systems use large ensembles of olfactory sensory neurons to detect odorants and pheromones (1)(2)(3)(4)(5)(6). Each olfactory sensory neuron usually expresses a specific olfactory receptor tuned to recognize odorant molecules by ligand-receptor binding affinity (7). A given receptor is typically activated by many different odorant molecules, and each odorant can activate multiple receptors (1,8). These olfactory systems can potentially use combinatorial coding strategies to distinguish and identify large numbers of odorant molecules.
Caenorhabditis elegans senses many odorants across wide concentration ranges (9)(10)(11). However, its olfactory circuits have a compact cellular and molecular organization that differs from insects and mammals (Fig. 1A). The C. elegans genome encodes >1000 putative chemosensory G protein-coupled receptors (GPCRs) (12,13); at least 200 GPCRs are expressed by its 11 pairs of amphid chemosensory neurons. This suggests both a substantial capacity for olfactory detection and a coding strategy where the properties of each neuron are shaped by many receptors (12)(13)(14).
Behavioral studies first established AWA, AWB, and AWC as the primary detectors of odorants. Laser ablation of AWA or AWC severely degrades chemotaxis toward selected attractive odorants. However, even when both AWA and AWC were ablated, animals could still move toward odorant sources (35). In similar experiments with selected organic compounds and salts, ablation of other chemosensory neurons-ASE, ADF, ASG, ASI, ASJ, and ASK-degrades chemotaxis to a lesser extent (10). Therefore, although some neurons are more important than others for chemotaxis toward particular odorants, chemosensation does not rely on single neurons.
The stimulus-evoked properties of C. elegans chemosensory neurons have also been described through the detection of selected odorants by selected neurons (15)(16)(17)(18)(19)(20)(21). For example, isoamyl alcohol is detected by AWC, AWB, and ASH (15), and benzaldehyde is detected by AWA, AWB, AWC, and ASE (17). AWA responds to a wide range of volatile odorants (16). Diacetyl is detected by AWA at low concentrations and by ASH at high concentrations, as well as by AWC, ASK, and ASE. In some cases, the left and right pairs of a chemosensory neuron type detect different odorant molecules. For example, the left and right AWC neurons, AWCL and AWCR, are stochastically asymmetric, where one detects butanone and the other detects 2,3-pentanedione (36,37). The ASE neurons, primarily gustatory, respond asymmetrically to different ions, where ASEL detects sodium ions and ASER detects chloride and potassium ions (22,23).
We know less about the response properties of odorant receptors. ODR-10 remains the most thoroughly characterized C. elegans odorant receptor; it is expressed in AWA and is sensitive to diacetyl, an attractive stimulus. Ectopic expression of ODR-10 in AWB leads to diacetyl repulsion. This suggests that attractive and aversive behaviors are encoded by the neuron rather than by specific odorants (38). Consistently, AWB and AWC are also needed for aversive olfactory learning (39,40), and activating different subsets of chemosensory neurons changes the activity of different downstream interneurons (41).
Figure 1
(A) Panel generated at nemanode.org (55,56). (B) Adult C. elegans were immobilized inside a microfluidic device and controllably presented with odorant solutions. Each animal was volumetrically imaged at 2.5 Hz with a spinning-disk confocal microscope during stimulus presentations. EMCCD, electron multiplying charge-coupled device; WI, water-immersion. (C) Animals expressed nuclear-localized GCaMP6s in all ciliated sensory neurons. A sparse wCherry landmark distinguished the 11 chemosensory neurons. Here, a dual-color maximum projection image shows the head of the worm. The 11 chemosensory neurons on the near (L) side are labeled. For clarity, the chemosensory neurons on the far side and other ciliated neurons are not labeled. (D) Neuronal activity traces of the 11 chemosensory neurons in response to a single odorant presentation (1-octanol, 10−4 dilution), averaged across multiple trials across 14 animals. The number of trials varies across neurons because neurons that were occluded or improperly tracked were excluded from the dataset (see Materials and Methods). The 10-s odorant delivery period is shown by the colored bar. Significant responses (q ≤ 0.01) are marked with stars, with "post" indicating a significant response to stimulus removal (OFF response). Error bars (gray) are SEM.
Complex properties of individual neurons and their relationships to behaviors have been extensively examined. AWC, ASH, and ASE exhibit adaptation: When a chemical stimulus is prolonged from minutes to hours, both neuronal activity and behavioral responses diminish (16,(42)(43)(44)(45). AWA and AWC change their response properties in a context-dependent manner (46,47). AWA neurons fire action potentials that may encode stimulus-specific features (48). Complex activity patterns of AWA have been directly mapped to behavioral patterns (16,17,38,48,49).
Although odorant-evoked responses in many individual C. elegans chemosensory neurons are well characterized, how their collective dynamics might represent odorant information as an ensemble remains unexamined. We set out to characterize how this chemosensory ensemble responds to a chemically diverse space of odorants at different concentrations and how the tuning properties of each chemosensory neuron might relate to an ensemble-level code. We assembled a panel of olfactory stimuli spanning a diverse molecular chemistry and used microfluidics to deliver these odorants at multiple concentrations (Fig. 1B). To efficiently record neuronal responses at the sensory periphery, we used a transgenic animal that allowed simultaneous measurement of intracellular calcium dynamics in all amphid chemosensory neurons (Fig. 1, A and C).
We found that most odorant-evoked responses are widespread across the chemosensory ensemble. Dose-response curves are different for different odorant molecules, whether comparing the responses of the same neuron to different odorants or comparing the responses of different neurons to the same odorant. Odorant identity and intensity information can be reliably decoded by the collective activity of the chemosensory ensemble. A set of pheromones also evokes ensemble-level responses but with a distinct pattern from volatile odorants. We conclude that the ensemble-level representations of different odorants in the small sensory system of C. elegans contain sufficient information to accurately distinguish the identity and intensity of odorant molecules across olfactory stimulus space.
Calcium imaging of chemosensory neurons with representative odorant stimuli
We developed a GCaMP6s calcium reporter line to simultaneously record calcium dynamics in all ciliated sensory neurons (Supplementary Methods and fig. S1). We focused on the 11 pairs of amphid chemosensory neurons: AWA, AWB, AWC, ASE, ASG, ASH, ASI, ASJ, ASK, ADL, and ADF (Fig. 1A). We immobilized and positioned young adult C. elegans in a microfluidic device that allows odorants to flow past its nose (Fig. 1B) (50). We adapted a multichannel microfluidic device (4) to control the delivery of pulses of single and mixed odorant solutions. Volumetric imaging was performed at 2.5 Hz with a spinning-disk confocal microscope ( Fig. 1, B to D, and fig. S2).
We recorded the responses of all amphid chemosensory neurons to >70 stimulus conditions, testing each of the 23 odorants at multiple concentrations. Individual animals were repeatedly presented with series of 10-s odorant pulses separated by 30-s buffer blanks ( fig. S2, A and B). For each stimulus condition, we recorded the responses to ∼80 odor presentations across multiple animals (Fig. 2, A to C, and fig. S3, C and D). The highest concentrations we tested were 10 −4 dilutions. The lowest concentrations we tested (10 −8 dilutions) did not elicit significant responses from any neuron.
Odorants elicit ensemble responses
Across our odorant panel, calcium imaging captured many sensory neuron responses, some previously characterized and some unknown. Nearly every odorant reliably activated more sensory neurons than previously described. For example, diacetyl, attractive at low concentrations (53), reliably activated AWA upon odor onset at all concentrations (Fig. 2, A to C). 1-Octanol, a repellent (54), reliably activated ASH and inhibited AWC across concentrations (Fig. 2, A to C). However, additional reliable responses were also uncovered. For example, AWC was inhibited by butyl butyrate, and ASJ was activated by 1-octanol. Similarly, isoamyl alcohol not only activated AWA, AWB, AWC, and ASH at different concentrations, as previously reported (15), but also activated ASE and ASG (Fig. 2, A to C). At high concentrations, every odorant elicited responses from multiple sensory neurons. We observed substantial overlap in the sets of responding neurons for different odorants (Fig. 2, A to D).
Most chemosensory neurons exhibited ON responses to most odorants-changes in calcium levels upon odorant onset. We also observed OFF responses-changes in calcium levels upon odorant removal. For example, AWB has been reported to exhibit ON and OFF responses at different isoamyl alcohol concentrations (15). We confirmed this result and also found that AWB had ON responses to some odorants, such as diacetyl at high concentration, and OFF responses to 1-hexanol and 1-octanol ( Fig. 1D and fig. S2E).
Most chemosensory neurons exhibited excitatory responses, that is, increases in intracellular calcium levels during stimulus presentation. Some neurons exhibited inhibitory responses, with decreases in intracellular calcium levels below the baseline level. A previous work showed that AWC is inhibited by several odorants in our panel, including diacetyl, benzaldehyde, and 2-butanone (16,17,21,39). In our stimulus conditions, AWC is inhibited by every odorant in our panel (Fig. 2A). We also found that ASK is inhibited by many odorants including ethyl butyrate and 2-nonanone (fig. S2F). Some neurons are inhibited by certain odorants but excited by others. For example, ASJ is strongly inhibited by 2-butanone but strongly excited by 1-nonanol (fig. S2G).
The left and right ASE neurons exhibited strong asymmetry in their responses to two odorants-heptanoate and butyl butyrate. Both activated ASEL and inactivated ASER ( fig. S2H). AWC, another pair of neurons with known structural asymmetry (20), might exhibit moderate differences in their response dynamics when presented with short odorant pulses (21). Because all other left and right sensory neurons respond symmetrically to all odorants and because the left and right ASE and AWC neurons also respond symmetrically to many odorants, we grouped left and right sensory neurons in all analyses unless otherwise noted.
To compare the temporal dynamics of chemosensory neurons across odorants, we computed pairwise cross-correlations of their activity time courses across odorants ( fig. S4, A and B). We found that matrices of pairwise cross-correlations are distinct for different odorants. Measured in terms of peak responses or dynamics, the diversity of ensemble-level dynamics is as large as the number of tested odorants. The compact sensory neuron ensemble of C. elegans may be able to encode the identities of numerous odorants by using this combinatorially large space of distinct activity patterns.
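One plausible way to compute such a comparison, sketched here with synthetic data rather than the recorded traces, is to take the zero-lag correlation between every pair of neuron activity time courses:

```python
import numpy as np

# Hypothetical trial-averaged activity traces: 11 chemosensory neuron classes x 100 time points
rng = np.random.default_rng(0)
traces = rng.standard_normal((11, 100))

# Pairwise zero-lag cross-correlation (Pearson correlation of each pair of time courses)
corr_matrix = np.corrcoef(traces)   # shape (11, 11); symmetric with ones on the diagonal
print(corr_matrix.shape)
```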
Sensory representations are not dependent on chemical synaptic connections
The C. elegans connectomes have revealed consistent axo-axonic chemical synapses between some sensory neurons and from some interneurons to sensory neurons ( Fig. 1A) (55,56). These connections raise the possibility that ensemble representations might not solely reflect independent responses from individual neurons.
We examined this possibility by analyzing ensemble responses in an unc-13(s69) mutant where synaptic vesicle fusion is mostly blocked (57) (Fig. 3A). We sampled five odorants that represent different chemical classes. In all cases, nearly identical groups of neurons significantly responded (q ≤ 0.01) in wild-type and unc-13 mutants (Fig. 3B). Therefore, chemical synaptic transmission does not appear to be the dominant factor in ensemble responses. The tuning of each neuron to an odorant is likely to be cell intrinsic, a function of the receptors expressed in each neuron.
It is possible that gap junctions or neuromodulation, which are not affected in unc-13 mutants, play roles in shaping sensory responses. We note that there are no direct gap junctions between different chemosensory neurons (Fig. 1A). We also note that response magnitudes in unc-13 mutants, on average across neurons and stimuli, were ∼60% the size of response magnitudes in wild-type animals. Chemical synaptic connections might play a role in amplifying response magnitudes.
Olfactory representations broaden with increasing concentrations
We compared the response properties of different neurons in response to odorants from our panel over three to five orders of magnitude in concentration (Fig. 2, A to C, and fig. S3, C and D). For most odorants and neurons, response magnitudes increased monotonically with odorant concentration-neurons activated at low concentrations were also activated at all higher concentrations. Every odorant is associated with the activation of a characteristic set of neurons at all concentrations above detection threshold. Across all concentrations, for example, 1-pentanol activates AWA and AWC; 1-octanol activates ASE, ASH, AWA, AWB, and AWC; and benzaldehyde activates AWA, AWB, and AWC. Each set of activated neurons may constitute a unique olfactory representation associated with each odorant identity. For many odorants, increasing concentration spatially broadens olfactory representation by activating more sensory neurons. Different neurons exhibit different thresholds for different odorants. For example, AWB is only activated by 1-pentanol at concentrations above 10 −5 dilution, and ADF, ADL, and ASG are only significantly activated by 1-pentanol at 10 −4 dilution, the highest tested concentration ( fig. S3E). Thus, odorant intensity is represented partly by the magnitude of responses of activated neurons and partly by the number and identities of activated neurons (Fig. 2D).
We used phase-trajectory analysis to illustrate the temporal dynamics of ensemble-level odorant representations. In a low-dimensional principal component (PC) space, these representations follow closed trajectories as they evolve over time following odor presentation ( fig. S4C). Along each trajectory, neurons become activated, reach their peak responses, and return to baseline. In this space, the responses to different odorants follow trajectories with different headings from the origin. Trajectories for responses to the same odorant at different concentrations are aligned in direction but differ in magnitude.
Diversity in dose responses across neurons and odorants
We constructed dose-response curves for all 11 chemosensory neurons in response to select odorants from our panel over five orders of magnitude in concentration. The dose-response curves of the 11 chemosensory neurons exhibit substantial diversity ( Fig. 2E and fig. S3F). Each odorant can evoke dose-response curves with different thresholds and steepnesses in different neurons. Conversely, each sensory neuron can exhibit dose-response curves with different thresholds and steepnesses for different odorants.
In some cases, neurons detected an odorant with slowly graded responses over a broad dynamic range. Graded responses include AWA's response to 1-pentanol and ASG's response to 1-octanol ( Fig. 2E and fig. S3F). In other cases, neurons exhibited steep response functions, becoming fully activated or fully inhibited above a sharply defined threshold.
To estimate differences in response steepness across odorants and neurons, we performed log-linear fits on peak responses r as a function of odorant dilution c: r(c) ≈ m log10(c) + I, and determined the slope m through linear regression. Response steepnesses were diverse, whether for a given neuron across odorants or for a given odorant across neurons (fig. S3G).
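A minimal version of the log-linear fit is shown below; the dilution series matches the concentration range described above, but the peak-response values are placeholders rather than measured data.

```python
import numpy as np

# Hypothetical peak responses of one neuron across serial odorant dilutions
dilutions = np.array([1e-8, 1e-7, 1e-6, 1e-5, 1e-4])
peak_resp = np.array([0.02, 0.05, 0.21, 0.55, 0.90])

# Fit r(c) ≈ m*log10(c) + I by least squares; m is the response steepness
m, intercept = np.polyfit(np.log10(dilutions), peak_resp, deg=1)
print(f"slope m = {m:.3f} per decade of concentration, intercept I = {intercept:.3f}")
```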
Diversity in dose-response curves contrasts with insects and mammals, where olfactory sensory neurons exhibit similarly shaped dose-response curves across neurons and across odorants (4,58,59). In insects and mammals, each sensory neuron is generally equipped with one receptor type, whereas in C. elegans, each neuron likely expresses multiple receptors (12,13). For a nematode neuron, the presence of receptors to multiple odorants, each with different binding affinities, may explain dose-response diversity.
Comparing ensemble-level representations of chemically similar odorants
Different odorants activate distinct but overlapping subsets of the chemosensory ensemble (Fig. 2, A to C). Quantitative differences in the sensitivity of chemosensory neurons to odorants will depend on cell-specific patterns of receptor expression. In most olfactory systems, a typical olfactory receptor is activated by a range of structurally similar odorant molecules with common chemical features. This leads to a systematic dependence of ensemble-level olfactory representations on odorant chemistry. To assess this dependence in C. elegans, we performed hierarchical clustering of odorants from our panel based on ensemble-level responses evoked at high concentrations (fig. S3H). The representations of some molecular classes clustered together. For example, ensemble-level responses to a set of straight-chain alcohols (1-hexanol, 1-heptanol, 1-octanol, and 1-nonanol) were similar to one another, and the ensemble-level responses to a set of ketones (2-butanone, 2,3-pentanedione, and 2-heptanone) were likewise similar. On the other hand, the esters in our panel, a group more diverse in their chemical structure, produced a broader set of representations.
Principal components analysis (PCA) is a quantitative means of comparing high-dimensional ensemble-level representations. We constructed a PC space from all average ensemble-level peak responses. We found that the first two PCs contain 65% of the variance ( fig. S3I). The loading of the first two PCs allows us to assess the relative contribution of each sensory neuron to ensemble representations ( fig. S3I). We observed a broad distribution of PC loading, which indicates a distributed contribution from all neurons to the separability of odorant representations.
We then asked how different odorants are distributed in the reduced PC space. Overall, the PC space appears compact without distinct clusters (Fig. 2F). Consistent with observations from hierarchical clustering, responses to certain classes of odorants, such as the straight-chain alcohols and two thiazoles, are close to each other in PC space. Responses to members of other classes, such as esters, are distributed more broadly.
Sensory neurons are either broadly or narrowly tuned in the chemical space
Olfactory sensory neurons are tuned to odorants by the relative binding affinities of receptors for different ligands (7). In animals where sensory neurons express single receptor types, this leads to a systematic dependence of ensemble representation on the chemical properties of odorants and receptors (4,7,60,61). In C. elegans, the tuning of a sensory neuron may also be shaped by the expression of multiple different receptors. To explore the tuning of sensory neurons in chemical space, we projected the activity of each neuron into a PC space based on the chemical structure of odorants (fig. S3A) (52).
We observed a qualitative distinction in neuron tuning. Neurons AWA, AWB, AWC, and ASE each respond to most tested odorants at high concentrations, so we call them "broadly tuned" (Figs. 2, A to C, and 4A). In contrast, ADF, ADL, ASG, ASI, ASJ, and ASK each respond to a smaller set of odorants even at the highest tested concentrations, so we call them "narrowly tuned." ASH is broadly tuned at high concentrations and narrowly tuned at low concentrations (Fig. 4A), a pattern that might reflect its role as a nociceptor, mediating avoidance of any odorant when delivered at a sufficiently high concentration (Fig. 2, A to C). In previous behavioral experiments, most odorants in our panel were shown to be attractive at low concentrations and repulsive at high concentrations. A few odorants (1-heptanol, 1-octanol, and 1-nonanol) are repulsive at any tested concentration (table S2) (35). The odorants to which ASH is most sensitive are generally those that are repulsive at all concentrations.
The responses of each sensory neuron appear to occupy contiguous domains in chemical odor space. Each domain encompasses chemically similar odorant molecules that are effective stimuli for each sensory neuron (Fig. 4, B to E, and fig. S5). At high concentrations, broadly tuned neurons, such as AWA, extend responses throughout the chemical structure space. Even at high odorant concentrations, narrowly tuned neurons, such as ADF, extend responses over a smaller contiguous region of chemical space. This can be quantified as the average pairwise distance between the odors in odor space that significantly activate each neuron (Fig. 4G). Broadly tuned neurons span the entire space and therefore have the same average pairwise distance between odors as the baseline average pairwise distance of the entire odor panel. Narrowly tuned neurons have lower average pairwise distances-odors that activate narrowly tuned neurons are close to each other in odor space and are thus likely to be chemically similar. Different sensory neurons can have overlapping response domains. However, each sensory neuron tends to be activated most strongly by a different part of odor space. We calculated the centroid of each neuron's activity in odor space, weighted by the strength of response to each odor and plotted relative to the center of our 23-odor panel (Fig. 4F). These activity centroids project in different directions from the center, suggesting that each neuron is most sensitive to a different region of odor space. The centroids of broadly tuned neurons are closer to center, as expected because they are activated by most odorants, and it is the weighting by response magnitude that pulls them off center. The centroids of narrowly tuned neurons are further from the center, as their significantly responding neurons occupy only one patch of odor space.
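A minimal form of the response-weighted centroid described here, written for neuron n over odorants o with response magnitudes r_{n,o} and odor-space coordinates x_o, is given below; the exact weighting used by the authors may differ, so treat this as a sketch:

```latex
\mathbf{c}_n = \frac{\sum_{o} r_{n,o}\, \mathbf{x}_o}{\sum_{o} r_{n,o}}
```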
Most broadly tuned neurons extend responses over a smaller region of chemical space at lower concentrations than at high concentrations. Responses at low concentrations reveal the structural characteristics of molecules to which sensory neurons are most sensitive. AWA is most strongly activated by ketones, AWB is most strongly activated by esters, and ASE is most activated by alcohols ( fig. S5). These regions of odor space are consistent with the directions to which the activity centroids of these neurons project at high concentration. ASH responds to odorants throughout chemical space, perhaps allowing ASH to contribute to the repellent response of any odorant delivered at sufficiently high concentration. The observation that each sensory neuron extends its sensitivity range across a contiguous region of chemical structural space suggests that each neuron is tuned to shared molecular properties of a set of odorant stimuli, as opposed to being tuned to exclusively detect specific odorants.
Single-trial responses suffice for discriminating odorant pairs
We observed trial-to-trial variability in odorant responses, both across animals and across odor presentations to the same animal. However, ensemble-level coding might confer robustness when discriminating odorants. We compiled all single-trial responses to each odorant across all datasets. In some recordings where data from individual neurons were missing, we imputed missing activity patterns using the rest of the ensemble (Appendix D and fig. S6, A to D). We used two dimensionality reduction methods to visualize the space spanned by single-trial responses: PCA and Uniform Manifold Approximation and Projection (UMAP). In a PC space constructed from the peak responses of all single trials, chemically similar odorants exhibit more similar representations (fig. S6D) and chemically dissimilar odorants exhibit dissimilar representations (fig. S6E). Overlap in a low-dimensional PC space is an imperfect measure for odorant discrimination because <60% of variance is explained by the first three principal components. Plotting all single-trial responses to all 23 odorants in UMAP space, trials for the same odorant also cluster together, although it is difficult to segregate trials for different odorants in this two-dimensional representation (Fig. 5A). Both PCA and UMAP analyses indicate that ensemble-level responses for the same odorant are similar. Both analyses also indicate that ensemble representations are high dimensional, as reduction to two or three dimensions removes a substantial fraction of the variance.
We asked whether olfactory representations were sufficiently dissimilar for reliable odorant discrimination based on single odorant presentations. To estimate the theoretical discriminability of odorant pairs, we computed errors in binary classification based on the pooled single-trial responses of each odorant pair using logistic regression (fig. S6F) and a support vector machine ( fig. S6G). In all cases, binary classification succeeded with low error rate. Thus, any two odorants in our panel can be distinguished from each other on the basis of single-trial ensemble responses.
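A pairwise classification of this kind can be sketched with standard tools; the responses below are synthetic stand-ins for single-trial peak responses of the 11 neurons, not data from the paper, and the specific classifier settings are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic single-trial ensemble responses for two odorants (80 trials each, 11 neurons)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (80, 11)), rng.normal(1.0, 1.0, (80, 11))])
y = np.repeat([0, 1], 80)

# Cross-validated binary classification error for the odorant pair
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"estimated pairwise classification error ≈ {1 - accuracy:.3f}")
```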
Odorant identification with single-trial responses
We asked whether odorant identity could reliably be decoded from single-trial ensemble responses, a task more challenging than binary classification of an odorant pair. We trained a multiclass classifier to perform linear discrimination (Fig. 5B). We randomly divided all single-trial measurements into a training set (90%) and validation (testing) set (10%). After we trained the classifier with the training set, we tested its performance in predicting odorant identities from single-trial measurements with the validation set (see Appendix E for details). This classifier successfully identified odorants in most of the single-trial measurements at high concentrations (Fig. 5C). Classification accuracy declined at lower concentrations but succeeded in the plurality of measurements (Fig. 5, D to F).
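The multiclass decoding step can be illustrated as follows; the data are synthetic (23 odorant classes, 80 trials each, 11 neurons), and the specific linear classifier is an assumption rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic single-trial ensemble responses: 23 odorants x 80 trials, 11 neurons per trial
rng = np.random.default_rng(2)
n_odors, n_trials, n_neurons = 23, 80, 11
class_means = rng.standard_normal((n_odors, n_neurons))
X = np.repeat(class_means, n_trials, axis=0) + rng.standard_normal((n_odors * n_trials, n_neurons))
y = np.repeat(np.arange(n_odors), n_trials)

# 90/10 split; train a linear multiclass classifier and report held-out identification accuracy
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, stratify=y, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print(f"held-out odorant identification accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```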
We used a similar approach to determine whether odorant intensity could be estimated from single-trial measurements. With trained multiclass classifiers, we were able to predict the concentration of a given odorant using single-trial measurements, although accuracy declined at lower concentrations (Fig. 5, G and H). In principle, the ensemble-level spatial map of sensory neuron activity contains sufficient information to determine odorant identity and intensity from single-stimulus presentations.
Virtual neuron knockouts degrade classifier accuracy
To quantify the relative contribution of each sensory neuron to ensemble-level discriminability, we performed virtual knockouts: we removed (masked) specific sensory neurons from the dataset and retrained the multiclass classifier on the remaining data. Removing any single sensory neuron led to small decreases in classification accuracy compared to wild type (Fig. 6, B to D). Classification accuracy was lower after masking narrowly tuned neurons (ASI, ASK, ASJ, or ASG) than broadly tuned neurons (AWA, ASH, or AWC).
Masking different neurons degrades the classification accuracy of a given odorant to different degrees. For instance, pentyl acetate is correctly classified 68% of the time when all 11 chemosensory neurons are included. ASJ masking reduces this accuracy to 62%, but AWA masking reduces accuracy to 48%. Masking any two neurons further decreases average classification accuracy (Fig. 6E). We computed the average classification accuracy when randomly removing different combinations of multiple neurons. We observed an inverse linear relationship between the number of masked neurons and classification accuracy (Fig. 6F). Odor identity across olfactory space is thus encoded in a distributed manner across all 11 chemosensory neurons.
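The virtual-knockout procedure can be sketched as a column-masking loop; as before, the data are synthetic and the classifier choice is an assumption, so the printed numbers are illustrative only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic single-trial responses (trials x 11 neurons) with odorant labels
rng = np.random.default_rng(3)
n_odors, n_trials, n_neurons = 23, 40, 11
class_means = rng.standard_normal((n_odors, n_neurons))
X = np.repeat(class_means, n_trials, axis=0) + rng.standard_normal((n_odors * n_trials, n_neurons))
y = np.repeat(np.arange(n_odors), n_trials)

def masked_accuracy(masked):
    """Drop the masked neuron columns, retrain the classifier, and return cross-validated accuracy."""
    keep = [i for i in range(n_neurons) if i not in masked]
    return cross_val_score(LinearDiscriminantAnalysis(), X[:, keep], y, cv=5).mean()

baseline = masked_accuracy([])
for neuron in range(n_neurons):
    print(f"mask neuron {neuron}: accuracy {masked_accuracy([neuron]):.2f} (baseline {baseline:.2f})")
```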
Responses to pheromone stimuli are distinct from those of volatile odorants
All amphid chemosensory neurons respond to volatile odorants. We asked whether ensemble-level responses extend to other stimuli. C. elegans communicate using pheromones, a mixed group of glycolipid molecules called ascarosides (24,29). We presented young adult hermaphrodites with a panel of five single ascarosides (#1, #2, #3, #5, and #8) (25).
Similarly to volatile odorants, ascarosides activated multiple sensory neurons (Fig. 7A). Some neurons known to respond to ascarosides but narrowly tuned with respect to our volatile-odorant panel, such as ADL, ADF, and ASK, were strongly activated across our five-pheromone panel (Fig. 7B). Pheromones also evoked some activity in neurons that are broadly tuned to volatile odorants. For example, AWA was activated less often by the pheromone panel than by the odorant panel. Thus, pheromone detection may also involve an ensemble-level code, but a code that relies more heavily on those neurons that are narrowly tuned to volatile odorants.
DISCUSSION
In insects and vertebrates, the integrated activity of chemosensory neuron ensembles is often presumed to enhance odorant discrimination and broaden the space of olfactory perceptions with ensemble-level codes (1-6). The C. elegans olfactory system contains only 11 pairs of chemosensory neurons. Each nematode chemosensory neuron is considered a unique class distinguished by dendrite morphologies, wiring partners, and stimulus selectivity (12, 34). Does the integrated activity of the C. elegans chemosensory ensemble contain information that might enhance and broaden olfactory discrimination?
We have simultaneously recorded calcium dynamics in all chemosensory neurons in nematodes exposed to a chemically diverse odorant panel. Nearly every distinct odorant stimulus evoked a distinct ensemble-level activity pattern among chemosensory neurons. We show that these highly reproducible ensemble-level patterns can, in principle, robustly encode odorant identity and intensity throughout a large chemical space.
Characterizing the ensemble-level olfactory code at the sensory periphery sets the stage for future studies aimed at their relevance for behavior and decision-making. Recording the activity of downstream interneurons will determine whether olfactory representations in circuits for behavior are similarly high dimensional, perhaps facilitating olfactory discrimination and diverse patterns of decision-making in complex environments.
Diverse sensory neuron tuning properties
The unique response properties of the chemosensory neurons allow each to contribute information to the spatial activity map that encodes olfactory stimuli. Ensemble-level activity appears to be largely independent of synaptic communication between chemosensory neurons (Fig. 3B). Moreover, the tuning of each chemosensory neuron is shaped by the expression and properties of multiple receptors, not by the sensitivity of a single receptor, as is typical in larger animals. Removing any chemosensory neuron lowers the accuracy of stimulus classification based on ensemble-level activity (Fig. 6).
We lack comprehensive information about the expression patterns and odorant sensitivity of most olfactory receptors in C. elegans. ODR-10, highly expressed in AWA, remains the only characterized olfactory receptor for diacetyl (38, 53). However, AWA also responds to many other odorants in a manner that is independent of ODR-10, evidence that AWA expresses additional receptors (15-17, 35). Other sensory neurons that do not express ODR-10 are also activated by diacetyl at higher threshold concentrations. We uncovered a diversity of odorant dose-response curves to diacetyl and other odorants (Fig. 2E and fig. S3, F and G). This diversity is consistent with the expression of multiple receptors in each chemosensory neuron. Variable dose-response curves may reflect the cumulative activities of different sets of receptors with different binding affinities for each odorant across chemosensory neurons. Every chemosensory neuron tends to be sensitive to structurally similar odorants, suggesting that the receptors expressed by each neuron might be correlated in their chemical binding affinities (Fig. 4 and fig. S5).
Because we do not know the repertoire of odorant receptors that are expressed in each chemosensory neuron, we cannot easily characterize receptor-ligand interactions based on dose-response curves, as has been done in other animals (4, 62, 63). We note that the breadth of tuning to olfactory stimuli (defined as the fraction of the odor panel that elicits significant responses) is not strongly correlated with the number of expressed GPCRs (13). For example, ADL expresses the most GPCR genes of any chemosensory neuron but is sensitive to only three odorants in our panel. ASH, ASK, and ASJ express many GPCR genes, but only ASH is broadly tuned to our odorant panel. ASE, another broadly tuned neuron, expresses the smallest number of GPCR genes. Our inability to correlate the number of expressed GPCRs with the breadth of tuning may reflect the possibility that many GPCRs do not function as odorant receptors. For example, ADL, narrowly tuned for odorants but broadly tuned for pheromones, might use many GPCRs as ascaroside receptors.
Comparisons with olfactory systems in larger animals
In larger animals, chemosensory cells typically express single receptor types. In these animals, when domains of sensory neuron activity are represented in a chemical odor space, response domains tend to be clustered. Olfactory neuron ensembles span odor space by connecting the clustered response domains of different olfactory sensory neurons (1-6).
In C. elegans, each sensory neuron is sensitive to a contiguous region of chemical space (Fig. 4). This suggests that each neuron is tuned to shared molecular properties, as opposed to being a faithful detector of unique odorant molecules. The broad tuning of many C. elegans sensory neurons is probably caused by the combined activities of different receptors. Each receptor may be tuned to a given region of chemical structural space. Connecting the regions of chemical space corresponding to each receptor could produce the broad region of chemical space sensed by each neuron. The tendency for even the most broadly tuned neurons to be most strongly activated by certain chemical classes suggests correlations in the cell-specific expression of receptor types. The multireceptor nature of C. elegans sensory neurons may also contribute to their graded responses over broad concentration ranges. As additional receptor types with higher thresholds are recruited at higher concentrations of a given odorant, a sensory neuron gradually and cumulatively becomes more active.
The C. elegans chemosensory neuron ensemble-level activity encodes odorant identity
Previous studies in C. elegans largely dissected the properties of individual chemosensory neurons in response to selected odorants (15)(16)(17)(18)(19)(20)(21). For example, single olfactory sensory neurons can exhibit complex temporal activity patterns in response to odorant stimulation (16,17,38,(46)(47)(48)(49). Many previous studies have mapped the activities of single sensory neurons to behavioral outputs. However, using selected odorants to stimulate single sensory neurons and evoke behavioral responses does not reveal how olfactory inputs might be encoded by the sensory neuron ensemble.
We found that most olfactory stimuli activate multiple chemosensory neurons in C. elegans (Fig. 2). Chemosensory neurons that have been principally studied for roles in olfactory learning and navigation (AWA, AWB, AWC, and ASE) are the most broadly tuned, having high sensitivity to many different types of molecules. AWA is comparatively more strongly activated by ketones, AWB by some esters, and ASE by alcohols. AWC is inhibited by every odorant that we tested. Other olfactory neurons, such as ASK, ASJ, or ASG, are more narrowly tuned, activated by a small number of structurally similar odorants (Fig. 4 and fig. S5). When ensemble-level responses of broadly and narrowly tuned chemosensory neurons are taken together, a reproducible and distinct spatial activity map emerges for each stimulus. This map encodes both odorant identity and intensity across the space spanned by our panel of 23 diverse chemicals at multiple concentrations (Fig. 5).
[Fig. 7 caption fragment: fraction of volatile odorant stimuli that elicited significant responses in each neuron at high concentration (first row), compared with the fraction of pheromone stimuli (out of five stimuli total) that elicited significant responses (second row). Many neurons (such as ADF and ADL) that are narrowly tuned with respect to volatile odorants appear to be activated more often by the ascaroside pheromones.]
How might C. elegans use an ensemble-level code for olfaction? Broadly tuned neurons permit coarse identification of odorants. Each narrowly tuned neuron is sensitive to a smaller region of olfactory space. When a narrowly tuned neuron is active, the possible identities of each olfactory stimulus are limited to those odorant molecules inside its region of sensitivity. When a neuron is inactive, molecules inside its region of sensitivity are ruled out. Combinatorial activity patterns among chemosensory neurons with different regions of sensitivity can provide enough information to pinpoint the identity and concentration of an odorant stimulus. Because these ensemble-level patterns are highly reproducible, accurate discrimination can be performed with single-stimulus presentations. Ensemble-level codes may also improve robustness, compensating for trial-to-trial variability in the responses of individual chemosensory neurons.
We stress that showing that ensemble-level neuron activity is capable of encoding odorant identity and intensity throughout olfactory space does not necessarily mean that downstream circuits use this olfactory information in full when making behavioral decisions. Demonstrating that information is encoded at the sensory periphery sets the stage for future experiments to determine what part of this information is decoded for animal behavior.
Pheromone detection engages the chemosensory ensemble in distinct ways
We found that chemosensory neurons that are more narrowly tuned to volatile odorants are more broadly tuned to pheromones (Fig. 7). Activation of pheromone-sensing neurons by volatile odorants might reflect cross-reactivity of pheromone receptors to small organic molecules. These narrowly tuned neurons might also express different receptors with high odorant specificity. We also do not know whether the activation of broadly tuned olfactory neurons by pheromones reflects cross-reactivity of olfactory receptors to pheromone molecules. In any case, widespread ensemble-level activity across all chemosensory neurons in response to odorants and pheromones encodes substantial information that can be used to accurately identify any chemical stimulus.
Discrepancies with previously reported chemosensory responses
We have characterized >900 neuron-stimulus pairings, including many previously undescribed responses. Where our measurements overlapped with previous studies, we found general agreement with previously reported neuronal responses. However, we observed some discrepancies.
We did not observe previously reported OFF responses in AWC. There may be two reasons for this. First, to map the tuning properties of chemosensory neurons, we used stimulus conditions that would minimize adaptation. We presented odorants in short 10-s pulses with long intervening blank periods between presentations.
Previously reported OFF responses in AWC were obtained with longer odor stimulus presentations (15,37). Second, some previously reported OFF responses were observed in one of the two asymmetric AWC neurons. Here, we did not separate the responses of AWC ON and AWC OFF neurons, and so any asymmetric AWC response would be lost in the population average.
We also did not observe some previously recorded sensory neuron responses to ascarosides (26)(27)(28)(29). This might be due to differences in the age and sex of tested animals. To be consistent with our own volatile odorant experiments, we recorded from young adult hermaphrodites. Different ascaroside responses in previous reports were obtained using males and juvenile hermaphrodites.
Limitations and future studies
Calcium imaging is a coarse-grained measure of neuronal activity. We primarily quantified peak calcium responses, omitting differences in dynamics, spiking, or asymmetric responses, all of which likely encode additional information. Thus, our estimates of the information encoded in ensemble-level activity represent conservative lower bounds.
Our analysis of synaptic transmission mutants suggests that synaptic transmission is not the primary driver of ensemble-level responses (Fig. 3). Synaptic connections and feedback might shape the magnitude and dynamics of neuronal responses in important ways. For example, it has been suggested that feedback by neuropeptide signaling causes ASE to respond when benzaldehyde is detected by other sensory neurons (17). Nonsynaptic neurotransmitter signaling and electrical synapses might also coordinate activity among chemosensory neurons.
How does ensemble-level information relate to behavior? Many odorants studied in C. elegans have known behavioral valence: either attractive or repulsive to the animal (35). At high concentrations, many odorants become behaviorally repulsive. Do these switches in behavioral valence correlate with a change in the ensemble-level code? A simple prediction is that ASH activity increases as a stimulus becomes more repulsive. To understand changes in behavioral valence, it is necessary to simultaneously measure the ensemble-level olfactory code during decision-making in freely behaving animals. This is because it is difficult to calibrate previous experiments-where the behavioral valence of volatile odorants was determined with crawling animals on agar plates-with olfactory stimulation of immobilized worms using microfluidics.
Downstream from the chemosensory ensemble, interneuron networks resemble both a reflexive avoidance circuit (consisting of the premotor interneurons AVA, AVB, and AVD that primarily receive inputs from ASH) and a circuit for learning and navigation (consisting of the interneurons AIA, AIB, AIY, and AIZ that integrate the activity of the entire chemosensory ensemble) (Fig. 1A) (10,30,31,33,36,39,64,65). The activity of some of these interneurons is known to be modulated by differences in ensemble-level sensory neuron activity (41). ASH might be the start of a nociceptive reflex arc that maps the detection of noxious stimuli to rapid escape responses. The output of the entire chemosensory ensemble also appears to be integrated and decoded by another more complex interneuron network. Recently developed multineuronal recording methods (21,66,67) that extend from the chemosensory neurons to downstream interneurons might reveal how much olfactory information that is encoded at the sensory periphery is decoded in olfactory discrimination and behavioral decision-making.
We studied a broad panel of pure volatile odorants, all of which are known to elicit behavioral responses in the worm. In the natural environment of C. elegans, most olfactory cues will be mixtures of odorants, signifying diverse food sources or pathogens. Understanding how the chemosensory ensemble responds to these mixtures would illuminate ethologically relevant decision-making in downstream circuits.
The extent to which any animal exploits the collective activity of chemosensory neurons to decode olfactory inputs remains poorly understood. On one hand, the "dimensionality" of the olfactory code is often presumed to be as large as the number of distinct chemosensory neurons that contribute to the code (68). If so, the ability to detect even small numbers of different molecules, each with specificity to different chemosensory neurons, can create the potential to discriminate astronomical numbers of olfactory stimuli (69). On the other hand, animals might discard much of the high-dimensional olfactory information at the sensory periphery if it only needs to perform coarse categorizations of odorants such as "attractive" versus "repulsive." Rapid and efficient olfactory coding can be accomplished using only a small number of the earliest responding (or primary) olfactory receptors and neurons, as in recent experiments that explore "primacy models" of the olfactory code in rodents (70).
With advances in microfluidics and imaging technologies, it is becoming possible to combine high-throughput odorant stimulation with brain-wide imaging and tracking in behaving animals (50,(71)(72)(73)(74). With these tools, it will become possible to measure how much odorant information is decoded in behavioral discrimination tasks throughout the olfactory space that we characterized in this study. While the range of olfactory discrimination tasks and the combinatorial possibilities of the olfactory code are still large in C. elegans, its relatively small size makes it feasible to quantitatively assess the behavioral relevance of ensemble-level olfactory codes.
Experimental design
The primary objective of the study was to understand how the 11 chemosensory neuron pairs in C. elegans encode odorant identity and intensity. We developed a transgenic nematode in which all of the ciliated sensory neurons were fluorescently labeled with GCaMP, allowing the activity of the 11 chemosensory neuron pairs to be recorded from simultaneously. We assembled a broad panel of 23 volatile odorants and five pheromones and used a microfluidic device to present these stimuli to nematodes. We used confocal microscopy to record from chemosensory neurons as odorant stimuli at multiple concentrations were presented.
Worm maintenance
All C. elegans lines used in this project were grown at 22°C on nematode growth medium plates seeded with the Escherichia coli strain OP50. All animal lines were allowed to recover from starvation or freezing for at least two generations before being used in experiments. All animals used in experiments were young adults.
Plasmids and crosses
To construct the ZM10104 imaging strain, we created and then crossed two integrated lines, one expressing GCaMP6s and one expressing the wCherry landmark. The first of these lines, ADS700, was made by coinjecting lin-15(n765) animals with pJH4039 (ift-20 GCaMP6s::3xNLS) and a lin-15-rescuing plasmid. A stable transgenic line (hpEx3942) with consistent GCaMP expression in the chemosensory neurons was selected for integration, and transgenic animals were irradiated with ultraviolet (UV) light to integrate the transgenes into the genome. The resulting integrated line (aeaIs008) was backcrossed four times against N2 wild type. The second line, ADS701, was similarly made by coinjecting lin-15(n765) animals with pJH4040 (gpc-1 wCherry) and a lin-15-rescuing plasmid. A stable transgenic line with good wCherry expression was selected for integration, and transgenic animals were irradiated with UV light to integrate the transgenes into the genome. The resulting integrated line (hpIs728) was backcrossed four times against N2 wild type. To make ZM10104, ADS700 hermaphrodites were crossed with N2 males. Heterozygous aeaIs008/+ male progeny were then crossed with ADS701 hermaphrodites. F1 progeny were picked for wCherry expression, and F2 progeny were picked for both GCaMP6s and wCherry expression. The line was then homozygosed in the F3 generation.
Microfluidics
We used a modified version of a microfluidic system capable of delivering multiple odors to Drosophila larvae (4). The microfluidic chip is designed with an arbor containing delivery points for multiple stimuli, together with a buffer delivery point and two control switches, one for buffer and one for odor (Fig. 1B). At any given time, three flows are active: one of the control switches, the buffer blank, and one odor stimulus. The chip is designed to maintain laminar flow of each fluid, and the flow is split between a waste channel and an odor channel, which flows past the animal's nose. The chip described here is designed to switch rapidly from one stimulus to the buffer. After the flows pass the animal, they exit the chip via a waste port at atmospheric pressure. Waste is removed with a vacuum.
We grafted the odorant delivery arbor to a C. elegans loading chamber similar to those designed by Chronis et al. (55). We designed a loading chamber suitable for adult C. elegans, a narrow channel 62 μm wide and 30 μm high, with a gently tapered end. The tapered end serves as a guide to help hold the animal's nose in place without distorting the animal. The microfluidic device pattern was designed in AutoCAD, and the design was translated to silicon wafer using photolithography. The photomasks of the design were printed using CAD/Art Services Inc. The silicon wafer was then used as a mold for polydimethylsiloxane (PDMS) to fabricate microfluidic devices. The PDMS components were then removed from the silicon wafer, cut to size, and had access channels made with a biopsy punch. The completed PDMS components were then plasma-bonded to no. 1 glass cover slips. To minimize contamination from dust, all microfluidics assembly was done in a cleanroom.
Preparation of odorant and buffer solutions
Odorants were diluted in CTX buffer (5 mM KH2PO4/K2HPO4 at pH 6, 1 mM CaCl2, 1 mM MgSO4, and 50 mM NaCl, adjusted to 350 mOsm/liter with sorbitol). To prevent contamination, each odor condition was mixed and stored in its own glass bottle and delivered through its own glass syringe and tubing. Furthermore, a new microfluidic device was used for a single consistent panel of odors. The single ascarosides (25) were diluted in CTX buffer to 200 mM concentration for presentation to the animals.
Stimulus delivery protocol
We chose to deliver 10-s odorant pulses separated by 30-s buffer blanks. These pulse and blank lengths were sufficiently spaced to elicit similar neuronal responses across pulses, with no indication of adaptation ( fig. S2, A and B). We carried out control experiments by presenting each animal in the microfluidic device with multiple conditions (odorants and concentrations) in a single trial ( fig. S2, C and D). Randomizing the order of odorant delivery, we did not observe any effects of odorant order on the responses of the sensory neurons.
We previously used a similar stimulus protocol in Yemini et al. (21), presenting three chemosensory stimuli separated by buffer blanks in a randomized order. There, we similarly observed no differences in average odorant-evoked responses that were correlated with odorant delivery order.
Thus, to reduce the risk of odorant cross-contamination, we elected to conduct the remaining experiments by presenting each animal with multiple presentations of one stimulus condition (fig. S2, A and B) and averaged across the population of at least 10 animals per condition, treating the response to each odorant pulse as an independent trial.
Imaging setup
We used a single-photon, spinning-disk confocal microscope to capture fluorescent images from intact C. elegans. The microscope was inverted to allow for easy access to the microfluidic device mounted on the stage. We used a 488-nm laser to excite GCaMP in vivo and used a 561-nm laser to excite the wCherry landmark. To minimize cross-talk between channels, lasers were fired sequentially during multicolor recordings. We captured images with a 60× water-immersion objective with a numerical aperture of 1.2. Volumes were acquired using unidirectional scans of a piezo objective scanner. All fluorescence microscopy is a trade-off between spatial resolution, temporal resolution, laser power, and signal strength. We optimized two sets of imaging conditions, one set for activity imaging and another set for landmark imaging. Both sets of imaging conditions capture the region containing most of the neurons in the head of C. elegans, a volume of 112 μm by 56 μm by 30 μm.
In any given experiment, acquisition of a landmark volume precedes acquisition of an activity movie. This volume, which contains both green and red channels, allows us to identify neurons of interest. The spatial resolution of these volumes is 0.5 μm by 0.5 μm by 1.5 μm per voxel, with the z-resolution of 1.5-μm set by the point spread function.
The activity movies were acquired at a high speed in the green channel only, with lower spatial resolution (1 μm by 1 μm by 1.5 μm per voxel). At this resolution, we could acquire volumes at 2.5 Hz in standard acquisition mode.
Analyzing multineuronal recordings
The neurons in each activity recording were identified and then tracked through time using a neighborhood correlation tracking method. The criteria for identifying each neuron class are described in the Supplementary Methods. Neurons that could not be unambiguously identified were excluded from the dataset. All neuron tracks were then manually proofread to exclude mistracked neurons. Activity traces were bleach-corrected and reported as ΔF/F0 = [F(t) − F0]/F0. Normalization by baseline fluorescence F0 allowed for direct comparisons within a given neuron class across left and right sides and across individuals. The baseline F0 value was determined individually for every recorded neuron, set at the fifth percentile of the distribution of bleach-corrected fluorescence values, with the opportunity for manual correction.
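As a minimal numerical sketch of the normalization just described, the following snippet applies a single-exponential bleach correction and computes ΔF/F0 with a fifth-percentile baseline. The exponential model, function names, and synthetic trace are illustrative assumptions rather than the authors' actual code.

```python
import numpy as np
from scipy.optimize import curve_fit

def bleach_correct(trace, time):
    """Divide out a fitted single-exponential photobleaching trend (illustrative model)."""
    expo = lambda t, a, tau, c: a * np.exp(-t / tau) + c
    p0 = (trace.max() - trace.min(), time[-1] / 2 + 1e-9, trace.min())
    params, _ = curve_fit(expo, time, trace, p0=p0, maxfev=10000)
    trend = expo(time, *params)
    return trace / (trend / trend.mean())  # remove the decay, keep the overall scale

def delta_f_over_f(trace, time, baseline_percentile=5):
    """dF/F0 = (F(t) - F0)/F0 with F0 the 5th percentile of the bleach-corrected trace."""
    corrected = bleach_correct(trace, time)
    f0 = np.percentile(corrected, baseline_percentile)
    return (corrected - f0) / f0

# Example with a synthetic trace sampled at 2.5 Hz (the volume rate quoted in the text).
t = np.arange(0, 120, 1 / 2.5)
raw = 100 * np.exp(-t / 300) + 20 + 5 * (np.sin(t / 5) > 0.9)
dff = delta_f_over_f(raw, t)
print(dff.max())
```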
Statistical analysis
We used two-tailed, paired t tests to compare the mean signal during stimulus presentation with an unstimulated period of identical length within the same neuron. Neurons were tested for both ON and OFF responses. The P values were corrected for multiple testing using false discovery rate (76). To test for asymmetric neuron responses, we used two-tailed, two-sample t tests (unpaired). Sensory neuron responses to all conditions are publicly available at this data repository, together with plots of average responses, phase trajectories, and time trace correlation matrices.
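The response test described above can be sketched as follows: a paired, two-tailed t test per neuron comparing the mean signal during the stimulus window with an unstimulated window of identical length, followed by Benjamini-Hochberg false-discovery-rate correction across neurons. Array layout, window choices, and the example data below are hypothetical.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def stimulus_responses(traces, stim_window, blank_window):
    """Paired t test per neuron: mean dF/F during stimulus vs. an equal-length blank period.

    traces : dict mapping neuron name -> (n_trials, n_timepoints) array
    windows: slices selecting the stimulus and blank time points
    """
    names, pvals = [], []
    for name, trials in traces.items():
        stim = trials[:, stim_window].mean(axis=1)
        blank = trials[:, blank_window].mean(axis=1)
        t_stat, p = stats.ttest_rel(stim, blank)  # two-tailed, paired
        names.append(name)
        pvals.append(p)
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return dict(zip(names, zip(p_adj, reject)))

# Hypothetical example: 10 trials, 100 time points, stimulus at time points 40..64.
rng = np.random.default_rng(1)
traces = {"AWA": rng.normal(size=(10, 100)) + np.r_[np.zeros(40), np.ones(25), np.zeros(35)],
          "ASK": rng.normal(size=(10, 100))}
print(stimulus_responses(traces, slice(40, 65), slice(0, 25)))
```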
Supplementary Materials
This PDF file includes: Supplementary Methods, Figs. S1 to S6, Tables S1 and S2, References. View/request a protocol for this paper from Bio-protocol.
"year": 2023,
"sha1": "5fb24fbfe3c790a5af56099949f367c0537a8832",
"oa_license": "CCBYNC",
"oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.ade1249?download=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "746abb3fe0cccca1a527adcd1e197b04f088ff51",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Temporal trends of healthcare associated infections and antimicrobial use in 2011-2013, observed with annual point prevalence surveys in Ferrara University Hospital, Italy
INTRODUCTION
Healthcare associated infections (HAIs) and misuse of antimicrobials (AMs) represent a growing public health problem. Point Prevalence Surveys (PPSs) provide readily available information that can be used for specific targeted interventions and to evaluate their effects. The objective of this study was to estimate the prevalence of HAIs and AM use, to describe types of infections and causative pathogens, and to compare data collected through three PPSs in Ferrara University Hospital (FUH), repeated in three different years (2011-2013). The population-based sample consists of all patients admitted to every acute care and rehabilitation department on a single day.
METHODS
Data were collected using the ECDC Protocol and Form for PPS of HAI and AM use, Version 4.2, July 2011. Risk factor analysis was performed using logistic regression.
RESULTS
1,239 patients were observed. Overall, HAI prevalence was 9.6%; prevalence was higher in Intensive Care Units; urinary tract infections were the most common HAIs in all 3 surveys; E. coli was the most common pathogen; AM use prevalence was 51.1%; the AMs most frequently administered were fluoroquinolones, combinations of penicillins and third-generation cephalosporins. According to the regression model, urinary catheter (OR: 2.5) and invasive respiratory device (OR: 2.3) are significantly associated risk factors for HAIs (p < 0.05).
CONCLUSIONS
PPSs are a sensitive and effective method of analysis. Yearly repetition is a useful way to maintain focus on the topic of HAIs and AM use, highlighting how changes in practices impact on the outcome of care and providing useful information to implement intervention programs targeted on specific issues.
Introduction
Healthcare associated infections (HAIs) represent a growing public health problem in terms of patient safety and economic burden [1][2][3]. The Center for Disease Control (CDC) estimates the increased mean length of hospital stay for each HAI to be 7 extra days, ranging from 1-4 days for urinary tract infections (UTIs) to 7-30 days for pneumonia (PN). In Europe, HAIs cause 16 million additional days of hospitalization per year, 37,000 related deaths and 7 billion euros of additional costs (direct costs only) [4]. The Italian National Health Institute estimates 450,000-700,000 HAIs per year in Italian hospitals, 30% of which could be prevented; HAIs could be directly responsible for 1,350-2,100 avoidable deaths per year [5]. Misuse of antimicrobials (AMs) is a growing public health problem worldwide, associated with an increase in drug resistant microorganisms and adverse drug reactions that generate huge economic costs [6,7].
The implementation of surveillance systems for both HAI and AM use is a relevant topic in modern public health [8,9]. Although continuous surveillance still represents the gold standard for infection control, it requires a huge amount of human and economic resources but has rarely been used in multicenter studies. Instead, Point Prevalence Surveys (PPS), despite their inherent limitations in terms of accuracy of results and possibility of bias, are a highly feasible alternative, easier to perform even on large scale multicenter studies, less expensive and less time consuming. PPSs offer many benefits, including easy repeatability and the ability to provide meaningful information to be used for specific targeted interventions. The introduction of standardized protocols such as the European Center for Disease Control (ECDC) Protocol for PPS of HAI and AM use in acute care hospitals, version 4.2 2011-2012 [10], guarantees consistency of results and easy repeatability. Results of local surveys may also be used for yearly intra-hospital comparison or benchmarking at regional, national or international level. In Ferrara University Hospital (FUH), infection and AM stewardship by PPS began in 1992, with a local Protocol and data entry form, updated over the years in agreement with the literature references [11]. This Protocol was used until 2011, when FUH participated in the first full scale ECDC PPS, October 2011. The survey was repeated in 2012 and 2013. Objectives of these studies were: to estimate the overall burden of HAIs and use of AMs in the FUH; to describe HAIs and AM use by type of functionally homogeneous wards; to allow a comparison of data collected during three surveys and with Italian and European data.
Methods
The surveys took place in October 2011, November 2012 and November 2013 in the FUH, a tertiary care hospital with 857 beds in 2011 and, after moving to a new hospital in 2012, with 711 beds. The materials and tools developed for the ECDC PPS of HAI and AM use in acute care hospitals were used for these surveys: the PPS protocol and codebook v4.2, including the case definitions of HAI, the PPS data entry forms in an editable format for translation purposes, the PPS hospital software HELICSWin.net, and the user manual for the PPS hospital software HELICSWin.net [10]. All acute wards were included, except for Day-surgery and Day-Hospital departments. The study included all patients admitted to the ward before or at 8 a.m. and not discharged from the ward at the time of the survey, including neonates, if born before/at 8 a.m. For each ward, data had to be collected in a single day. Data collection for each survey was completed in two weeks. The surveys were carried out by trained medical doctors of the Postgraduate School of Hygiene and Preventive Medicine of Ferrara University, supported by doctors and nurses of the Hospital Network for Infection Control of each ward. The ECDC standard "Patient data form" was used, structured according to the following sections: demographic data, admission data, clinical data, AM use and HAI data [10]. Demographic, admission and clinical data, useful for identifying patient-based denominator data and risk factors, included: ward name, survey date, patient counter, age, sex, date of admission, surgery since admission, McCabe score [12], and invasive devices in place on the survey date (central vascular catheter, CVC; peripheral vascular catheter, PVC; urinary catheter; intubation). Only HAIs active on the survey date were recorded on the form [10]. Data collected for HAIs included: presence of a relevant invasive device before onset (intubation for PN, central vascular catheter/peripheral vascular catheter for bloodstream infection, BSI, and urinary catheter for UTI) [13], HAI present at admission, date of onset, origin of infection (if bloodstream infection, source) and microorganism data. AM data (including generic or brand name, route, indication, diagnosis/site of infection, reason) were collected when a patient was receiving an AM on the day of the survey (or in the 24 hours before the day of the survey for surgical prophylaxis). Registered drugs were classified according to the Anatomical Therapeutic Chemical (ATC) classification [14]. AMs included in the survey were ATC classes J01 (antibacterials), J02 (antifungals) and J04 (antimycobacterials). Indication for use of systemic AMs was recorded according to the following classification: community-acquired infection; infection acquired in a long-term care facility (e.g. nursing home) or chronic-care hospital; acute hospital-acquired infection; surgical prophylaxis (single dose, one day, more than one day); medical prophylaxis; other indications; unknown indication/reason; unknown/missing information on indication not verified during the survey [10]. Data were collected using the standard ECDC software HELICSWin.net v. 1.3. Statistical analysis was performed using Stata v.13. Differences in the distribution of nominal variables were assessed using Pearson's chi-square test with the significance level set at 0.05. Continuous variables were tested for normality of distribution both graphically and by means of the Shapiro-Wilk test; differences in distribution were then tested using the Kruskal-Wallis test.
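The univariate comparisons described above (chi-square for nominal variables, a Shapiro-Wilk normality check, Kruskal-Wallis for non-normal continuous variables) were run in Stata; the snippet below is only an illustrative sketch of the same tests in Python, applied to a hypothetical patient-level table whose column names are invented for the example.

```python
import pandas as pd
from scipy import stats

# Hypothetical patient-level table with one row per patient.
df = pd.DataFrame({
    "survey_year": [2011, 2012, 2013] * 100,
    "sex": ["M", "F"] * 150,
    "age": range(300),
})

# Nominal variables: Pearson chi-square across the three surveys.
table = pd.crosstab(df["survey_year"], df["sex"])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Continuous variables: Shapiro-Wilk normality check, then Kruskal-Wallis across surveys.
w, p_norm = stats.shapiro(df["age"])
groups = [g["age"].values for _, g in df.groupby("survey_year")]
h, p_kw = stats.kruskal(*groups)
print(p_chi, p_norm, p_kw)
```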
The prevalence rate of HAI was calculated as the percentage of infected patients over the total number of patients observed during each survey. AM use prevalence was calculated as the percentage of patients receiving at least one AM over the total number of patients observed. Risk factor analysis was performed by means of logistic regression in relation to two outcomes: presence of at least one HAI and receipt of at least one AM. Continuous variables were recoded into categories in order to maintain consistency with the ECDC PPS [15] and to limit the influence of outliers. The final models for both outcomes were developed by adding, in a forward stepwise manner, those risk factors that were significant (P < 0.2) in univariate analysis [16]. The significance level for inclusion in the final model was set at p < 0.05. The presence of a central vascular catheter or peripheral vascular catheter was excluded from both models because of the correlation with the parenteral administration of AMs. Presence of relevant invasive devices was considered before the onset of an HAI for the HAI regression model. Length of stay in the HAI model was considered until the date of HAI onset if an HAI occurred during the current hospital stay. Goodness-of-fit was assessed on eight smaller random sub-samples of the data using the Hosmer-Lemeshow chi-square test. The discriminatory accuracy of the multiple logistic regression models was assessed using receiver operating characteristic (ROC) analysis. Standardized prevalence rates were calculated using a 2-step method that combines the predicted probabilities of the outcome according to the regression model with indirect standardization. The predicted probabilities were used to determine the mean predicted risk of HAI or AM use for each survey. Risk index ratios were calculated by dividing the observed (unadjusted) prevalence rates by the mean predicted risk of each survey, and adjusted prevalence rates were determined by multiplying the standardized ratios by the observed prevalence rate in the entire study sample.
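As a rough illustration of the two-step standardization described above, the following sketch fits a logistic model and derives survey-specific adjusted prevalences. It stands in for the Stata analysis actually used; the variable names and simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data: outcome 'hai' plus risk factors like those retained in the text.
rng = np.random.default_rng(2)
n = 1200
df = pd.DataFrame({
    "hai": rng.binomial(1, 0.1, n),
    "urinary_catheter": rng.binomial(1, 0.35, n),
    "intubation": rng.binomial(1, 0.04, n),
    "los_cat": rng.integers(0, 4, n),          # categorized length of stay
    "mccabe": rng.integers(0, 3, n),           # McCabe score category
    "survey": rng.choice([2011, 2012, 2013], n),
})

model = smf.logit("hai ~ urinary_catheter + intubation + C(los_cat) + C(mccabe)", data=df).fit(disp=0)
print(np.exp(model.params))                    # odds ratios

# Two-step indirect standardization: observed prevalence / mean predicted risk per survey,
# then multiply the ratio by the crude prevalence of the whole sample.
df["predicted"] = model.predict(df)
overall = df["hai"].mean()
by_survey = df.groupby("survey").agg(observed=("hai", "mean"), expected=("predicted", "mean"))
by_survey["adjusted"] = by_survey["observed"] / by_survey["expected"] * overall
print(by_survey)
```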
Results
Overall, 1,239 patients were observed in the three surveys; the mean age was 62.6 years and 47.3% were male. Mean length of stay was 9.4 days (median 6 days). At the time of survey, a central vascular catheter was present in 20.2% of observed patients, a peripheral vascular catheter in 56.0%, a urinary catheter in 35.9%, and the percentage of mechanically ventilated/intubated patients was 3.8%. Differences among data collected during the three surveys proved to be statistically significant. At the time of the surveys, results for microbiological investigation were available for 120 HAIs (85.0%). Escherichia coli was the most common pathogen, followed by Klebsiella pneumoniae and Enterococcus faecalis (Tab. II: top five microorganisms isolated in healthcare-associated infections and percentage of antimicrobial resistance markers). Escherichia coli was the most prevalent pathogen even when stratifying by survey and was also the most frequent causative pathogen for UTI. During the 3-year study period, isolated strains of Escherichia coli were frequently third-generation cephalosporin resistant (range 10%-20%), but only in 2011 were they also carbapenem resistant. In 2011, 33.3% of Klebsiella pneumoniae strains were third-generation cephalosporin resistant and 16.7% were carbapenem resistant. Overall, the AM use prevalence was 51.1% (at least one AM). A total of 858 AMs were administered (Tab. III). Parenteral administration was the most prevalent route (69.
Discussion
The described prevalence rate of nosocomial infections was higher than the values reported in other studies [17-21], including the ECDC's 2011 report [15], which estimates a prevalence rate of 6.0% (country range 2.3%-10.8%) in European acute-care hospitals (6.1% in Italy). This difference in the reported values is due in part to the different characteristics of the hospitals included in the European survey, which collects results from primary, secondary, tertiary care and specialized hospitals in different countries. However, the prevalence rate of HAI in FUH remains higher even when comparing results from tertiary care hospitals only (7.2%). One possible reason may be the fact that the surveys were carried out by independent auditors, to avoid conflicts of interest and to ensure the integrity of the auditing process. As confirmed by existing literature, Intensive Care Units were the most affected wards [15, 17-21]. UTIs were the most common HAI in all three surveys in FUH, unlike what is reported in other studies where PN and surgical site infections were more prevalent [15, 17, 18]. Use of urinary catheter, a well known risk factor for UTIs [22-24], was higher than what is reported in the literature [15, 19, 21]. Prevalence of surgical site infections was found to be lower than what is reported by other similar surveys [15, 17-21]. Appropriate urinary catheter indication is certainly an area which requires further analysis to assess possible overuse and guide practical interventions [25].

Year by year comparison of nosocomial infections and risk factors in the three surveys delivers substantially constant results even when corrected for case-mix by means of logistic regression. Risk factor analysis is consistent with data in the literature [15, 19, 21]. Statistically significant risk for HAI occurrence is independently associated with increased length of stay, McCabe score "Rapidly fatal disease", use of urinary catheter and mechanical ventilation. The risk associated with mechanical ventilation suggests a need for more effective preventive measures against ventilator-associated infections [26]. At the time of the surveys, results for microbiological investigation were available for 120 HAIs (85.0%). Escherichia coli was the most frequent microorganism isolated in all three surveys and the most frequent causative pathogen for UTI, followed by Klebsiella pneumoniae, Enterococcus faecalis and Candida albicans. These results show a higher prevalence of Enterobacteriaceae when compared with the ECDC's report data [15], which can be explained by the higher frequency of UTIs in FUH. AM use rates were higher than those reported in the literature [15, 19], while the ratio of the average number of AMs to treated patients is consistent with the value reported by ECDC [15], showing no evidence of a higher rate of multidrug protocol prescriptions in FUH. Fluoroquinolones, third-generation cephalosporins and combinations of penicillins (including beta-lactamase inhibitors) were the AMs most frequently prescribed in all three surveys, a result similar to other literature reports, which further underlines a widespread use of broad spectrum antibiotics combined in multidrug protocols that is often necessary to counteract the increasing prevalence of AM resistance [15, 17-19, 27]. On the other hand, the excessive and inappropriate use of antibiotics is the prime mover of the rapidly increasing prevalence of antibiotic-resistant microorganisms [28, 29]. AMs were mainly prescribed to treat an infection (mainly community acquired). Medical prophylaxis was the second most frequent indication in all three surveys. These results are similar to those reported by the ECDC's 2011 point prevalence survey for Italy [15]. Surgical prophylaxis was mostly prescribed for more than one day, while one-day surgical prophylaxis was the least frequently prescribed. These results are substantially similar to those reported by ECDC for Italy in 2011 and other similar studies [15, 18, 19], underlining that antibiotics are used for longer than what is suggested by the international consensus [30], further stressing the need for specific stewardship programs [31, 32].

Year by year analysis shows a decreasing, although not statistically significant, prevalence of AM prescription in FUH, dropping from 54.4% in 2011 to 48.4% in 2013, a result confirmed by standardization through the logistic regression model. AM stewardship is a critical area of intervention in FUH, aimed at changing prescribing practices, leading to a better control of drug resistant microorganisms, improved appropriateness of antibiotic use and decreased costs.

Conclusions

FUH has a long history of activities aimed at risk management and infection control, based on a multimodal and multidimensional approach [11]. Moreover, the hospital's infection control policy includes: audit and feedback to improve compliance of the healthcare workforce with good practices; retraining courses and educational programs; drafting reminders to support good practices for workers, patients and caregivers; continuous surveillance of surgical site infections; and active support for the WHO Campaign "Save lives: clean your hands" since 2006, with participation as an international site in the experimentation of the WHO Guidelines on Hand Hygiene in Health Care (Advanced Draft) [33, 34]. Despite their limitations, PPSs are not expensive, take little time to carry out and need few human resources. PPSs are easily repeatable and provide meaningful information to use for specific targeted interventions. The yearly repetition will be a useful means of keeping interest alive on the subject of HAI and AM use [35] and highlighting how changes in healthcare practices affect outcome variables.

... of congresses/conferences, and acting as investigator in clinical trials; the other Authors have no conflicts to disclose.

Authors' contributions

PA, GG, AS, MCM were responsible for the research coordination and contributed to the protocol definition, data collection, data analysis, manuscript drafting and critical revision of the manuscript. BB, AV, AF contributed to the data collection, data analysis and critical revision of the manuscript. All authors read and approved the final manuscript.
"year": 2016,
"sha1": "99a04f67a712a02f1ec92627870582ffe8cba7f1",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "99a04f67a712a02f1ec92627870582ffe8cba7f1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Emergence of disordered branching patterns in confined chiral nematic liquid crystals
Significance

Chirality breaks the mirror symmetry and may introduce rich new dynamics to physical systems, as observed in particle physics, condensed matter, chemical, biological, and soft matter systems. Here, we consider a liquid crystal mixed with chiral molecules to explore the role of chirality in driving the destabilization of chiral bubbles and the branching dynamics of cholesteric fingers, as a response to thermal forcing in a confined space. We show that the large-scale organization and topology of the chiral textures emerge from stochastic tip branching and inhibition. Our work provides the means to control the organization of confined cholesteric patterns and paves the way for future studies in chiral systems, such as self-organization in noncentrosymmetric magnets.
Spatial branching processes are ubiquitous in nature, yet the mechanisms that drive their growth may vary significantly from one system to another. In soft matter physics, chiral nematic liquid crystals provide a controlled setting to study the emergence and growth dynamics of disordered branching patterns. Via an appropriate forcing, a cholesteric phase may nucleate in a chiral nematic liquid crystal, which self-organizes into an extended branching pattern. It is known that branching events take place when the rounded tips of cholesteric fingers swell, become unstable, and split into two new cholesteric tips. The origin of this interfacial instability and the mechanisms that drive the large-scale spatial organization of these cholesteric patterns remain unclear. In this work, we investigate experimentally the spatial and temporal organization of thermally driven branching patterns in chiral nematic liquid crystal cells. We describe the observations through a mean-field model and find that chirality is responsible for the creation of fingers, regulates their interactions, and controls the tip-splitting process. Furthermore, we show that the complex dynamics of the cholesteric pattern behaves as a probabilistic process of branching and inhibition of chiral tips that drives the large-scale topological organization. Our theoretical findings are in good agreement with the experimental observations.

liquid crystals | chirality | interface dynamics | branching process

Branching processes are responsible for the formation of a vast number of ramified structures observed in geology, chemistry, biology, and physics (1). In soft matter physics, fingering instability, whereby a flat interface becomes unstable, giving rise to tip splitting, is a well-known mechanism of spatial branching (2-4). Several macroscopic models have been formulated to describe the context-dependent mechanisms of branching (1, 5-7). However, it remains a challenging task to identify the key ingredients that lead to the large-scale branching self-organized patterns in each case.
The rich phenomenology of chiral nematic liquid crystals (CNLCs) renders them an ideal system to study pattern formation and branching (8-11). CNLCs can be created by doping a nematic liquid crystal, characterized by a long-range orientational order but not a positional one, with chiral molecules (12-14). The addition of chiral dopants can induce a spontaneous twist deformation in the nematic phase, creating a helical structure (12, 13, 15). The main feature of this phase is the characteristic length of the helix, known as the cholesteric pitch p, which corresponds to the distance required for one full rotation of the nematic director vector n(r), where r = (x, y, z) is a position vector. The pitch is the mesoscopic manifestation of the molecular chirality (16), while the director vector field accounts for the local average orientation of liquid crystal molecules (17, 18). When subjected to homeotropic anchoring in a cell of thickness d, the helical phase gets frustrated, so that given a critical degree of frustration, which is measured in terms of the ratio d/p, the system transitions to an unwound (nematic) metastable state. This state is purely geometric and is sustained by the competition between the pitch, geometric effects introduced by the cell thickness, and elasticity (13, 19, 20). The twisted or wound structure can be recovered by applying a voltage, a temperature difference, or a change of the thickness of the cell in the unwound state (13). In general, the reappearance of the twisted phase is in the form of a translationally invariant configuration (TIC) or in the form of cholesteric fingers of type 1 (CF1). The TIC phase is characterized by a twist along the cell thickness n(z) (SI Appendix, Fig. S1) and the CF1 by a director field of the general form n(x, y, z) (Fig. 1 A-C and SI Appendix, Fig. S1). In directional growth experiments with voltage, other types of cholesteric fingers (CF2, CF3, and CF4) have been observed (21). The recovery of the twisted structure can be described by the minimization of the Frank-Oseen free energy with an additional chiral term (SI Appendix) (12). This type of noncentrosymmetric interaction is also modeled in chiral magnets and in particle physics (22, 23).
The winding/unwinding transition of chiral nematic liquid crystals has been widely studied from experimental and theoretical perspectives (10, 24-29). Near this transition, the distinctive CF1 appear (Fig. 1 A-I) (30). These elongated chiral textures nucleate from the unwound background and may elongate in arbitrary orientations from both ends (Fig. 1A and SI Appendix, Fig. S1 for the schematic director fields in the midplane of the cell and in a cross-section along d, respectively). The CF1 are dissipative soliton-like structures with a well-defined width that is regulated by an in-plane good twist of the nematic director (Fig. 1B) (25, 31). The elongation of fingers introduces the good twist into the frustrated sample. Fingers are asymmetric and exhibit two different tips, a rounded and a pointy one. The difference in morphology is associated with the handedness of the nematic director near the tips: the good twist gives rise to rounded tips (Fig. 1C), while the bad twist produces pointy ones (Fig. 1D) (25). In these frustrated CNLCs, above a critical forcing (of temperature, voltage, or confinement), fingers invade the whole system through a branching dynamic. Pointy tips propagate in a straight line, nucleating rounded tips through a side-branching mechanism, and rounded tips become unstable, undergoing tip splitting as they propagate (11, 25, 32). Pointy tips, unlike rounded tips, are not generated during branching events and quickly reconnect with the cholesteric pattern or merge with impurities in the system (25). A combination of side branching and reconnection of pointy tips gives rise to closed loops of CF1. Closed loops can transform into localized twisted objects (29, 33, 34). These localized structures have been termed elementary torons, in particular, triple-twist toron-1 (35, 36). They exhibit a skyrmion-like structure in the midplane director field (36, 37) (cf. Fig. 1 J and K).
Here, we refer to these elementary torons as chiral bubbles, which have also been termed spherulites (13). While more complex cholesteric textures can arise to alleviate frustration (34-36, 38), in our study we focus only on CF1 and on the interface of chiral bubbles. Similar to glass beads, chiral bubbles can act as nucleation sites for CF1, avoiding the creation of pointy tips (29, 31), which are energetically unfavorable. Hence, the long-term growth dynamic is governed by the continuous elongation and splitting of rounded tips, resulting in a disordered branching cover. Despite all the work conducted on frustrated CNLCs, the mechanisms that drive the tip splitting of rounded tips of CF1 and the self-organization of disordered ramified patterns have not yet been studied in detail. In this work, we study how the tip-splitting instability develops at cholesteric interfaces and which interaction rules ultimately give rise to the large-scale cholesteric branching pattern. For this, we focus on temperature-tuned chiral nematic liquid crystal experiments that allow us to control the transition toward branching and the formation of ramified patterns by heating the system. Using an adequate order parameter and its minimal model, which is derived from first principles, we demonstrate the role of chirality in the tip-splitting mechanism and the emergence of the disordered branching pattern with a velocity-curvature equation for the cholesteric interface. We show that during the growth of the chiral fingers, there is a selection principle for the morphology and speed of the rounded tip, which depends on the forcing of the system. From these analyses, we deduce a small number of crucial interactions that regulate the growth process and show that the topological features of the large-scale pattern emerge from stochastic branching and termination events.
Results
Emergence of Disordered Branching Patterns. To explore the growth of cholesteric branching patterns experimentally (Fig. 1 and SI Appendix, Movie S1), we consider two chiral nematic liquid crystal cells, composed of mixtures of a commercial nematic liquid crystal E7 (Merck) and chiral molecules EOS12 (39), under thermal forcing. The cholesteric pitch p in each sample depends on the EOS12 concentration and on the temperature within the cell (14). The samples were introduced into a thermal chamber and then placed between crossed polarizers (Fig. 1E). In this setup, dark regions correspond to the unwound phase, while birefringent regions (shades of blue) correspond to the cholesteric phase (Fig. 1F). To trigger the emergence of the cholesteric phase, we initialize the experiments at room temperature (20 °C), where the CNLC is in the unwound state, and increase the temperature at a rate of 0.35 °C min−1 until reaching a winding phase. Fig. 1F (cell #1; T = 51.3 °C, p = 3.4 μm, d/p < 58.8; Materials and Methods for details) shows a steady state of the system, which corresponds to a disordered self-organized labyrinthine pattern (40). This pattern develops mainly from the elongation and splitting of rounded tips, which leads to a ramified texture constituted locally by various connected CF1 pointing in arbitrary directions. The cholesteric fingers may be initially nucleated from impurities in the unwound phase, as shown in Fig. 1G (cell #2; T = 51.7 °C, p = 12.9 μm, d/p = 0.7), or at the cholesteric interface of a chiral bubble, which is created by cooling a closed loop of CF1 (29) (see cell #1 in Fig. 1H), or at the interface of glass beads, as depicted in Fig. 1I (cell #2), where molecular deformations are enhanced (31, 41). Under the experimental conditions considered here, the winding/unwinding transition is characterized by the emergence of CF1 (Fig. 1), instead of the TIC phase, when d/p ≈ 0.7 and the transition temperature (Tc) is around 50 °C. In previous experiments, the TIC phase emerged subcritically in a mixture of E7 with EOS12 at 3 wt% with d/p = 0.4 and Tc ≈ 61.3 °C (29). The texture selection and the type of transition are governed by the elastic constants of the CNLC mixture E7-EOS12 and the confinement ratio in the cell (SI Appendix) (13). In consequence, in the current experimental setup, CF1 are more stable than the TIC phase.
In cells #1 and #2, the system generally avoids the creation of pointy tips by nucleating rounded tips of CF1 from chiral bubbles or glass beads instead. Therefore, the merging process of pointy tips described in the Introduction section can be neglected. We illustrate schematically the in-plane director field of the chiral bubble (Fig. 1 J and K) and around the glass bead (Fig. 1 L and M) to highlight the similarity between both interfaces and rounded tips (Fig. 1A). In the following, we focus our attention on the growth of fingers and their rounded tips, which can destabilize and undergo branching.
Before introducing a model to describe the rounded-tip dynamic and the subsequent patterning process, we explored whether further qualitative insight into the growth process could be extracted from the spatial organization of the labyrinthine pattern (cell #2 in Fig. 2A). Analysis of the power spectrum of the spatial patterns (Fig. 2B) revealed a characteristic wavelength of λc = 14.9 μm and a powder-like ring spectrum with local order, characteristic of labyrinthine patterns (Fig. 2 B, Inset) (40). Furthermore, the distribution of segment lengths (defined as the distance between two branching points along the cholesteric phase) was well fitted by a gamma distribution (data in Fig. 2C), whose exponential tail suggests that the timing between consecutive branching events is uncorrelated (7). The typical segment length (observed as a kink for short fingers) indicates a short-term memory or maturation process between consecutive branching events of a tip. From the temporal evolution of the branching pattern, we noted that branching events could be inhibited by the neighboring pattern (green arrowheads in Fig. 2D), with some newly formed tips receding in favor of the growth of other, more developed tips (white arrowheads in Fig. 2D). These interactions lead to remodeling of the patterns, further contributing to the disordered self-organization of the patterns.
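The two pattern-analysis steps mentioned above can be sketched numerically: the characteristic wavelength is read off the radially averaged spatial power spectrum, and the segment-length statistics are summarized by a gamma-distribution fit. The snippet below is a schematic reimplementation on synthetic data, not the processing actually applied to the micrographs.

```python
import numpy as np
from scipy import stats

def characteristic_wavelength(image, pixel_size_um):
    """Peak of the radially averaged power spectrum -> characteristic wavelength."""
    power = np.abs(np.fft.fft2(image - image.mean())) ** 2
    ky = np.fft.fftfreq(image.shape[0], d=pixel_size_um)
    kx = np.fft.fftfreq(image.shape[1], d=pixel_size_um)
    k = np.hypot(*np.meshgrid(ky, kx, indexing="ij")).ravel()
    p = power.ravel()
    bins = np.linspace(0, k.max(), 150)
    which = np.digitize(k, bins)
    radial = np.array([p[which == i].mean() if np.any(which == i) else 0.0
                       for i in range(2, len(bins))])  # skip the k ~ 0 bin
    k_peak = bins[np.argmax(radial) + 1]  # lower edge of the peak bin
    return 1.0 / k_peak

# Synthetic stripe pattern, ~15 um wavelength, 0.5 um pixels.
x = np.arange(512) * 0.5
image = np.tile(np.sin(2 * np.pi * x / 15.0), (512, 1))
print(characteristic_wavelength(image, 0.5))   # ~15 um

# Gamma fit to segment lengths (distances between branching points); synthetic data here.
segments = stats.gamma.rvs(a=2.0, scale=8.0, size=500, random_state=0)
shape, loc, scale = stats.gamma.fit(segments, floc=0)
print(shape, scale)                            # recovers roughly a=2, scale=8
```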
Altogether, these observations suggest that the dynamic of growth, branching, and inhibition of rounded tips controls the self-organization of the chiral labyrinthine patterns. To understand how these mechanisms arise in the context of CNLCs, in the following we introduce a Ginzburg-Landau-type model that allows us to relate the interaction mechanisms to the chiral nature of the liquid crystal.
The Chiral-Anisotropic Ginzburg-Landau (CAGL) Model. Close to the winding/unwinding transition and in the long-pitch limit of the chiral nematic liquid crystal, the CAGL model, Eq. 1, can be derived (SI Appendix for details), where A(x, y, t) = αe^{iθ} is the complex order parameter close to the transition (26), ∂η = ∂x + i∂y is the Wirtinger derivative, and Ā is the complex conjugate of A. Here, µ is the bifurcation parameter describing the winding/unwinding transition, while the parameter β = β(K12, K32, d/p) controls the type of bifurcation, which is subcritical (β > 0) for the experiments considered here (29). The elastic coupling, characterized by the parameter δ = (K12 − 1)/2(K12 + 1) (42), accounts for both isotropic and anisotropic effects. The last term breaks the mirror symmetry in the plane of the cell and is controlled by the parameter χ = χ(K12, K32, d/p),
where the parameters {K1, K2, K3} are the elastic constants of the CNLC (12). The model (Eq. 1) is variational, i.e., ∂tA = −δF[A, Ā]/δĀ, where F is the free energy of the system (Eq. 2), which is minimized during the dynamics of Eq. 1. The CAGL Eq. 1 exhibits the same equilibria observed in CNLC experiments: a homeotropic phase A_o = 0 (region I in Fig. 3A); a translationally invariant configuration (TIC) phase A_T (region II in Fig. 3A); a modulated TIC (starting in region II and crossing the green curve into region V or VI in Fig. 3A); chiral finger states (region IV in Fig. 3A); chiral bubbles (region III in Fig. 3A); and cholesteric labyrinths (starting in region IV and crossing the blue curve into region V in Fig. 3A) (13, 29). Additionally, the model (Eq. 1) has a region of bistability of the states A_o and A_T (µ_lb ≤ µ ≤ µ_ub) that contains a Maxwell point µ_MP, where both states are energetically equivalent. The model also exhibits fingers and tip splitting (Right panels of Fig. 3A), where fingers nucleate from the homeotropic phase or from a chiral bubble and invade the system through elongation (region IV of Fig. 3A) or branching of their rounded tips (region V of Fig. 3A). Note that the fingers emerge at µ < µ_MP (region V in Fig. 3A), where the state A_o is more stable than A_T. In brief, the appearance of a finger with a given width is not explained by a modulational instability, as in the case of the modulated TIC phase (13).
To understand the emergence of the chiral fingers from an energy minimization perspective, we first study the properties of an infinite finger in the CAGL Eq. 1. In the top panel of Fig. 3B, the polarized field ψ(x, y) ≡ Re(A)Im(A) of the infinite finger solution is shown, together with the horizontal profile of its modulus R and phase gradient ∂_x φ, where we use the polar representation A(x, y) = R e^{iφ}. The profiles show bell-shaped soliton structures, which are characterized by their heights (R̄ for the modulus and Φ̄ for the phase gradient) and widths (w and w_φ), and can be approximated by R ≈ R̄ sech(x/w) and ∂_x φ ≈ −Φ̄ sech(x/w_φ), respectively. Introducing this ansatz into the free energy (2), we obtain Eq. 3 (SI Appendix), where F_o = −2µR̄² − 2βR̄⁴/3 + 16R̄⁶/45, and I_5, I_6, I_7 are integrals that depend on the coupling between R, φ, and ∂_x φ.
The finger solution is supported by the homogeneous state through F_o, which is positive for µ < µ_MP. Hence, the only energetic contribution that stabilizes the finger solution is the chirality, proportional to χ in Eq. 3, while all the other terms in Eq. 3 act as a nucleation barrier. To find the optimal finger width, we minimize the free energy Eq. 3 with respect to w in the limit w/L ≪ 1, w_φ/L ≪ 1, where L is the length of the finger in the y-direction. As a result of the dependence on the integrals in Eq. 3, we can only find a relationship between the optimal parameters of the finger solution, w³ ≈ 3π³ χ w_φ³ R̄³ Φ̄ / (4F_o) (SI Appendix). Therefore, the nontrivial phase structure plays a fundamental role in defining the width, and F_o must be positive to observe stable finger solutions, i.e., the most stable homogeneous state needs to be A_o. Note that a similar energy dependence is obtained in bistable reaction-diffusion systems (6).
The bottom panel of Fig. 3B shows the variation of the one-dimensional finger width w as a function of the chirality χ (red curve), which has a maximum at χ = χ_b. When χ > χ_b, the free energy F_finger (Eq. 3) becomes negative (yellow line in the Bottom panel of Fig. 3B) and the system favors the propagation of fingers by elongation of the two tips (Bottom Right panels of Fig. 3B). Conversely, when χ < χ_b, the chiral finger shrinks and eventually disappears due to the merging of both tips.
When chiral fingers emerge, they propagate and cover the whole system. In the experiments on CNLCs, the temperature specifies the pitch and finger width and fixes the propagation speed of CF1, v. We note that the chiral finger growth has a selection mechanism similar to that observed in dendritic growth (44, 45), where the propagation speed is controlled by the curvature of the tip. By increasing the temperature, fingers propagate faster and the tip swells, as shown in Fig. 3C.
CF1 may be characterized morphologically by the shape factor w_tip/w, where w_tip is the diameter of the rounded tip and w is the finger width (cell #2 in the left panel of Fig. 3C). At the critical speed v_ts and corresponding critical shape factor, propagating tips become unstable, swelling and undergoing tip splitting. This branching process may be interpreted as a more efficient dissipation mechanism for developing chirality than simple tip propagation. The tip-splitting dynamics is characterized by the inflation, flattening, and interfacial modulation of the rounded tip (see the snapshots t_1 to t_3 in the Left panel of Fig. 3C). The Right panel of Fig. 3C shows the change in morphology in numerical integrations of Eq. 1 for different values of the chiral parameter χ. Remarkably, the relation between speed and shape obtained through numerical simulations closely resembles the experimental observations. The critical value of the shape factor at which tip splitting takes place is close to its experimental value w_tip/w ≈ 1.4, and in both cases tips undergo the same curvature and morphological changes during branching.
One way to understand the emergence of tip splitting is to analyze it from the perspective of local interface dynamics (5, 6). Recently, a local zero-dimensional interface equation was derived from Eq. 1 near the critical point {µ_MP, χ_o} for chiral bubbles (29). There, it was shown that a balance between metastability, a linear curvature term due to chirality, and a squared curvature contribution defined the size of chiral bubbles. Here, to describe the tip-splitting instability, one needs to account for the spatial modulation along the interface and the proper stabilization mechanism at small wavelengths. We model the interface of the rounded tip as the interface of half of a chiral bubble in model Eq. 1. The tip splitting is then analogous to a fourth-mode instability of the interface of a full cholesteric bubble, as shown in the Bottom Right panel of Fig. 3A. Therefore, to extract the curvature dynamics of the interface, we perform a nonlinear stability analysis around the interface of the chiral bubble solution A_cb = R_o e^{iφ} in Eq. 1 and obtain the speed-curvature, or Gibbs-Thomson (46), relation of Eq. 4, where A, B, C, and D are constants (SI Appendix). A similar version of Eq. 4 has been heuristically proposed to explore the local behavior of interface dynamics (5), derived in the study of growth laws of droplets (47) and bistable reaction-diffusion systems (6), and also used in the framework of bacterial growth (48). The constants A, B, and D are always positive above the dark yellow line in the phase diagram of Fig. 3A. For large-wavelength perturbations, Eq. 4 has a modulational instability due to a Mullins-Sekerka type of term (linear in the curvature κ), which is tuned by the chirality, i.e., growth is enhanced in curved regions of the interface. At short wavelengths, the instability is saturated by the last term in Eq. 4, which plays the role of line tension. The cubic term in Eq. 4 is responsible for the tip splitting, where the constant C is defined by a nontrivial balance between chirality, diffusion, and energy differences between states, and it must be positive to ensure that the curvature dynamics is variational (SI Appendix). The relation between curvature and splitting explains why a tip must swell before branching, a dynamic that also explains the maturation (or refractory period) feature observed earlier in the segment length distribution (Fig. 2C).
Another key ingredient for the formation of the cholesteric branching patterns is the repulsion between the chiral fingers (25). To study the origin of the CF1 repulsion in our model, Eq. 1, we consider two infinite cholesteric fingers. The Bottom panels of Fig. 3D show two different instants of the interaction between two cholesteric fingers, where a nontrivial structure in the gradient of the phase is observed between the two fingers. As it turns out, this structure is responsible for the repulsion between fingers. Following this idea and the symmetry of the modulus R and the phase gradient ∂_x φ, we model the repulsion between fingers as the interaction between a single finger, at a position x_o(t) from the origin, and half of the phase structure near the origin, where A(t) = R(t)e^{iφ(t)} and θ_b represents the phase structure near x = 0. Numerical observations show that the tail of θ_b decays like e^{−bx}/x for a positive constant b. Based on the variational form of model Eq. 1, the dynamics of the position x_o is given by Eq. 5 (SI Appendix). The prefactor N(µ, χ) is positive in the range of parameters where chiral fingers are observed (regions IV and V of Fig. 3A). Thus, the interaction between fingers is repulsive in order to minimize the energy of the system. By integrating the repulsive relationship, we get the semianalytical curve x_o(t) shown in the Top panel of Fig. 3D.
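The explicit form of Eq. 5 is given only in the SI Appendix; the sketch below therefore assumes an illustrative repulsive law built from the two facts stated in the text, namely a positive prefactor N(µ, χ) and a tail of the phase structure decaying like e^{−bx}/x. All parameter values are placeholders.

import numpy as np
from scipy.integrate import solve_ivp

N, b, x0 = 1.0, 2.0, 1.5   # illustrative prefactor, decay constant, initial gap

def repulsion(t, x):
    # Assumed form: the finger velocity follows the exponentially screened
    # tail of the phase structure left near the origin.
    return N * np.exp(-b * x) / x

sol = solve_ivp(repulsion, (0.0, 50.0), [x0], max_step=0.1, dense_output=True)
# sol.sol(t)[0] gives a separation x_o(t) that increases quickly at first and
# then slows down drastically as the screened repulsion dies out,
# qualitatively like the semianalytical curve in the Top panel of Fig. 3D.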
Organization of the Large-Scale Chiral Branching Pattern. In the previous section, we described how the local destabilization of the nematic phase leads to the propagation of a chiral (or cholesteric) phase, which organizes into a ramified pattern that expands through the propagation of chiral tips. The chirality drives the growth and splitting of tips as well as the repulsion between fingers. From the propagation of the branching pattern (experiment Fig. 2A and model Fig. 4A), we note that actively elongating and branching tips localize entirely at the periphery of the growing pattern, while tips submerged within the labyrinthine structure arrest their growth due to steric interactions with the surrounding pattern. The interactions that give rise to the branching cholesteric pattern can then be reduced to tip propagation and branching, repulsion (which results in alignment of neighboring segments), and tip inhibition. Moreover, the characteristic width of CF1 combined with the energy minimization dynamics leads to a spatial pattern structure with a well-defined wavelength, with short-scale order but large-scale disorder (Fig. 2B).
To study how the large-scale disorder emerges from the local interaction rules, we first focused on the model (1), which allowed us to study the topological properties of large-scale patterns without the influence of additional structures (Fig. 4A). From these patterns, we were able to extract the branching trees (Fig. 4B), produced by reducing the pattern to a network of vertices and edges, where each edge corresponds to a chiral segment that connects vertices representing branching points. Here, we were concerned solely with the topology of the network, which is completely characterized by the levels (or generations) of branching. The subtrees of the network, defined as sets of branches with a last common ancestor node at level 2, showed a high heterogeneity of sizes (number of segments) and persistence (number of generations beyond level 2); see the colored subtrees in Fig. 4 A and B, indicating a rather random organization of the topology of the pattern as seen, for example, in biological tissues (7, 49, 50). A heterogeneous organization of the branching tree may arise from a stochastic process of branching and inhibition of propagating chiral tips, which was also supported by the exponential decay of segment lengths (Fig. 2C). To test this hypothesis, we computed the probability that tips at a given level arrest their growth (Fig. 4C). As time passes, tips that are not constrained may continue branching, lowering the termination probability. We then simulated a zero-dimensional birth-death process, where particles either branched or became inhibited with probabilities depending on their generations and given by Fig. 4C (Methods for details). To compare the results of the model (1) and the stochastic birth-death process, we looked at the distribution of subtree sizes and persistence, from which we found excellent agreement (Fig. 4 C and D). These results show that even though the system has well-defined interaction rules, their large-scale organization emerges from random events of branching and termination that are regulated locally, at the single-tip level (see the typical branching tree from the birth-death process in Fig. 4F).
To verify that these observations also apply to the experimental branching patterns, we looked at 7 realizations of the experiments, where multiple distinct patterns nucleated from the glass beads in the sample (Figs. 1E and 4G). The final patterns had a range of sizes and interacted as they developed, in some cases inhibiting the growth of neighboring tips. By focusing on trees with more than one branching event, we reconstructed the termination probability (Fig. 4H) and used it as input in the stochastic birth-death process. This again resulted in good agreement between experiments and the stochastic process (Fig. 4 I and J, where full trees were analyzed due to their small size), strongly supporting the conclusion that the large-scale topology of the chiral branching patterns is regulated locally by statistical rules of branching and termination, resulting in the disordered patterns observed.
In summary, we have investigated experimentally and theoretically the spatial and temporal organization of thermally induced branching patterns in chiral nematic liquid crystal cells. By using the Ginzburg-Landau-type description of CNLCs, we established the role of chirality in the formation of disordered branching patterns. Here, the (de)stabilization of chiral fingers arises from an energy minimization process, which also leads to tip splitting and causes repulsive interactions between fingers. We extracted a minimal set of local rules that regulate the pattern growth (tip elongation, branching, repulsion, and inhibition) and showed that the large-scale organization of the branching pattern is described by a stochastic birth-death process, where branching and termination events are probabilistic in nature. The large-scale organization of the chiral phase then emerges from local interactions at the single-tip level, which minimize energy efficiently through branching.
Our analyses show that even though the liquid crystal structure is inherently 3D, the growth dynamics of the branching phase follows simple local rules that take place on the 2D midplane, resulting in branching and inhibition of tips. Therefore, neglecting the three-dimensional liquid crystal structure is a good approximation for studying the formation of disordered branching patterns resulting from CF1 destabilization. An interesting future direction of research is to explore other finger structures, such as CF2, CF3, and CF4 (21), and investigate their space-filling dynamics. Additionally, in a broader context, it will be interesting to explore possible branching processes of stripe phases in chiral magnets (51).
Materials and Methods
Materials and Experimental Setup. We consider a chiral liquid crystal composed of a mixture of a commercial multicomponent nematic liquid crystal E7 (pure components: 4-cyano-4'-n-pentyl-1,1'-biphenyl (5CB, 51%); 4-cyano-4'-n-heptyl-1,1'-biphenyl (7CB, 25%); 4-cyano-4'-n-octyloxy-1,1'-biphenyl (8OCB, 16%); 4-cyano-4''-n-pentyl-1,1',1''-terphenyl (5CT, 8%)) from Merck with the chiral molecule EOS-12 (4-(5-dodecylthio-1,3,4-oxadiazole-2-yl)phenyl 4'-(1''-methyl heptyl-oxy)benzoate) at 25 wt% and 7 wt%. The cholesteric pitch p associated with each chiral-nematic mixture is measured with the Grandjean-Cano technique at the temperatures of observation, using a planoconvex cylindrical lens of radius 10.3 mm (Thorlabs) as thickness modulation. Two cell preparations were implemented. In the first one, a chiral liquid crystal droplet (with EOS-12 at 25 wt%) is deposited over a soda-lime glass sheet using a microcapillary tube and covered with another sheet of the same characteristics (2.5 cm × 2.5 cm area and 4 mm thickness). This type of glass induces a homeotropic anchoring on the liquid crystal sample. The squeezed disk-shaped droplet reaches an equilibrium diameter of approximately 1 cm. The cell obtained with this method is cell #1 with d = 200 μm. The cell thickness was obtained with a Mitutoyo digital micrometer with an accuracy of 1 μm. We note that to increase the resolution of the images of chiral bubbles, we used a 50x objective with a small working distance (Leica, HC PL APO 50x/0.90). In our experimental setup it was then unavoidable to squeeze cell #1 with the objective, thus pushing the cell thickness to an effective value d < 200 μm. For this reason, this cell was used exclusively for observational purposes (Fig. 1 F and H). The second method consists of filling chiral nematic liquid crystals (with EOS-12 at 7 wt%), by capillary action at 70 °C, into a fabricated cell (SG025T090uT180 manufactured by Instec) of thickness d = 9 μm, which is chemically treated to give homeotropic boundary conditions and whose thickness is fixed by glass beads. This cell, #2, was used in the experimental measurements discussed in the text (Figs. 2, 3C, and 4 G-J) and in the observations shown in Fig. 1 G and I. The prepared cells are introduced into a Linkam T95-PE hot stage and placed between crossed polarizers in a Leica DM2700P microscope with 5x, 10x, and 50x objectives. A CMOS camera records the branching dynamics.
Numerical Integration of the Chiral-Anisotropic Ginzburg-Landau Equation. To solve model Eq. 1, we write the equation in terms of its real part u and imaginary part v (A = u + iv). Then, we discretize the space by using a finite-difference scheme with a spatial step of Δx = 0.25 and a three-point stencil, using no-flux boundary conditions. The coupled equations for u and v are numerically integrated in time with the Runge-Kutta 4 time integrator with a temporal step of Δt = 0.01. The finger solutions shown in Fig. 3 were created by perturbing the zero solution with a rectangular perturbation of width 2w and amplitude (u, v) = (1.5, 0) in region IV of the phase diagram in Fig. 3A. Depending on the proximity of the tips to the boundaries (with no-flux boundary conditions), we can annihilate tips and create fingers with only rounded tips (Fig. 3C) or without tips (Fig. 3 B and D). The chiral bubble solutions are created following the experiment. We start with a finger in region IV and sweep the parameter χ or µ to access the branching region V of the phase diagram in Fig. 3A. The pointy tip of a finger can merge with a side branch and create a CF1 loop solution (29). Then we change the parameters into region III and the CF1 loop solution collapses into the chiral bubble solution. This localized solution is used as the initial condition in Figs. 3A and 4A (in the branching region V). All the numerical results related to Fig. 3 are obtained in square grids of size 200 × 200. In the case of Fig. 4A, we used a square grid of size 1000 × 1000.
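Since Eq. 1 itself is written out only in the SI Appendix, the sketch below shows just the integration framework described in this paragraph (finite differences with a three-point stencil per direction, no-flux boundaries, and an RK4 time stepper), with a generic subcritical Ginzburg-Landau right-hand side as a stand-in; the actual chiral and anisotropic couplings of Eq. 1 are not reproduced, and all parameter values are placeholders.

import numpy as np

dx, dt, n = 0.25, 0.01, 200   # steps and grid size quoted in this section

def laplacian(f):
    # Three-point stencil in each direction; edge padding gives zero-gradient
    # (no-flux) boundary conditions.
    fp = np.pad(f, 1, mode="edge")
    return (fp[2:, 1:-1] + fp[:-2, 1:-1] + fp[1:-1, 2:] + fp[1:-1, :-2]
            - 4.0 * f) / dx**2

def rhs(u, v, mu=-0.4, beta=1.0):
    # Placeholder dynamics for A = u + i v: linear growth, subcritical
    # cubic-quintic nonlinearity, and isotropic diffusion. The chiral and
    # anisotropic terms of Eq. 1 (controlled by chi and delta) are omitted.
    a2 = u**2 + v**2
    du = mu * u + beta * a2 * u - a2**2 * u + laplacian(u)
    dv = mu * v + beta * a2 * v - a2**2 * v + laplacian(v)
    return du, dv

def rk4_step(u, v):
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v)
    k3u, k3v = rhs(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v)
    k4u, k4v = rhs(u + dt * k3u, v + dt * k3v)
    return (u + dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

u, v = np.zeros((n, n)), np.zeros((n, n))
u[95:105, 50:150] = 1.5          # rectangular perturbation of the zero state
for _ in range(20000):
    u, v = rk4_step(u, v)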
Shape Factor w_tip/w and Speed of CF1. To characterize the morphology of the chiral fingers, we introduced in the text the dimensionless shape factor w_tip/w. The width w of the fingers is calculated as the finite width at half-maximum of the transversal profiles of the fingers, from the binarized images in the experiment and from the numerical solutions of Eq. 1. We determine the diameter of the tip w_tip as the diameter of the biggest circle that fits the rounded tip of CF1. Once the biggest circle is fitted to the rounded tip of a CF1, we track the position of the tip and measure its speed v. In the experimental case, a mixture of E7 and EOS-12 at 7 wt% within cell #2 with d/p ≈ 0.68, we averaged the width, tip diameter, and speed of five fingers, observed under crossed polarizers at different temperatures. Finally, the criterion used to determine the tip-splitting speed v_ts is the moment when the far-most point of the rounded-tip interface has zero curvature (flat front).
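A sketch of how these two morphological quantities could be computed from a binarized finger image; the half-maximum criterion applied to a binary profile and the distance-transform estimate of the largest inscribed circle are implementation choices assumed for illustration (they presume the swollen tip is the widest part of the segment analysed).

import numpy as np
from scipy import ndimage

def finger_width(binary_finger, row, pixel_um):
    # Finite width at half-maximum of a transverse profile taken far from
    # the tip; for a binarized profile this is simply the finger extent.
    profile = binary_finger[row].astype(float)
    half_max = 0.5 * profile.max()
    return np.count_nonzero(profile > half_max) * pixel_um

def tip_diameter(binary_finger, pixel_um):
    # Diameter of the largest circle inscribed in the finger, estimated from
    # the Euclidean distance transform; near tip splitting this circle sits
    # inside the swollen rounded tip.
    dist = ndimage.distance_transform_edt(binary_finger)
    return 2.0 * dist.max() * pixel_um

# Shape factor: tip_diameter(img, px) / finger_width(img, row_far_from_tip, px);
# the tip is classified as unstable once this ratio approaches ~1.4.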
Numerical Simulation of the Stochastic Birth-Death Process. The topology and statistics of a branching tree depend on the growth dynamics and how tips interact with the surrounding structures. In particular, the exponential decay of segment lengths in the chiral branching tree (Fig. 2C) suggests that the branching and termination events are uncorrelated, thus following a Poisson-like process, albeit with a short refractory period. With this in mind, we questioned whether the large-scale topology of the branching tree could be fully characterized by its branching (and termination) probabilities. For this, we formulated a simple birth-death model: a zero-dimensional branching process, where tip branching and terminations follow a stochastic rule. In this birth-death model, which has also been used to describe ramified biological tissues (49), tips are allowed to branch and terminate with probabilities estimated from the data. These probabilities are obtained from the termination probabilities q_i (Fig. 4 C and H) and depend exclusively on the generation in the branching tree. We note that if the birth-death model were not able to recapitulate the branching topology of the tree, this would indicate that correlations and spatial considerations are indeed essential to the resulting large-scale chiral pattern. Numerically, the birth-death model was implemented as a discrete-time process, where, at every iteration, all active (tip) particles were allowed to either branch (with probability p_branch = 1 − q_i) or become inactive (inhibited) with probability p_inhib = q_i, depending on the generation i at which the particles are. For this, we initialized the system with a number N ≥ 1 of particles to match the initial state observed either in the model (1) or the experiments. For each realization, we kept track of the history of all particles in order to reconstruct the branching trees (Fig. 4F).
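A minimal sketch of the zero-dimensional birth-death process just described. The list q holds the generation-dependent termination probabilities (e.g. read off Fig. 4 C or H); binary branching (two daughter tips per branching event) and the clamping of q beyond the last measured generation are assumptions made for illustration.

import random

def birth_death(q, n_init=1, n_steps=100, seed=0):
    # Discrete-time process: every active tip either branches with
    # probability 1 - q[i] or becomes inhibited with probability q[i],
    # where i is its generation in the tree.
    rng = random.Random(seed)
    gens = [0] * n_init          # generation of every particle ever created
    parents = [None] * n_init    # index of each particle's parent
    active = list(range(n_init))
    for _ in range(n_steps):
        new_active = []
        for i in active:
            qi = q[min(gens[i], len(q) - 1)]
            if rng.random() >= qi:           # branch into two daughters
                for _ in range(2):
                    parents.append(i)
                    gens.append(gens[i] + 1)
                    new_active.append(len(gens) - 1)
            # otherwise the tip terminates (is inhibited) and stays inactive
        active = new_active
        if not active:
            break
    return gens, parents

# Tree size = len(gens); persistence = max(gens). Repeating over ~10^3 seeds
# yields cumulative distributions of the kind compared with the experiments.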
Data, Materials, and Software Availability. The raw data used for this study are available in the Zenodo repository (DOI: https://doi.org/10.5281/zenodo.7753119). All other study data are included in the article and/or SI Appendix.
Fig. 1. Emergence of branching patterns in cholesteric liquid crystal cells. (A-D) and (J-M) display the schematic representation of the director field of the CNLC in the midplane of the cell, z = d/2. The two angles correspond to the tilt angle of n from the z-axis and the angle between the x-axis and the projection of n in the plane of the cell, respectively. (A) shows the CF1 director field on the plane, characterized by a good twist across its (B) body and (C) rounded tip and a localized bad twist at its (D) pointed tip. (E) Schematic representation of the experimental setup. (F) Steady-state cholesteric branching pattern reached after the tip-splitting dynamics. Evolution of branching patterns through the fingering instability of a cholesteric interface starting at t_1 = 0.00 s, which is triggered by (G) cholesteric fingers (t_2 = 5.27 s, t_3 = 5.83 s, t_4 = 6.63 s) of type I, (H) chiral bubbles (t_2 = 2.62 s, t_3 = 3.20 s, t_4 = 3.41 s), and (I) glass beads (t_2 = 0.88 s, t_3 = 1.28 s, t_4 = 1.63 s). The initial conditions from which the cholesteric interfaces were created are (G) impurities in the nematic phase, (H) a closed loop of CF1, and (I) a glass bead. The branching patterns (t_4) are observed at (G) 51.7 °C, (H) 51.3 °C, and (I) 51.7 °C. The green (purple) arrows in (G) illustrate the elongation of the rounded (pointed) tips of CF1. (J) depicts the director field of a chiral bubble on the plane, exhibiting a radial (K) twist from its center to the interface (skyrmion-like). (L-M) show a schematic representation of the director field build-up around a glass bead on the plane.
Fig. 2. Tip-branching drives the formation of a cholesteric labyrinthine pattern. (A) Time lapse of the destabilization of multiple cholesteric interfaces around glass beads (cell #2 with d/p = 0.7) and formation of an extended labyrinthine pattern, at times t_1 = 3.3 s, t_2 = 7.8 s, t_3 = 12.2 s, and t_4 = 16.7 s. (B) Circular average of the 2D power spectrum (Inset Top) of the extended pattern in panel (A) at t_4, showing a characteristic wavelength of the cholesteric pattern of λ_c = 14.9 μm, and (Inset Bottom) local Fourier transform characterizing the local order in the pattern. (C) Segment length distribution of the experimental labyrinthine patterns (markers, with errors obtained from 7 independent realizations) and a gamma distribution fit (solid line, for scale and shape parameters), together with the distribution obtained from numerical integration of Eq. 1 at three different times, rescaled by the ratio of the experimental and model wavelengths λ_c/λ_CAGL ≈ 2.3. (D) Two time points (t_1 < t_2) of the experiment, showing remodeling of the patterns, where short fingers are inhibited (green arrowheads), allowing other fingers to continue elongating (white arrowheads).
Fig. 3. Local ingredients for the appearance of cholesteric labyrinths. (A) Phase diagram of Eq. 1 with δ = 0.05 and β = 1. µ_lb = −1/4 and µ_ub = 0 are the boundaries of the bistability region between A_o (I) and A_T (II). µ_MP = −3/16 is the Maxwell point. χ_o is the critical chirality, where chiral bubbles appear. The dark yellow line marks the saddle-node transition of chiral bubbles. The light green curve delimits the emergence of chiral fingers. The blue line represents the tip-splitting instability. Region III is the stable zone of chiral bubbles. In region IV, fingers elongate from their tips. Rounded tips of chiral fingers are unstable in region V. Regions V and VI exhibit modulated TIC. Right panels show temporal snapshots for three different initial conditions with µ = −0.4, in regions IV (χ = 2.31) and V (χ = 2.70). (B) shows (Top) the profiles of the modulus |A| and the gradient of the phase in the x-direction, ∂_x φ, of an infinite chiral finger, with χ = 2.4 and µ = −0.4, and (Bottom) shows the variation of the finger width w with respect to χ (red line) in the one-dimensional case, with χ_b = 2.3 when µ = −0.4; the yellow line shows the change in the free energy F_finger. The Insets show the polarized field of chiral fingers with χ = 2.2 < χ_b and χ = 2.4 > χ_b, exhibiting shrinking and elongation, respectively. (C) Different morphologies of chiral fingers observed experimentally in cell #2 with d/p ≈ 0.68 (Left panel) and numerically (Right panel) with µ = −0.4. In the experimental case, the graph shows the speed of the rounded tip against the shape factor w_tip/w for different temperatures (the variation of the pitch with temperature is shown in SI Appendix, Fig. S2). w_tip is the biggest diameter within the rounded tip of the chiral finger, and w is the width of the finger far from the rounded tip. The Insets show the morphologies associated with 49.7 °C (pink asterisk) and 50.4 °C (light blue asterisk). Dots are the average of five fingers moving inside a cell of CNLC. The vertical and horizontal bars are the SD of the speed and the shape factor, respectively. In model Eq. 1, the different morphologies are obtained by varying the parameter χ. The Insets display the finger shapes in the cases χ = 2.31 (green asterisk) and χ = 2.45 (orange asterisk). The tip-splitting regime is shown for both cases (50.5 °C and χ = 2.46). Three snapshots (t_1 = 0.0 s, t_2 = 0.25 s, and t_3 = 0.4 s) of the chiral finger interface are shown, demonstrating the advance, flattening, and modulation of the rounded tips. All speeds are normalized to the average speed previous to tip splitting, v_ts. In the experiment, v_ts = 27.4 μm s^{−1}. (D) (Top) Evolution of the distance x_o(t) in the repulsion between two infinite fingers for µ = −0.4 and χ = 2.5. Black dots were obtained from direct simulations, and the solid line corresponds to the integration of Eq. 5. The Bottom panels display two instants, t_1 and t_2, of the repulsive dynamics.
Fig. 4. The large-scale organization emerges from probabilistic tip branching and inhibition events. (A) Numerical integration of Eq. 1 showing the destabilization of a chiral bubble-like initial condition (t = 0) into a branched pattern (t = 240 and t = 360, measured in arbitrary units). Subtrees of the branching structure, defined as those trees that have a last common ancestor at branching level 2, are colored at different times to emphasize the variability in the tree growth dynamics. (B) Branching tree representation of the pattern at time t = 360, shown in (A). (C) The probability that a tip terminates at a given level in the tree at different time points, obtained from n = 8 large-scale simulations of the model (1). (D and E) show the cumulative probability of (D) subtree sizes and (E) subtree persistence obtained from large-scale simulations of the model (mean and SD from n = 8 realizations) and from the birth-death process when supplied with the termination probabilities at each time point in (C), with mean (line) and SD (shaded) from n = 10^3 realizations. (F) Branching tree resulting from the birth-death process using as input the termination probabilities at t = 360 shown in (C). (G) Representative stationary state from experiments; disconnected branching patterns are shown in different colors, and black-shaded patterns were not considered in the analysis as they cross the boundaries of the observation window. (H) Average termination probability obtained from 39 disconnected branching patterns from 7 experimental realizations. (I and J) show the cumulative probability of (I) tree sizes and (J) tree persistence (full trees were used due to their small size) obtained from experiments (mean and SD) and from the birth-death process when using the termination probabilities in (H) as input. | 2023-04-08T06:17:44.562Z | 2023-04-07T00:00:00.000 | {
"year": 2023,
"sha1": "e589d251840287c234f7e09964206e9ed2df7841",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1073/pnas.2221000120",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b35859ff6e33517ae71ad45f2db50739deac33a9",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
262045571 | pes2o/s2orc | v3-fos-license | Fatal heat stroke based on foudroyant irreversible multiple organ dysfunction in German summer
Abstract Objectives Heat stroke is a serious condition that can progress from moderate organ impairment to multiple organ dysfunction syndrome. Appropriate diagnosis-finding, fast initiation of cooling and intensive care are key measures of the initial treatment. This is a scientific case report based on i) clinical experiences obtained in the clinical management of a particularly rare case and ii) selected references from the medical scientific literature. Case presentation We present the case of a young and healthy construction worker who suffered from an exertional heat stroke with a body core temperature exceeding 42 °C after several hours of work at an ambient temperature of 35 °C. The heat stroke was associated with a foudroyant, irreversible multiple organ dysfunction syndrome, in particular early disturbed coagulation, microcirculatory, liver and respiratory failure, and a subsequent fatal outcome despite immediate diagnosis-finding, rapid external cooling and expanded intensive care management. Conclusions Basic knowledge of adequate and timely diagnosis-finding and treatment of heat stroke is important for (almost every) physician in the summertime and is essential for the initiation of an appropriate management. The associated high morbidity and mortality rates indicate the need for implementation of standard operation protocols.
Introduction
Heat stroke is a life-threatening syndrome secondary to failure of the thermoregulation system caused by hyperthermia with a body core temperature of more than 40 °C. Consequently, dysregulation or failure of multiple organs can be observed. Clinical symptoms range from delirium, seizures, coma, rhabdomyolysis, and shock with consecutive electrolyte and acid-base abnormalities to acute renal and liver failure as well as acute respiratory distress syndrome (ARDS) and disseminated intravasal coagulation (DIC).
Mortality rates up to 50 % are reported [1]. Etiologically, heat stroke can be categorized into two forms: exertional heat stroke (EHS), which is caused by strenuous muscular exercise and occurs mainly in younger active persons; and, in contrast, classical heat stroke (CHS), which is caused by environmental heat and occurs primarily in elderly persons [2].
The two variants may or may not accompany each other. However, EHS occurs especially in athletes and occupational workers, in whom an exaggerated acute phase response and an altered heat shock response might lead from compensated heat stress to decompensated heat shock with severe complications [1].
The aim of this scientific case report was, based on current data, reports and references from the present scientific literature as well as our own clinical experiences, to describe an extraordinary young patient with exertional heat shock from daily clinical practice, including i) the causes and consequences of his unfavorable course (caused by an irreversible multiple organ dysfunction syndrome [MODS], which was mainly characterized by disseminated intravasal coagulation due to liver failure) as well as ii) the details of an appropriate diagnostic management and a step-wise therapeutic approach up to aspects of organ replacement as a possible ultimate treatment option.
Case report
A 31-year-old construction worker was working outside on July 3rd, 2016. At the start of his shift at 8:00 a.m., the ambient temperature was 24 °C; it rose to 35.6 °C by 2:00 p.m. The wind speed was approximately 7 km/h all morning and the humidity was 44 %. In the afternoon, the man complained about dizziness. Suddenly, he collapsed while walking and lost consciousness. Upon arrival of the paramedics, the patient was comatose with a Glasgow Coma Scale of 3 (GCS - E1V1M1) and was gasping as a sign of respiratory insufficiency, accounting for a massively reduced circulation/hemodynamic arrest. The peripheral oxygen saturation was 30 %, underlining this assessment, the blood pressure was 100/60 mmHg, and the patient suffered from a narrow QRS-complex tachycardia with a frequency of 180 bpm, with no precise information on how long this status had already persisted. The patient was intubated and the paramedics tried to slow down the heart rate by using amiodarone (300 mg) and beta-blockers. Initially, the assessment of the body core temperature was impossible because it exceeded the temperature scale, but in the course of the outpatient emergency service the body core temperature was measured at 42.3 °C. According to the patient's brother, the medical history was unremarkable. Blood gas analysis showed a mild lactic acidosis (4 mmol/L) and high potassium (5.6 mmol/L). The remaining electrolytes and blood sugar were within normal range. To rule out ischemic heart disease, coronary angiography was performed without any sign of impaired ejection fraction, coronary atherosclerosis or cardiac hypokinesia. Cranial, thoracic and abdominal computed tomography (CT) was unremarkable except for a fluid-filled small bowel and colon with signs of wall-thickening of the small bowel, so that a massive gastrointestinal infection was suspected, among other differential diagnoses. Supportive therapies such as intravenous fluids were given, compensation of coagulation was initiated, and antibiotics were administered.
External cooling was started. Six hours after starting external cooling, the body core temperature was within normal range. The patient suffered from an acute kidney injury (AKI), at least partially related to a crush syndrome caused by elevated myoglobin. Therefore, the patient was admitted to the intensive care unit and treated with continuous venovenous hemofiltration (CVVH) with a myoglobin filter. In the course of events, the myoglobin level declined as a result of using the myoglobin filter (Figure 1A), but the leading clinical findings were progressive microcirculatory failure as well as liver and respiratory failure; liver transplantation was discussed as ultima ratio. In the course of events, there was a continuous and rapid increase of liver enzymes such as AST, ALT and GLDH with a maximum at 42 and 48 h after admission to the hospital, respectively (Figure 1B and C). There were also continuous increases of Trop-T, CK and LDH. Due to a rapid worsening of various organ functions as part of the MODS, the patient could not be seriously considered for a liver transplant. Approximately 72 h after hospital admission, the patient showed dilation of the pupils without light responsiveness, prompting an immediate cranial CT scan, which revealed an advanced brain edema with herniation of the brainstem as well as hypoxic areas. Due to missing brainstem reflexes and the unfavorable prognosis, there was no neurosurgical intervention. The patient died of advanced and severe MODS 4.5 days after admission to the hospital.
Discussion
Heat stroke is a severe emergency that can lead to the patient's death if not treated properly by immediate reduction of the body core temperature [1]. Mortality rates up to 62 % with a median survival time of 13 days have been reported [10]. Heat stroke can be accompanied by multiple organ dysfunction syndrome in 75 % of cases [11]. Environmental conditions such as ambient temperature and humidity play an important role in the emergence of a heat stroke, but abnormal endogenous thermogenesis and/or heat-losing mechanisms also seem to be of etiological relevance [12]. In particular, Rae et al. assessed that the hyperthermic states experienced by the cases they presented may have resulted from failure of their heat-losing mechanisms. Alternatively, they might have resulted from excessive endothermy, triggered by physical exertion and other unknown initiating factors. Excessive endothermy should be considered in cases of heat stroke that occur in mild to moderate environmental conditions [12].
Here, the case of a construction worker suffering from an exertional heat stroke (EHS) with a body core temperature exceeding 42 °C is presented, which led to MODS and resulted in the patient's death [1]. It remains speculative whether the MODS and consecutive liver failure were a result of ischemia; in addition, hemodynamic instability because of tachycardia (or even ventricular fibrillation) and ischemia leading to MODS is a possible theory. Perhaps the fast rhythm was caused by a hyperthermia-induced Brugada syndrome (an ion channel disease with electrical disturbance of heart function without detectable alteration of the heart tissue [structure], in which life-threatening cardiac irregularities can occur).
In general, heat dissipation can be improved by cooling methods using conduction (temperature gradient), evaporation (water vapor pressure) and convection (velocity of air over the skin) [13].
Regarding conduction, placing the patient in a tub with iced water while massaging the extremities for vasodilatation is the most frequently used technique, with relatively low mortality rates [13]. Besides that, the application of ice packs seems to be reasonable; however, here a mortality rate of 22 % was reported [13]. Alternative methods such as endovascular cooling or lavage of the colon, stomach or bladder with cold water might be successful in reducing body temperature [13][14][15]. Although there are few reports on exertional heat strokes, evaporation by using fans, especially in combination with wet gauzes or spraying of atomized water onto the skin, seems to be very effective [13]. In general, the use of iced water is very effective, especially while keeping the skin vessels dilated by massage. However, despite immediate administration of fluids upon arrival of the paramedics, external cooling when admitted to the emergency room and support of organ functions on the intensive care unit might be insufficient to reduce the body core temperature in some cases [12]. It is suspected that a prolonged temperature reduction time might be caused by excessive endogenous thermogenesis, which may lead to a fatal outcome. Despite external cooling in the presented case, it took about 6 h to reach a regular body core temperature. In addition, the patient showed elevated levels of procalcitonin (6.3 ng/mL) without any sign or proof of a concomitant infection. These findings coincide with other publications [16]. However, antibiotic therapy seems to be reasonable. In the course of events, the presented patient died from the consequences of MODS, with significantly elevated tissue enzymes caused by direct thermal damage and impaired macro- and microhemodynamics. Compared to other studies with low mortality rates [13], it took longer to reduce the body core temperature. It remains unclear whether excessive endogenous thermogenesis was at least partially responsible for the prolonged cooling period.
In particular, it was challenging to clarify the primary differential diagnosis (each option indicated by various aspects) with regard to: heat stroke (massively elevated body core temperature >42 °C, subsequently disturbed coagulation [prolonged prothrombin time; requiring compensation], microcirculatory, liver and respiratory failure); pulmonary embolism (elevated D-dimers, subsequently respiratory failure); or acute heart attack (elevated laboratory parameters such as heart enzymes and rhythmological alterations indicated in the electrocardiogram).
In this context, there were factors that indicated an unfavorable outcome in the early stage of the heat shock [10]. Accordingly, the patient showed an initial GCS of 3, a body core temperature of more than 42 °C, a prolonged prothrombin time (due to liver failure; requiring substitution of coagulation factors, e.g., by administration of fresh frozen plasma) and an immediate need for vasoactive drugs as early signs of a poor prognosis. The patient was considered for high-urgency liver transplantation due to acute and rapidly progressive liver failure. Interestingly, there are rare data that indicate long-term survival (longer than 1 year) after liver transplantation due to acute hepatic failure following exertional heat stroke [17]. However, the patient died prior to a possible transplantation due to the rapid worsening of his clinical status.
Conclusions
Heat stroke and its treatment are of importance for physicians, especially in the summertime. Rapid diagnosis of heat stroke and immediate cooling as well as additional intensive care measures are key factors in preserving organ function, in particular coagulation, microcirculation, lung and liver. Since exertional heat stroke with MODS can be resistant to external cooling, standard operation procedures should be adjusted using alternative or additional cooling methods. High survival rates can be achieved by using methods that facilitate or maintain vasodilatation of skin vessels for improved conduction.
Although there are not many cases published in the literature regarding liver transplantation following heat-induced liver failure, it should be seriously considered as a salvage therapy if appropriate.
Research ethics: Data collection did not exceed the usual level of an inpatient admission. However, since the patient data were obtained and used prospectively for scientific evaluation, the project was prepared according to the instructions of the institutional ethics committee. The specific circumstances required emergency medical care with consent as can normally be predicted. This included pseudonymized patient data (such as single procedures/procedural steps) for collection and the indication that no personal advantages or disadvantages would result from participation or non-participation. Finally, research involving human subjects complied with all relevant national regulations and institutional policies and is in accordance with the tenets of the Helsinki Declaration (as revised in 2013). There was no research involving animals. Informed consent: As mentioned above, the specific circumstances required emergency medical care with consent as can normally be predicted. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission. Competing interests: Authors state no conflict of interest. Research funding: There was no research funding involved and available; therefore, none declared. Data availability: Data were recorded in a subject-related register.
Table: Case series of liver transplantation following heat shock-induced acute liver failure (search engine: PubMed; search query: "heat stroke" and "liver transplantation"; entries in chronological order). | 2023-09-19T13:06:37.581Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "5bd2dfb89c0b9dc68435ac46b7d2da5d24603fc6",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/iss-2023-0013/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "342fa52711bccc76cc7f49433832d648072763b0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
215775311 | pes2o/s2orc | v3-fos-license | Repurposing a platelet aggregation inhibitor ticagrelor as an antimicrobial against Clostridioides difficile.
Drug resistance in Clostridioides difficile has become a public health concern worldwide, especially as hypervirulent strains show decreased susceptibility to the first-line antibiotics for C. difficile treatment. Therefore, the discovery and development of new compounds to fight this pathogen are urgently needed. Searching for new drugs active against C. difficile, we identified ticagrelor, which is used for the prevention of thrombotic events, as exhibiting potent growth-inhibitory activity against C. difficile. Whole-cell growth inhibition assays were performed and compared to vancomycin and metronidazole, followed by determination of time-kill kinetics against C. difficile. Activities against biofilm formation and spore germination were also evaluated. Leakage analyses and electron microscopy were applied to confirm the disruption of membrane structure. Finally, ticagrelor's ability to synergize with vancomycin and metronidazole was determined using checkerboard assays. Our data showed that ticagrelor exerted activity with a MIC range of 20-40 µg/mL against C. difficile. This compound also exhibited an inhibitory effect on biofilm formation and spore germination. Additionally, ticagrelor did not interact with vancomycin or metronidazole. Our findings reveal for the first time that ticagrelor could be further developed as a new antimicrobial agent against C. difficile.
Results
Ticagrelor exhibited bactericidal activity against C. difficile. We first tested the antimicrobial activity of ticagrelor against different ribotypes of C. difficile. Ticagrelor exhibited a MIC of 40 µg/mL for all ribotypes tested, with the exception of ribotypes 027 (strain R20291), 106, and 117, which had a MIC of 20-40 µg/mL (Table 1). Although the MICs of ticagrelor were markedly higher than those of metronidazole and vancomycin, the same range of MICs of ticagrelor was observed in all strains of C. difficile tested regardless of their sensitivity background to metronidazole and vancomycin.
After the antimicrobial activity of ticagrelor was confirmed, we investigated the killing kinetics of ticagrelor with C. difficile ribotype 027 (strain R20291). The time-kill kinetics was represented as relative growth, i.e., the OD 600 at time n compared to the initial OD 600 . The results revealed that ticagrelor exhibited a rapid killing profile compared to metronidazole and vancomycin at their respective MIC values (Fig. 2a). Ticagrelor drastically reduced the number of bacteria after 1 hour (h) of incubation, while metronidazole and vancomycin took nearly 5 h to lower the bacterial count to the same level (Fig. 2a). The results from the time-kill kinetics were in agreement with the reduced amount of pellet obtained from ticagrelor-treated cells compared to the control (Fig. 2b). Furthermore, there was no viable bacterial count after 24 h of incubation even after only 1 h of exposure to ticagrelor (Fig. 2c). These data thus clearly demonstrated the bactericidal activity of ticagrelor against C. difficile.
Ticagrelor inhibited the formation of biofilm. We further investigated the effect of ticagrelor on biofilm formation at sub-MIC and MIC levels with C. difficile strain R20291. The production of biofilm at 2.5 and 5 µg/mL ticagrelor treatment was reduced to 85% and 83%, respectively. Statistical analysis revealed no significant difference to the control. Increasing the concentration of ticagrelor to 10 µg/mL significantly reduced the formation of biofilm to approximately 78% (p = 0.022). At 20 µg/mL ticagrelor, biofilm production was depleted completely (p < 0.0001) (Fig. 3).
Ticagrelor reduced spore germination of C. difficile. We tested the effect of ticagrelor on C. difficile spore germination using ribotype 012 (strain 630). Spore germination was measured as the reduction in OD 600 over 1 h and represented in relation to the initial OD 600 . Ticagrelor at the MIC level of 20 µg/mL showed a strong inhibitory effect on spore germination, with up to 80% reduction compared to the control (p = 0.0499). An increased concentration of ticagrelor (40 µg/mL) substantially hindered the germination rate, suggesting a dose-dependent action of ticagrelor on spore inactivation (Fig. 4).
Figure 2. Ticagrelor exhibits a bactericidal activity against C. difficile. (a) Time-kill kinetics represented in relation to the initial OD 600 over 12 h of exposure to various antibiotics; DMSO, circle (•); metronidazole, square (■); vancomycin, triangle (▲); ticagrelor, diamond (♦). (b) Bacterial pellets exposed to PBS, DMSO, and 20 μg/mL ticagrelor after 1 h. An intact pellet was barely visible in the ticagrelor treatment. (c) Bacterial cells exposed to 20 µg/mL ticagrelor or 4% DMSO for 1-5 h were diluted, spread on BHI plates, and incubated for 24 h.
Figure 3. Ticagrelor inhibits biofilm formation in C. difficile. Biofilm formation was reduced by sub-MIC treatment with ticagrelor; however, it was mostly inhibited when treated with ticagrelor at the MIC. Data are presented as mean ± SEM. Bars denoted by (*) and (****) indicate significant differences at p < 0.05 and p < 0.0001, respectively, by one-way ANOVA with post-hoc Tukey's multiple comparison test.
We further investigated the ultrastructure of C. difficile by both scanning and transmission electron microscopy (SEM and TEM, respectively). In contrast to the control C. difficile cells exposed to DMSO, which revealed a normal rod-shaped morphology, extensive damage to the bacterial cells and leakage of intracellular components were observed in ticagrelor-treated C. difficile cells (Fig. 6).
Figure 4. Ticagrelor inhibits C. difficile spore germination. (a) Spore germination kinetics presented as the reduction of OD 600 relative to the initial OD 600 over 1 h. (b) Percentage of spore germination calculated from the slope of the kinetic curve. BHIY, brain heart infusion broth supplemented with 0.5% yeast extract; TA, taurocholic acid. Bars denoted by different letters indicate significant differences by non-parametric ANOVA with post-hoc Dunn's multiple comparison test.
Ticagrelor has an additive effect to metronidazole or vancomycin. Finally, we evaluated the interaction of ticagrelor with metronidazole and vancomycin. The results showed that ticagrelor has an additive effect to metronidazole and vancomycin by checkerboard assay, with the fractional inhibitory concentration index (FICI) 11 falling in a range of 1.5-3. It is possible that ticagrelor possesses a distinct mode of action compared to metronidazole and vancomycin.
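For reference, the FICI computation behind this checkerboard result can be sketched as follows; the interpretation cut-offs vary between authors, so the ones used here are an assumption, and the numerical example is purely illustrative.

def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    # Fractional inhibitory concentration index from a checkerboard assay.
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(index):
    # One commonly used convention; thresholds differ between publications.
    if index <= 0.5:
        return "synergy"
    if index <= 4.0:
        return "additivity / no interaction"
    return "antagonism"

# Illustrative values only: ticagrelor 20 ug/mL alone and 10 ug/mL in
# combination, vancomycin 1 ug/mL alone and 1 ug/mL in combination give
# fici(20, 1, 10, 1) = 0.5 + 1.0 = 1.5, i.e. within the additive range.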
Discussion
First-line antibiotics for CDI treatment include metronidazole and vancomycin. However, there have been reports of resistance to these drugs, leading to therapeutic failure and poor patient outcomes. Therefore, new effective antibiotics are of utmost importance given the lack of approved vaccines 12 . Recently, a report showed that ticagrelor, an approved drug for the prevention of thrombotic events in cardiovascular diseases, exhibited antimicrobial activity against several Gram-positive bacteria but not Gram-negative bacteria 10 . As ticagrelor is an FDA-approved drug, the development of ticagrelor for CDI falls into the drug repurposing approach, which could shorten the development process as its pharmacokinetic and safety profiles are readily available.
Our data revealed that ticagrelor has a MIC range between 20 and 40 µg/mL against C. difficile (Table 1). These observations are in good accordance with the range previously reported for other Gram-positive bacteria 10 . These findings are also in line with another study that reported similar MICs of different adenosine analogs against Gram-positive bacteria, which ranged from 16 to 128 µg/mL 13 . Nucleoside analogs have reportedly been shown to disrupt membrane function and to possess inhibitory activity against biosynthetic processes including peptidoglycan, cell wall, nucleic acid, folate, and protein synthesis 14 . We hypothesized that ticagrelor might act on the membrane of the bacterium comparably to known membrane disruptors, including nisin and polymyxin B, which have been shown to dissipate membrane potential 15,17 , as the killing kinetics was comparable to that of both membrane disruptors 16,17 and the bacteriolytic activity was clearly demonstrated (Figs. 2a and 6).
Leakage assays showed that ticagrelor treatment caused bacterial cells to disrupt and release intracellular contents (Fig. 5). These findings support the data from the killing kinetics of ticagrelor, which showed a drastic decrease in cell numbers after 1 h of incubation (Fig. 2a). However, leakage of proteins and DNA was also apparent in other treatment conditions at 4 and 5 h post exposure. We hypothesized that this phenomenon is due to the limited holding capacity of phosphate-buffered saline (PBS), which is incapable of maintaining a large number of bacterial cells in a healthy state for a prolonged period. Furthermore, deterioration of cellular morphology and membrane surface disruption were observed through both SEM and TEM upon treatment with ticagrelor, suggesting that ticagrelor exhibits its potent effect against C. difficile through cell membrane lysis (Fig. 6). These observations are similar to those of bacteria exposed to TiO 2 , which reportedly causes bacterial cell rupture 18 . However, the exact mechanism by which ticagrelor kills C. difficile remains to be further explored.
Biofilm formation is one of the features that contribute to the pathogenicity of CDI. Solitary C. difficile is not highly pathogenic unless the cells have aggregated and produced toxins 19,20 . It has been shown that strains with a greater ability to form biofilm are likely to be more virulent 21 . Therefore, the ability of ticagrelor to reduce biofilm formation was evaluated. We showed that ticagrelor at MIC and sub-MIC values reduced biofilm formation. Nevertheless, we speculate that the complete depletion of biofilm at the MIC level of ticagrelor is not likely due to the inhibition of biofilm formation, but rather to the inhibition of bacterial cell growth, as there were very few viable cells observed after the treatment. Although biofilm formation in C. difficile has been reported to be stimulated by sub-MIC levels of metronidazole 22 and vancomycin 20 , our data revealed that ticagrelor at sub-MICs did not induce biofilm formation in C. difficile.
Figure 6. Electron micrographs of C. difficile exposed to ticagrelor. Scanning electron micrographs of C. difficile exposed to (a) 4% DMSO and (b,c) 80 µg/mL ticagrelor for 2 h. Transmission electron micrographs of C. difficile exposed to (d,f) 4% DMSO and (e,g) 80 µg/mL ticagrelor for 1 h. The scale bars are embedded within the micrographs.
Spores are the major transmissive agent in CDI, as vegetative cells cannot tolerate an aerobic environment 23 . Although C. difficile spores are naturally resistant to most antibiotics including vancomycin and metronidazole 24,25 , in this study we found that ticagrelor inhibited spore germination in a dose-dependent manner. It has been shown that nucleoside antibiotic derivatives inhibit the outgrowth of C. difficile spores at a concentration of 2X MIC 26 . It is possible that nucleoside analogs may compete with the nucleosides required as germinants for C. difficile spores 27 . However, the actual mechanism of how nucleoside analogs can disrupt spore germination is still under investigation.
Ticagrelor, formerly known as AZD6140, is a synthetic compound mimicking ATP and formulated for oral administration. It is normally prescribed for the prevention of thrombotic events in cardiovascular diseases. As it is an FDA-approved drug, it is relatively safe to use in humans. Furthermore, an experiment in a murine model revealed low toxicity at the effective dose 10 . However, its physicochemical properties show that ticagrelor has moderate water solubility. In addition, pharmacokinetic data indicate that ticagrelor is poorly absorbed into the circulation, with approximately 36% absolute bioavailability 28 , and 84% is excreted, of which 58% through feces and 26% through urine 29 . Considering this information, ticagrelor is deemed suitable for the treatment of intestinal pathogens as it fits the criteria for colon-targeting oral drugs 30 . As the dose approved for human administration in cardiovascular disease is about twice the bactericidal concentration, further investigations are warranted in order to bring this compound forward for drug development. We propose that the compound should go through the lead optimization process, especially to reduce the binding affinity to its natural receptor P2Y 12 31 , in order to reduce its antiplatelet effect.
Conclusions
Altogether, we postulated that ticagrelor, an FDA-approved drug for the treatment of acute coronary syndrome, exhibited bactericidal activity against C. difficile, presumably with a bacteriolytic mode of action. Furthermore, ticagrelor also inhibited C. difficile biofilm formation and spore germination. Additionally, ticagrelor did not interact with either metronidazole or vancomycin. Ticagrelor could therefore become a promising drug candidate for further development through a repurposing approach. However, further investigations are warranted to evaluate the frequency of resistance of C. difficile to ticagrelor as well as the effect of ticagrelor in animal models of CDI.
Methods
Bacterial culture and minimal inhibitory concentration (MIC) determination. C. difficile ribotypes 012 (strain 630), 017, 020, 023, 027 (strain R20291), 029, 046, 056, 095, 106, 117, and 126 were cultured in brain heart infusion (BHI) broth medium supplemented with 0.5% yeast extract (BHIY). Anaerobic conditions were provided by an anaerobic workstation (Don Whitley Scientific) maintained at 37 °C. The MIC assay was performed by the microdilution method as per CLSI M11-A6 32 . Briefly, assay plates were pre-filled with 100 µL of various concentrations of test compounds: 0.03-16 µg/mL for metronidazole and vancomycin, and 0.15-80 µg/mL for ticagrelor. Ten microliters of 10^7 CFU/mL bacterial inoculum was then added, and plates were incubated for 48 h in the anaerobic workstation at 37 °C. Vancomycin solution was prepared in deionized water. Metronidazole and ticagrelor were prepared in DMSO. The assay plates were read at OD600 on a microtiter plate reader (Tecan) to determine bacterial growth. The MIC value was defined as the lowest concentration of test compound showing no bacterial growth, i.e., an OD comparable to that of blank BHIY medium.
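The endpoint readout described above reduces to a simple rule: the MIC is the lowest tested concentration whose OD600 stays at the level of the blank medium. The following minimal Python sketch illustrates that rule; the concentration series, OD600 readings, blank value, and growth threshold are invented placeholders rather than data from this study.

```python
# Minimal sketch: reading an MIC from endpoint OD600 values of a two-fold
# dilution series. Concentrations, readings, and the growth threshold are
# illustrative placeholders, not data from the study.

def mic_from_od(concentrations, od_values, blank_od, threshold=0.05):
    """Return the lowest concentration whose OD600 is indistinguishable
    from the blank medium (i.e., no visible growth)."""
    for conc, od in sorted(zip(concentrations, od_values)):
        if od - blank_od <= threshold:
            return conc
    return None  # no inhibition within the tested range

# Example: a ticagrelor two-fold series (µg/mL) against one isolate
concs = [0.15, 0.31, 0.62, 1.25, 2.5, 5, 10, 20, 40, 80]
od600 = [0.92, 0.90, 0.88, 0.85, 0.80, 0.74, 0.31, 0.06, 0.05, 0.05]
print("MIC =", mic_from_od(concs, od600, blank_od=0.04), "µg/mL")
```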
Time-kill assay. To determine the killing kinetics of the compounds, the time-kill assay was performed with ribotype 027 (strain R20291). Briefly, 100 µL of bacterial inoculum at ~1.5 × 10^8 CFU/mL was incubated with 0.5 µg/mL metronidazole, 1 µg/mL vancomycin, or 20 µg/mL ticagrelor, and bacterial growth was followed by measuring OD600 at 10 min intervals for 12 h at 37 °C in a microplate reader (Tecan) under anaerobic conditions. The relative growth was calculated as the ratio of the OD600 measured at time Tn to that at T0. Cell pellets from the DMSO and ticagrelor treatment groups were diluted 100-fold, spread onto BHI agar plates, and incubated for 24 h for viability checks.
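As a hedged illustration of the relative-growth calculation used in the time-kill assay, the sketch below normalizes each OD600 reading to the reading at T0; the series shown are made-up examples, not measurements from the study.

```python
# Illustrative sketch of the relative-growth calculation in the time-kill
# assay: every OD600 reading is normalised to the reading at T0.
# The series below are made-up placeholders, not study measurements.
import numpy as np

def relative_growth(od_series):
    od = np.asarray(od_series, dtype=float)
    return od / od[0]            # OD600(Tn) / OD600(T0)

dmso_control = [0.10, 0.14, 0.22, 0.35, 0.52, 0.70]   # e.g., readings every 2 h
ticagrelor   = [0.10, 0.09, 0.08, 0.07, 0.07, 0.06]

print("control   :", np.round(relative_growth(dmso_control), 2))
print("ticagrelor:", np.round(relative_growth(ticagrelor), 2))
```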
Biofilm formation assay. Biofilms of C. difficile were generated as described previously with some modifications 33,34 . Briefly, an overnight culture of C. difficile strain R20291 was diluted 100-fold into fresh BHIY supplemented with 0.1 M glucose and various concentrations of ticagrelor and incubated in a 24-well plate for 48 h at 37 °C. Wells of BHIY without cultures were used as negative controls. To measure biofilm biomass, the cultures were carefully removed from the biofilm plate and the wells were washed gently with phosphate-buffered saline (PBS). The biofilms were stained with 0.2% filtered crystal violet and incubated for 30 min at room temperature. The excess dye was removed from the wells before washing twice with PBS. One milliliter of 1:1 ethanol-acetone solution was added to each well to dissolve the dye from the biofilm, and the absorbance was measured at a wavelength of 570 nm.

Spore germination assay. C. difficile strain 630 was plated on 70:30 sporulation medium and incubated for 5-7 days at 37 °C. To harvest spores, sporulation-induced lawns were collected using distilled water. Spore suspensions were treated with proteinase K, followed by heat treatment at 65 °C for 1 h to eliminate vegetative cells, and washed with distilled water at least 5 times to remove cell debris. Spore purity was confirmed by phase-contrast microscopy, and a germination test was performed to ensure the viability of the spores. For the germination test, prepared spores were heat-activated at 65 °C for 30 min and allowed to cool down on ice. BHI with 0.1% taurocholic acid (TA) was used as the germination medium. Germination kinetics were followed by monitoring the loss of OD600 at 1 min intervals for 1 h at 37 °C. The germination rate was calculated from the steepest slope of the kinetic plot.
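The germination-rate readout described above can be illustrated with a short sketch that scans the OD600 trace for its steepest decline; the time course below is a synthetic curve used only to demonstrate the calculation.

```python
# Sketch: estimate the germination rate as the steepest (most negative) slope
# of the OD600 kinetic trace. The time course is a synthetic curve used only
# to demonstrate the calculation.
import numpy as np

def germination_rate(time_min, od600, window=5):
    """Fit a line over a sliding window and return the steepest OD600 decline
    (absolute value, in OD units per minute)."""
    t = np.asarray(time_min, dtype=float)
    od = np.asarray(od600, dtype=float)
    slopes = [np.polyfit(t[i:i + window], od[i:i + window], 1)[0]
              for i in range(len(t) - window + 1)]
    return abs(min(slopes))

time_min = np.arange(0, 61)                             # 1-min intervals for 1 h
od = 1.0 - 0.35 / (1 + np.exp(-(time_min - 15) / 4.0))  # synthetic OD600 loss
print("germination rate ~", round(germination_rate(time_min, od), 4), "OD/min")
```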
Leakage assay. The leakage assay was performed to determine the integrity of bacterial cells by measuring DNA and protein released from the cells into the supernatant. Briefly, an overnight culture of C. difficile strain R20291 was collected, adjusted to an OD600 of 1.5 with PBS, and incubated with 4X MICs of either ticagrelor (80 µg/mL), metronidazole (2 µg/mL), or vancomycin (4 µg/mL) for 5 h at 37 °C. Supernatant and cell pellets were collected every 1 h. The supernatant was used for determination of DNA by agarose gel electrophoresis and NanoDrop spectrophotometry and of protein by Bradford assay and SDS-PAGE. The remaining pellets were further examined by electron microscopy to evaluate cell morphology after the treatments.
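For the protein readout mentioned above, a Bradford measurement is typically converted to a concentration via a standard curve. The following sketch shows one way to do this by linear regression; the BSA standards, absorbance values, and sample reading are illustrative assumptions, not values from this work.

```python
# Illustrative sketch: convert a Bradford absorbance reading (595 nm) into a
# protein concentration via a linear standard curve. Standards and the sample
# reading are invented placeholders, not values from this work.
import numpy as np

bsa_ug_ml = np.array([0.0, 125.0, 250.0, 500.0, 1000.0])   # BSA standards
a595_std  = np.array([0.00, 0.09, 0.18, 0.35, 0.68])        # their absorbances

slope, intercept = np.polyfit(bsa_ug_ml, a595_std, 1)       # A595 = m*c + b
a595_sample = 0.27                                           # hypothetical sample
protein = (a595_sample - intercept) / slope
print("protein in supernatant ~ %.0f µg/mL" % protein)
```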
Scanning electron microscopy. Bacterial cell pellets from the leakage assay treated with ticagrelor were observed by scanning electron microscopy (SEM). Briefly, samples were fixed with 4% glutaraldehyde for 24 h and washed twice with 0.5X PBS. A series of dehydration steps in increasing concentrations of ethanol (50-100%) was then applied, followed by critical-point drying and platinum/palladium sputtering. Samples were visualized with a Hitachi SU8010 field-emission scanning electron microscope (FE-SEM) at an accelerating voltage of 10 kV.

Transmission electron microscopy. C. difficile was treated with either 4% DMSO or 80 µg/mL ticagrelor and incubated for 1 h at 37 °C prior to being subjected to transmission electron microscopy (TEM). Bacterial pellets were collected, fixed in 2.5% glutaraldehyde, and postfixed in 1% osmium tetroxide for 1 h. Pellets were dehydrated in a series of ethanol ranging from 30 to 100%, followed by propylene oxide treatment, and finally embedded in Epon epoxy resin. Thin sections of approximately 90-100 nm were obtained using an ultramicrotome (Leica UC7) and post-stained with 2% uranyl acetate and lead citrate. Samples were imaged with a Hitachi HT7700 transmission electron microscope at an accelerating voltage of 100 kV.

Checkerboard assay. A checkerboard assay was performed to investigate the interaction between ticagrelor and either metronidazole or vancomycin. Two-fold serially diluted ticagrelor ranging from 5-80 µg/mL was mixed with either metronidazole or vancomycin ranging from 0.125-32 µg/mL in a 96-well plate. Then 10 µL of bacterial inoculum was added to each well and incubated for 48 h under anaerobic conditions at 37 °C. Endpoint growth was measured by OD600. The fractional inhibitory concentration index (FICI) was interpreted as synergy at values of ≤0.5.

Statistical analysis. GraphPad Prism 8.3.1 was used for statistical analysis. Data from each experiment were tested for normality. Upon passing the normality test, data were analyzed by ANOVA with post-hoc Tukey's multiple comparison test. Otherwise, data were analyzed using non-parametric ANOVA with post-hoc Dunn's multiple comparison test.
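As a hedged illustration of how a checkerboard readout such as the one described above is commonly converted into a FICI, the sketch below computes the index from the MICs of each drug alone and in combination; the MIC values are placeholders, and the cut-offs follow the widely used convention (synergy at ≤0.5, antagonism above 4), not necessarily the exact scheme applied in the study.

```python
# Sketch of a fractional inhibitory concentration index (FICI) calculation
# from a checkerboard plate. MIC values are placeholders; the cut-offs follow
# the commonly used convention, not necessarily the study's exact scheme.

def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(value):
    if value <= 0.5:
        return "synergy"
    if value <= 4.0:
        return "no interaction"
    return "antagonism"

# e.g., ticagrelor (A) combined with vancomycin (B); illustrative numbers only
f = fici(mic_a_alone=20.0, mic_b_alone=1.0, mic_a_combo=10.0, mic_b_combo=0.5)
print("FICI = %.2f -> %s" % (f, interpret(f)))
```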
Data availability
No datasets were generated or analysed during the current study. | 2020-04-16T14:36:19.666Z | 2020-04-16T00:00:00.000 | {
"year": 2020,
"sha1": "83b5643f44fc155b3ed610a94e104e90bae05fd4",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-63199-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83b5643f44fc155b3ed610a94e104e90bae05fd4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259163888 | pes2o/s2orc | v3-fos-license | The Disorazole Z Family of Highly Potent Anticancer Natural Products from Sorangium cellulosum: Structure, Bioactivity, Biosynthesis, and Heterologous Expression
ABSTRACT Myxobacteria serve as a treasure trove of secondary metabolites. During our ongoing search for bioactive natural products, a novel subclass of disorazoles termed disorazole Z was discovered. Ten disorazole Z family members were purified from a large-scale fermentation of the myxobacterium Sorangium cellulosum So ce1875 and characterized by electrospray ionization–high-resolution mass spectrometry (ESI-HRMS), X-ray, nuclear magnetic resonance (NMR), and Mosher ester analysis. Disorazole Z compounds are characterized by the lack of one polyketide extension cycle, resulting in a shortened monomer in comparison to disorazole A, which finally forms a dimer in the bis-lactone core structure. In addition, an unprecedented modification of a geminal dimethyl group takes place to form a carboxylic acid methyl ester. The main component disorazole Z1 shows comparable activity in effectively killing cancer cells to disorazole A1 via binding to tubulin, which we show induces microtubule depolymerization, endoplasmic reticulum delocalization, and eventually apoptosis. The disorazole Z biosynthetic gene cluster (BGC) was identified and characterized from the alternative producer S. cellulosum So ce427 and compared to the known disorazole A BGC, followed by heterologous expression in the host Myxococcus xanthus DK1622. Pathway engineering by promoter substitution and gene deletion paves the way for detailed biosynthesis studies and efficient heterologous production of disorazole Z congeners. IMPORTANCE Microbial secondary metabolites are a prolific reservoir for the discovery of bioactive compounds, which prove to be privileged scaffolds for the development of new drugs such as antibacterial and small-molecule anticancer drugs. Consequently, the continuous discovery of novel bioactive natural products is of great importance for pharmaceutical research. Myxobacteria, especially Sorangium spp., which are known for their large genomes with yet-underexploited biosynthetic potential, are proficient producers of such secondary metabolites. From the fermentation broth of Sorangium cellulosum strain So ce1875, we isolated and characterized a family of natural products named disorazole Z, which showed potent anticancer activity. Further, we report on the biosynthesis and heterologous production of disorazole Z. These results can be stepping stones toward pharmaceutical development of the disorazole family of anticancer natural products for (pre)clinical studies.
D-Lys6-LHRH, the disorazole Z conjugate demonstrated an increased cytotoxicity in vitro in HCC 1806 and MDA-MB-231 triple-negative breast cancer cells (20). In the meantime, disorazole Z was also identified as a maytansine site ligand, which was confirmed by solving the crystal structure of the tubulin-disorazole Z complex (21).
Due to the novelty of these compounds' structures, their intriguing biological activities, and their promising potential for clinical application, massive efforts were made to develop chemical synthesis routes. Total synthesis has been achieved for some disorazole congeners and nonnatural analogs, for example, disorazole A1 (compound 1) and also a simplified disorazole Z (22,23). Nevertheless, biotechnological methods still present significant advantages for obtaining these types of complex natural products (24). Furthermore, taking advantage of synthetic biotechnology, rational engineering of the biosynthetic pathway in an advantageous heterologous host could also facilitate high-efficiency production of certain target components or generate nonnatural compounds (25,26).
In this work, large-scale fermentation afforded the isolation of 10 disorazole Z family members featuring two-carbon shorter monomers and unprecedented modification of a geminal dimethyl group compared to disorazole A. We report the first full structure elucidation of the disorazole Z congeners using nuclear magnetic resonance (NMR), Mosher ester, and X-ray analysis. Furthermore, we characterized the bioactivity of disorazole Z1 (compound 3). In addition, the disorazole Z biosynthetic pathway was identified by comparative analysis and heterologous expression. Promoter engineering and gene deletion experiments were carried out using the heterologous expression system, which allowed the function of a methyltransferase involved in disorazole Z biosynthesis to be assigned.
RESULTS AND DISCUSSION
Isolation and full structure elucidation of the disorazole Z family of compounds. After large-scale fermentation of S. cellulosum So ce1875, we were able to isolate 10 disorazole Z congeners (compounds 3 to 12), achieving a production titer of 60 to 80 mg/L of disorazole Z1 (compound 3). An overview of the chemical structures of these compounds is given in Fig. 1. The 1H NMR spectrum in acetone-d6 presented 23 protons, which were correlated with their corresponding carbons in a 1H,13C heteronuclear single quantum coherence (HSQC) NMR spectrum, with the exception of one doublet of a secondary alcohol proton (OH-26) at 4.16 ppm. The 1H,1H correlation spectroscopy (COSY) NMR spectrum allowed us to assign two main structural parts, A and B (Fig. 2). H-5 of part A and a 1H singlet (3-H) at 8.53 ppm were connected by a long-range coupling. The corresponding carbon C-3 had no correlation in the heteronuclear multiple-bond correlation (HMBC) NMR spectrum. However, the direct 1J(C,H) coupling of 213 Hz indicated its role in a hetero-aromatic ring (27). The further ring carbons C-2 and C-4 were identified by their HMBC correlations with H-3, while the exact connection of the oxazole ring to structural part A was shown by the HMBC correlation between C-4 and H-5. However, H-6 did not show any HMBC correlation, appearing as a flat and broad proton signal. The continuation of structural part A starts at the opposite oxymethine C-12 (δC 76.28). X-ray analysis revealed the relative configuration of disorazole Z1 (compound 3). In order to elucidate the absolute configuration, Mosher's method based upon 1H and 13C NMR chemical shift differences was applied (28)(29)(30)(31). Consequently, all asymmetric centers (C-12, C-25, and C-26) of compound 3 are in the S configuration, as shown in Fig. 1. A full description of the stereochemical analyses is given in the supplemental material. The structural assignment is in full agreement with the structure of the tubulin-disorazole Z complex published recently (21).
Besides disorazole Z1 (compound 3), nine more variants were also isolated and characterized. Two of them turned out to be isomers of compound 3 with different double-bond geometries, e.g., disorazole Z2 (compound 4, Δ7,8-cis-disorazole Z).

Discovery and comparative analysis of the disorazole Z biosynthetic gene cluster. The significant structural differences between the disorazole Z and A family compounds motivated us to investigate the disorazole Z biosynthetic pathway. The draft genome sequence of the alternative producer strain S. cellulosum So ce427 was obtained by Illumina sequencing and subjected to antiSMASH analysis for annotation of biosynthetic gene clusters (32). A trans-AT PKS-NRPS gene cluster (dis427 gene cluster; GenBank accession number OQ408282), which exhibited significant overall similarity to the known dis12 gene cluster (GenBank accession number DQ013294 or AJ874112), was speculated to be responsible for the biosynthesis of disorazole Z (Fig. 3). Intriguingly, this gene cluster has two main distinguishing features: on the one hand, one PKS module is lacking in the second polyketide synthase gene, disB427, which corresponds nicely with the two-carbon-shortened monomeric unit of disorazole Z compared to one half-side of the disorazole A congener; on the other hand, the methyltransferase gene disF427 was found downstream of disD427, which is not present in the dis12 gene cluster and could be assigned the O-methylation function needed for methylation of the carboxyl group found in disorazole Z. The carboxyl group most likely arises from oxidation of one of the methyl groups at positions C-25 and C-33. However, no gene flanking the corresponding biosynthetic gene cluster (BGC) encoding the expected oxidative function could be assigned for this hypothetical step. The organization of the remaining part of the disorazole Z pathway is quite similar to that of the disorazole A pathway. The biosynthesis of the half-side of the disorazole Z dilactone begins with condensation of one malonyl coenzyme A (malonyl-CoA) with the starter acetate. After five additional rounds of extension with malonyl-CoA, the ketosynthase (KS) domain in module 7 is not expected to extend the polyketide chain, because the catalytic histidine of KS7 is substituted with alanine (Fig. S7). The mutation of the conserved motif C-H-H to C-A-H most likely causes malfunction of this KS domain, as the histidine residue is known to play an essential role in decarboxylation and condensation reactions (33). However, this type of KS was proven to be a gatekeeper and still capable of transferring the polyketide chain between different domains in FR901464 biosynthesis (34). The adenylation domain of DisC427 activates serine, as also found in the disorazole A pathway. Similar tandem heterocyclization (HC) domains were proven to be essential for vibriobactin and anguibactin biosynthesis (35,36). It is likely that the HC domains of DisC427 work in the same fashion. However, it is also possible that one of the HC domains condenses the serine with the polyketide chain, while the other HC domain performs the cyclization of the serine moiety to form the oxazoline ring, which is finally oxidized to its oxazole form by the oxidation domain.
The extension of the nascent intermediate bound to the carrier protein of module 9 is thought to stop here, because the two following PKS modules are most likely nonextending ones; annotation suggests that the KS9 and KS10 domains are noncondensing, similar to KS7 (see Fig. S7 in the supplemental material). Finally, the termination of the assembly line and the cyclization of the two monomeric subunits are likely similar to the mechanism described for enterobactin, elaiophylin, and conglobatin biosynthesis (37)(38)(39), eventually forming the bis-lactone structure. The carboxylic acid methyl ester in disorazole Z might be introduced before or after release from the assembly line by stereospecific oxidation of a methyl group, giving rise to the free carboxylic acid and subsequent methyl ester formation. Nevertheless, the oxidation process involved remains enigmatic and needs further study. Intriguingly, after expression of the dis427 gene cluster, disorazole Z1 is also formed as the major heterologous product in the host M. xanthus DK1622 (see below), which indicates that the missing functionality is encoded either within the BGC or by the heterologous host, which may harbor similar genomic functions capable of oxidizing the precursor. The final step of methyl ester formation could be connected to DisF, encoded in the corresponding BGC.
Heterologous production and biosynthesis of disorazole Z. Since Sorangium cellulosum strains are slow growing and unsuitable for in situ genetic manipulation, we cloned the disorazole Z biosynthetic gene cluster (from disA427 to disF427) from the genomic DNA of So ce427 for heterologous expression and functional verification. One-step capture of large gene clusters is usually challenging, especially for myxobacteria, due to their complex genomes. Therefore, the gene cluster was divided into three smaller parts for cloning and then assembled into a p15A-cm vector using RecET-mediated linear-linear homologous recombination (LLHR) (Fig. S9) (40). After that, the chloramphenicol resistance gene was replaced by a km-int cassette encoding the phage Mx8 integrase, which could be employed for site-specific integration of the gene cluster into the genome of the heterologous host M. xanthus DK1622 (41). The dis427 gene cluster under the control of its native promoters was successfully expressed in M. xanthus DK1622, leading to the production of disorazole Z1 (compound 3) as the primary compound, which was confirmed by HPLC-MS and NMR analysis (Fig. 4A; Table S3; Fig. S17 and S18). However, the production yield was only about 0.2 mg/L in the standard fermentation procedure (see the supplemental material). Replacement of the native promoter before disA427 with a tetracycline-inducible Ptet promoter resulted in at least a 4-fold increase in heterologous production of compound 3 (Fig. S10). Furthermore, inserting a strong constitutive Papr promoter before disD427 led to a further, albeit modest, improvement in yield. When the gene cluster was under the control of a vanillate-inducible Pvan promoter, a nearly 9-fold increase was achieved (Fig. S10). Nevertheless, the production yield of compound 3 (about 1.8 mg/L) is still much lower than that obtained with the original producer strain under optimized fermentation conditions, which may be due to the low level of phylogenetic similarity between Sorangium and Myxococcus. Further systematic genetic engineering is therefore still required for heterologous expression to achieve competitive yields. Nevertheless, this system, for the first time, allows genetic manipulation of disorazole Z biosynthesis, as the native host was found to be genetically intractable despite significant efforts.
As mentioned above, the methyltransferase gene was found exclusively in type Z disorazole gene clusters and was thus supposed to be involved in methyl ester formation. In order to verify its function in disorazole Z biosynthesis, we replaced disF427 with a gentamicin resistance gene using Redαβ-mediated linear-circular homologous recombination (LCHR; see the supplemental material) (42). Deletion of disF427 completely abolished disorazole Z1 (compound 3) production in M. xanthus DK1622, whereas disorazole Z9 (compound 11) and disorazole Z10 (compound 12) accumulated in the culture broth, as confirmed by HPLC-MS and NMR analysis (Fig. 4A; Tables S12 and S14; Fig. S55 and S61). The methyltransferase DisF427 was then expressed and purified as an N-terminally His-tagged recombinant protein using E. coli BL21(DE3). Incubating purified DisF427 with compound 11 or compound 12 and S-adenosyl methionine resulted in almost complete conversion to the corresponding methylated compounds in vitro at 30°C within 1 h (Fig. 4B and C). These results clearly demonstrated that the methyltransferase DisF is responsible for methyl ester formation in disorazole Z biosynthesis. The accumulation of compound 12 as the major component in the absence of disF427 also indicated that stereospecific oxidation and subsequent methylation might occur initially on one side of the symmetric substrate. However, it remains unclear how the geminal methyl group is oxidized to the hydroxy group and further to the carboxyl group, which implies novel tailoring biochemical steps and motivates further investigation, which is ongoing in our laboratory.

Biological activity of disorazole Z. Disorazole Z1 (compound 3) was tested on a small panel of human cancer cell lines and displayed very pronounced cytotoxic activity, with IC50 values in the range of 0.07 to 0.43 nM (Table S16). In comparison to the previously described disorazole A1 (compound 1), it showed similar activity, although it tended to be less potent (by a factor of 4 to 5) when tested on hepatocellular carcinoma (HepG2) and osteosarcoma (U-2 OS) cells.
In order to explore the effects of disorazole Z on microtubule dynamics, U-2 OS cells were treated with disorazoles followed by immunostaining of α-tubulin and fluorescence microscopy. After 5 h of treatment with compound 1 or compound 3, a slightly higher density of interphase microtubules around the nuclear periphery was observed, resembling local destabilization. The same effect has already been described for, e.g., the microtubule-destabilizing agent vinblastine (43). Interestingly, the acetylated microtubule population, which plays an important role in dynamic cellular processes, was much more strongly affected. This might be caused by the ability of disorazoles to preferentially suppress dynamic mechanisms at the binding sites at the ends of microtubules. After prolonged treatment, microtubules were completely depolymerized and the low-abundance acetylated tubulin population was no longer detectable by immunostaining (Fig. 5A). In particular, endoplasmic reticulum (ER) dynamics are directly associated with acetylated microtubules, an effect termed ER sliding (44). Thus, we studied whether specific responses of the ER can be observed after treatment of cells with disorazoles. Both compound 1 and compound 3 induced a delocalization of the ER structure, which coaligns with (depolymerized) microtubules. However, this event does not trigger ER stress, as determined by GRP78/BiP immunostaining of disorazole-treated cells (Fig. 5B). The 78-kDa glucose-regulated protein (GRP78) functions as a chaperone and is a master regulator of the unfolded-protein response (UPR) (45), and it is upregulated after treatment of cells with the ER calcium ATPase inhibitor thapsigargin. However, disorazoles did not directly induce ER stress, although the dynamics of the ER are probably greatly impaired due to the delocalization of the complex.
To further assess compound 3 and its ability to induce apoptotic processes, the mitochondrial membrane potential (MMP) and caspase activation were determined. Following microtubule depolymerization and a concurrent cell cycle arrest at the G2/M checkpoint, many tubulin-binding agents are described as inducing a loss of MMP followed by cytochrome c release and activation of the caspase cascade (46). Here, we could demonstrate that treatment of cells with compound 3 at nanomolar concentrations results in mitochondrial swelling (5 h), followed by a complete loss of MMP (24 h). In line with these findings, we found caspase-3/7 activation upon 24 h of treatment of cells with compound 3 at concentrations as low as 0.3 nM (Fig. 5C).
Conclusion and outlook. In this study, we describe the full structural elucidation of 10 novel disorazole congeners exhibiting a significantly modified basic structure compared to disorazole A and thus grouped as a new subclass of disorazole anticancer drugs termed disorazole Z. This family of compounds possesses a shortened polyketide chain in each half-side of the bis-lactone ring and a carboxymethyl ester at the position where a geminal methyl group is installed in disorazole A. The discovery of the disorazole Z biosynthetic gene cluster and comparison to the disorazole A biosynthetic pathway allowed us to understand the structural differences between these two types of disorazoles. The successful heterologous expression of the disorazole Z gene cluster in an amenable host organism paved the way for detailed biosynthesis studies, e.g., elucidating the intriguing biosynthetic steps involving the oxidation of one methyl center in the geminal dimethyl group, and rational biosynthetic engineering to further improve the yield of disorazole Z. The system will also allow us to generate nonnatural disorazole family compounds through combinatorial biosynthesis. Activity assays of disorazole Z1 and disorazole A1 revealed similar biological activities in cancer cell lines and thus great potential for this family of compounds to be employed as antitumor drugs, a possibility which is being explored by using a peptide-drug conjugate.
MATERIALS AND METHODS
General experimental procedures. Melting points were measured on a Büchi-510 melting point apparatus. UV data were recorded on a Shimadzu UV/Vis-2450 spectrophotometer in methanol (UVASOL, Merck). Infrared (IR) data were recorded on a Bruker Tensor 27 IR spectrophotometer. 1H NMR and 13C NMR spectra were recorded on Bruker Avance III 700, DMX 600, UltraShield 500, or DPX 300 NMR spectrometers, locked to the deuterium signal of the solvent. Data acquisition, data processing, and spectral analysis were performed with standard Bruker software and ACD/NMRSpectrus. Chemical shifts are given in parts per million, and coupling constants are in hertz. Analytical reverse-phase high-performance liquid chromatography (RP-HPLC) was carried out with an Agilent 1260 HPLC system equipped with a diode-array UV detector (DAD) and a Corona Ultra detector (Dionex) or a maXis ESI time-of-flight (TOF) mass spectrometer (ESI-HRMS; Bruker Daltonics). HPLC was carried out with a Waters Acquity C18 column (100 by 2 mm, 1.7 µm); solvent A was H2O-0.1% formic acid, and solvent B was acetonitrile-0.1% formic acid. The gradient was 5% B for 1 min, increasing to 95% B in 18 min; the flow rate was 0.6 mL/min; and the column temperature was 45°C. All elemental formulae were assigned using the high-resolution data of molecular ion clusters measured with a Bruker maXis ESI-TOF mass spectrometer and calculated with the SmartFormula tool of the Compass DataAnalysis program (Bruker). The myxobacterial strain Sorangium cellulosum So ce1875 was isolated in 2001 from soil with plant residues collected near Holbrook, AZ, in 1996 and can be obtained from the DSMZ (German Collection of Microorganisms and Cell Cultures) under the depository number DSM 53600.

Fermentation of S. cellulosum So ce1875. A fermentation medium (300 L) was inoculated with 10 L of S. cellulosum So ce1875 precultured in 2-L Erlenmeyer flasks. The fermentation medium contained 0.8% starch (Cerestar), 0.3% soy meal, 0.05% Casitone, 0.02% soy peptone, 0.1% MgSO4·7H2O, 0.075% CaCl2·2H2O, 8 mg/L Na-Fe-EDTA, and 1% Amberlite XAD-16 resin. The pH was 7.3 before autoclaving. Glucose (0.25%) in H2O was added after autoclaving. The strain was cultivated at 30°C with the pO2 level at 20% for 14 days. At the end of fermentation, the XAD resin was collected from the culture by sieving. A production of 76 mg/L disorazole Z1 (compound 3) was determined by HPLC.
Extraction and isolation of disorazole Z from S. cellulosum So ce1875. The XAD adsorber resin (3.03 kg) from a large-scale (300-L) fermentation of S. cellulosum strain So ce1875 was separated from adhering cells by flotation with water before it was eluted in a glass column with 2 bed volumes of 30% aqueous methanol followed by 3 bed volumes of 100% methanol. The methanol eluate was evaporated to an aqueous mixture, diluted with water, and extracted twice with equal portions of ethyl acetate. After evaporation, the aqueous oil was subjected to 90% methanol-heptane partitioning, removing lipid products with three equal portions of heptane. The aqueous oil was then diluted with water and extracted with dichloromethane (DCM) to give 72 g of crude extract after evaporation of the solvent. Crystallization from ethanol provided 36.7 g of raw crystalline disorazole Z1 (compound 3), including several minor structural variants and about 1.7 equivalents of the solvent.
A portion of 10.7 g of the raw crystals was dissolved in DCM and toluene and evaporated to dryness twice before the material was subjected to Si flash chromatography (Reveleris silica cartridge, 330 g, 61 by 223 mm [Grace], equilibrated with DCM; flow rate, 90 mL/min; solvent A, DCM; solvent B, acetone; gradient, 0% B for 15 min, to 10% B in 15 min, 50 min at 10% B, to 20% B in 5 min, 10 min at 20% B, to 100% B in 35 min, and 10 min at 100% B). A total of 8.2 g of disorazole compound 3 was obtained in the first peak (UV at 280 nm; evaporative light scattering detector) between 40 and 85 min and crystallized from ethanol to give 6.8 g of compound 3 as crystals containing 1.7 equivalents of ethanol after drying in a vacuum. Further fractions containing disorazole variants were collected.
Fraction 6 (628 mg) was separated similarly with a gradient of 10% B to 25% B in 130 min to give a main peak of disorazole Z7 (compound 9) (387 mg), which was crystallized from ethanol, yielding 310 mg of white crystals.
Si flash chromatography of mother liquor (38 g) of a disorazole Z crystallization provided further fractions, which were separated by RP-MPLC.
A sample of 6 g of disorazole Z from Si flash chromatography was further separated by RP-MPLC in two runs (column, 60 by 500 mm, YMC ODS-AQ, 120 Å, 21 µm; solvent A, 50% methanol; solvent B, methanol; gradient, 20% B for 160 min, to 30% B in 240 min; flow rate, 60 mL/min; UV detection, 313 nm) to give a mixture of disorazoles (680 mg) eluting in front of compound 3. This mixture was again separated by RP-MPLC.

Preparation of disorazole Z Mosher esters. To prepare the disorazole Z (S)-Mosher ester (13), 20 mg of crystalline disorazole Z1 (compound 3) was twice dissolved in pyridine and toluene and evaporated to dryness. The residue was dissolved in 0.2 mL of dry pyridine. Twenty-five microliters of (R)-(−)-α-methoxy-α-(trifluoromethyl)phenylacetyl chloride was added in three portions, and the solution was stirred for 24 h. The mixture was diluted with pyridine and sodium hydrogen carbonate solution (1%), extracted with DCM, washed with water twice, and dried by evaporation with toluene. The residue was purified by Si flash chromatography (12 g silica gel, 40 µm; Reveleris [Grace]; solvent A, petroleum ether; solvent B, ethyl acetate; gradient, 0% B for 1 min, in 1 min to 9% B, 9% B for 1 min, in 4.7 min to 36% B, 36% B for 2.5 min; flow rate, 36 mL/min). The main peak was collected and evaporated to dryness, yielding 39 mg of compound 13 (C60H60F6N2O16; M = 1179.1); for NMR data, see Table S2.
(ii) Refinement. All non-H atoms were located in the electron density maps and refined anisotropically. C-bound H atoms were placed in positions of optimized geometry and treated as riding atoms. Their isotropic displacement parameters were coupled to the corresponding carrier atoms by a factor of 1.2 (CH, CH2) or 1.5 (CH3, OH) for compound 3 (sh3137_a_sq) and compound 6 (sh3191). Restraints of 0.84 (0.01) Å were used for O-H bond lengths. For compound 7 (sh3279), the O-bonded H atoms were located in the electron density maps. Their positional parameters were refined using isotropic displacement parameters which were set at 1.5 times the equivalent isotropic displacement parameter (Ueq) value of the parent atom. Regarding disorder, for compound 7 (sh3279), each oxirane atom is not fully occupied (O4a, 0.76; O4b, 0.24); furthermore, the propylene group of the main compound as well as one solvent ethanol molecule is split over two positions. Their occupancy factors refined to 0.87 and 0.76, respectively, for the major components. For compound 6 (sh3191), two of the solvent ethanol molecules and the propylene residue of the structure were split over two positions. Their occupancy factors refined to 0.55, 0.86, and 0.81 for the major components, respectively. Regarding SQUEEZE, for compound 3 (sh3137_a_sq), the unit cell contains approximately 2 solvent ethanol molecules (occupancy factor less than 1.0 for each ethanol molecule), which were treated as a diffuse contribution to the overall scattering without specific atom positions by SQUEEZE/PLATON.
Cloning and engineering of the dis427 gene cluster. The myxobacterium Sorangium cellulosum So ce427 was cultivated in CYH medium (1.5 g/L Casitone, 1.5 g/L yeast extract, 4 g/L starch, 1 g/L soy meal, 1 g/L glucose, 1 g/L calcium chloride dihydrate, 0.5 g/L magnesium sulfate heptahydrate, 5.96 g/L HEPES, 4 mg/L Na-Fe-EDTA; pH 7.3) at 30°C. Clumpy cells were collected by centrifugation and then homogenized into a suspension for isolation of genomic DNA according to the published protocol (42). The genomic DNA was treated with RNase A and the appropriate DNA restriction endonuclease (MluCI or BstXI) to eliminate RNA contamination and to release the gene cluster. The digested DNA was recovered by phenol-chloroform extraction and ethanol precipitation and finally dissolved in autoclaved Milli-Q water. The linear cloning vectors (p15A-cm or pBR322-amp), containing homology arms matching the ends of the released genomic DNA fragments, were generated by PCR. The digested DNA and the corresponding cloning vector were then used for electroporation of Escherichia coli GB05-dir expressing RecET recombinase for linear-linear DNA homologous recombination (LLHR) (42). Recombinant plasmids were isolated from antibiotic-resistant colonies and verified by restriction digestion and Sanger/Illumina sequencing. The correct recombinant plasmids p15A-cm-MluCI-dis427 and pBR322-amp-BstXI-dis427 were then digested with MluCI and BstXI, respectively, to release the cloned fragments. These two fragments were then assembled with a PCR-amplified fragment and a p15A-cm vector by LLHR, leading to the generation of plasmid p15A-cm-dis427. To construct p15A-km-int-dis427 and p15A-km-int-Ptet-dis427, the km-int or km-int-Ptet cassette (Table S15) was amplified by PCR and electroporated into E. coli GB05-red harboring p15A-cm-dis427 and expressing Redαβ recombinase for linear-circular homologous recombination (LCHR) (42) to replace the cm cassette. Similarly, the expression plasmid p15A-km-int-Pvan-dis427 was constructed from p15A-km-int-dis427 by LCHR using the apr-Pvan-disA cassette. p15A-km-int-Ptet-Papr-dis427 and p15A-km-int-Ptet-dis427-gent-delF were constructed from p15A-km-int-Ptet-dis427 by LCHR using the Papr-disD and gent-delF cassettes, respectively.
Electroporation of M. xanthus DK1622. M. xanthus DK1622 was inoculated into CTT medium (10 g/L Casitone, 10 mM Tris-HCl, 8 mM magnesium sulfate, 1 mM potassium phosphate; pH 7.6) and incubated at 30°C with shaking until exponential phase. For preparation of electrocompetent cells, 1.8 mL of culture was transferred into a 2-mL microcentrifuge tube with a hole punched in the cap. Cells were centrifuged, washed twice, and finally resuspended in 50 µL of autoclaved Milli-Q water. After addition of 1 to 3 µg of plasmid DNA, the mixture was transferred into a 1-mm cuvette. Electroporation was performed at 650 V, 400 Ω, and 25 µF using a Bio-Rad Gene Pulser Xcell electroporation system. The pulsed cells were mixed with 1.6 mL of fresh CTT medium and transferred back into the 2-mL microcentrifuge tube. After recovery at 30°C for 6 h in an Eppendorf thermomixer, the cells were mixed with 10 mL of CTT soft agar (0.6% agar, supplemented with 50 µg/mL kanamycin) and poured onto a CTT agar plate (1.2% agar, supplemented with 50 µg/mL kanamycin). The plate was incubated at 30°C, and colonies appeared after 4 to 7 days. Kanamycin-resistant colonies were picked and streaked onto a new CTT agar plate supplemented with 50 µg/mL kanamycin. In order to verify intact integration of the biosynthetic gene cluster, three pairs of primers located at different positions of the gene cluster were used for colony PCR. Cells were scraped from the agar plate, suspended in 50 µL of autoclaved Milli-Q water, and incubated at 100°C for 20 min. After centrifugation, 1 to 2 µL of the supernatant was used as the PCR template.
Heterologous production of disorazole Z in M. xanthus DK1622. M. xanthus DK1622 mutants carrying the integrated disorazole biosynthetic gene cluster were inoculated into 1.6 mL of CTT medium supplemented with 50 µg/mL kanamycin in a 2-mL microcentrifuge tube with a hole punched in the cap and incubated at 30°C in an Eppendorf thermomixer for 1 day. After that, 1 mL of the culture was inoculated into 50 mL of CTT medium supplemented with 50 µg/mL kanamycin in a 300-mL baffled flask and incubated at 30°C and 180 rpm for 2 days. After addition of inducer (0.5 µg/mL anhydrotetracycline when the Ptet promoter was used, 1 mM vanillate when the Pvan promoter was used) and 1 mL of XAD-16 resin, incubation was continued for 2 more days. Cells and XAD-16 resin were collected by centrifugation and resuspended in methanol for extraction. After filtration, the extracts were dried by rotary evaporation in vacuo and redissolved in 1 mL of methanol for HPLC-MS analysis using the method described above.
Isolation of disorazole Z from M. xanthus DK1622 mutants. For heterologous production of disorazole Z1 (compound 3), the fully grown culture of M. xanthus DK1622::km-int-Ptet-Papr-dis427 or M. xanthus DK1622::km-int-Pvan-dis427 was inoculated 1:100 into 2 L of CTT medium supplemented with 50 µg/mL kanamycin in 5-L unbaffled flasks and cultivated at 30°C and 180 rpm for 2 days. After addition of inducer (0.5 µg/mL anhydrotetracycline when the Ptet promoter was used, 1 mM vanillate when the Pvan promoter was used) and 2% XAD-16 resin, incubation was continued for 3 more days. Cells and XAD-16 resin were collected by centrifugation, lyophilized to dryness, and extracted stepwise with methanol. The methanol extract was concentrated and partitioned with n-hexane to remove nonpolar impurities. After evaporation in vacuo, the methanol extract was dissolved in Milli-Q water and extracted twice with equal portions of ethyl acetate. The ethyl acetate extract was evaporated to dryness, redissolved in methanol, and fractionated by Si flash chromatography on a Biotage system. The gradient was set as follows: 0 to 20 column volumes (CV), hexane to ethyl acetate; 20 to 40 CV, ethyl acetate to MeOH; 40 to 45 CV, MeOH. Fractions containing the targeted compounds were combined and further separated by semipreparative HPLC on a Waters XSelect peptide BEH C18 column (250 by 10 mm; 5 µm) using A (Milli-Q water plus 0.1% formic acid) and B (acetonitrile plus 0.1% formic acid) as mobile phases at a flow rate of 5 mL/min and a column temperature of 45°C. Gradient conditions were set as follows: 0 to 4 min, 5% B; 4 to 5 min, to 53% B; 5 to 20 min, 53% B; 20 to 21 min, to 95% B; 21 to 25 min, 95% B; 25 to 26 min, to 5% B; 26 to 30 min, 5% B. Fractions were detected by UV at 320 nm and collected with an AFC-3000 fraction collector based on a retention time of 17.7 min. Heterologous production and purification of disorazole Z9 (compound 11) and Z10 (compound 12) were done in a similar manner using the strain M. xanthus DK1622::km-int-Ptet-dis427-delF. For purification of compound 11 by semipreparative HPLC, separation was achieved on a Waters XBridge peptide BEH C18 column (250 by 10 mm; 5 µm) using the following gradient: 0 to 5 min, 5% B; 5 to 15 min, to 40% B; 15 to 25 min, 40% B; 25 to 26 min, to 95% B; 26 to 31 min, 95% B; 31 to 32 min, to 5% B; 32 to 35 min, 5% B. Fractions were detected by UV at 320 nm and collected with an AFC-3000 fraction collector based on a retention time of 21.8 min. Similarly, compound 12 was purified using a modified gradient: 0 to 5 min, 5% B; 5 to 15 min, to 33% B; 15 to 50 min, 33% B; 50 to 51 min, to 95% B; 51 to 56 min, 95% B; 56 to 57 min, to 5% B; 57 to 60 min, 5% B. Fractions were detected by UV at 320 nm and collected with an AFC-3000 fraction collector based on a retention time of 46.0 min.
Purification and in vitro reaction of recombinant protein DisF427. The DNA fragment containing disF427 was amplified by PCR using So ce427 genomic DNA as the template and the oligonucleotides HisTEV-disF-F and HisTEV-disF-R. The plasmid pHis-TEV (50) was linearized with NcoI/XhoI and assembled with the PCR fragment by Gibson assembly, generating the protein expression construct pHisTEV-disF427. After Sanger sequencing, this construct was electroporated into E. coli BL21(DE3). The resulting strain was cultivated in 300-mL flasks at 37°C overnight in 50 mL of LB medium (10 g/L tryptone, 5 g/L yeast extract, 5 g/L sodium chloride; pH 7.0) supplemented with 50 µg/mL kanamycin. Twenty milliliters of overnight culture was used to inoculate 2 L of fresh LB medium supplemented with 50 µg/mL kanamycin in a 5-L flask. After cultivation at 37°C until the optical density at 600 nm (OD600) was about 0.6, the culture was cooled to 16°C, induced with IPTG (isopropyl-β-D-thiogalactopyranoside) at a final concentration of 0.1 mM, and then further cultivated at 16°C overnight. Cells were harvested by centrifugation at 4°C, resuspended in ice-cold lysis buffer (25 mM Tris, 200 mM NaCl, 10% glycerol, 20 mM imidazole; pH 8.0), and lysed using a continuous-flow cell disrupter (Constant Systems) at 25,000 lb/in² and 4°C. After centrifugation at 23,500 rpm and 4°C for 30 min, the cell debris was removed and the supernatant was loaded onto a 5-mL HisTrap HP column (GE Healthcare) for nickel affinity chromatography using an ÄKTA protein purification system. Fractions containing recombinant protein in elution buffer (25 mM Tris, 200 mM NaCl, 10% glycerol, 250 mM imidazole; pH 8.0) were collected and loaded onto a HiPrep 26/10 desalting column (GE Healthcare) to remove imidazole using desalting buffer (25 mM Tris, 200 mM NaCl, 10% glycerol; pH 8.0). The eluates were collected, concentrated, and stored at −80°C.
The in vitro reaction was carried out at 30°C for 1 h in a 50-µL mixture containing 25 mM Tris, 200 mM NaCl, 2 mM MgCl2, 2 mM S-adenosyl methionine, 5 µL (about 1 mg/mL) of recombinant protein, and 0.5 µL (1 mg/mL in MeOH) of disorazole Z9 (compound 11) or Z10 (compound 12). Boiled (100°C for 10 min) recombinant protein was used as a negative control. After addition of 50 µL of MeOH, the mixture was vortexed and centrifuged at 15,000 rpm for 15 min. Two microliters of the supernatant was used for HPLC-MS analysis.
Biological characterization. (i) IC50 determination. Cell lines were obtained from the German Collection of Microorganisms and Cell Cultures (DSMZ) or the American Type Culture Collection (ATCC) and were handled according to standard procedures as recommended by the depositor. Cells were seeded in 96-well plates and treated with disorazole A1 (compound 1) or disorazole Z1 (compound 3) at serial dilutions for 48 h. Viability was determined by adding resazurin sodium salt for 3 h. Fluorescence measurements were performed using a SpectraMax T5 plate reader (Molecular Devices). Readouts were referenced, and IC50 values were determined by sigmoidal curve fitting using OriginPro software. Data for compound 3 were determined in duplicate in three independent experiments.
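To illustrate the sigmoidal curve fitting mentioned above (performed in the study with OriginPro), the following Python sketch fits a four-parameter logistic model to referenced viability data; the concentrations, viability values, and starting parameters are invented for demonstration only.

```python
# Sketch of IC50 estimation by fitting a four-parameter logistic (sigmoidal)
# model to referenced viability readouts. The concentrations and viability
# values are synthetic; the study itself used OriginPro for this step.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])           # nM
viability = np.array([0.99, 0.97, 0.80, 0.45, 0.15, 0.05, 0.03])  # fraction of control

popt, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 1.0, 0.3, 1.0])
print("IC50 ~ %.2f nM" % popt[2])
```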
(ii) Immunostaining and high-content imaging. U-2 OS cells were seeded in 96-well imaging plates. After overnight equilibration, the cells were treated with compound 1 or compound 3 as assigned and incubated for up to 24 h. Cells were fixed with cold (−20°C) acetone-MeOH (1:1) for 10 min. After being washed with phosphate-buffered saline (PBS), the cells were permeabilized with 0.01% Triton X-100 in PBS. The following primary antibodies (Sigma) were used: α-tubulin monoclonal antibody (MAb), acetylated tubulin MAb, and GRP78/BiP MAb. For labeling, cells were incubated with primary antibody for 45 min at 37°C, followed by incubation with the secondary antibody (Alexa Fluor 488 goat anti-mouse or anti-rabbit immunoglobulin; Molecular Probes) under the same conditions. After the cells were washed with PBS, the nuclear stain Hoechst 33342 (5 µg/mL) was applied for 10 min. Samples were imaged on an automated microscope (BD Pathway 855) suitable for high-content screening. In order to capture full-width pictures of a larger area, a built-in stitching technique was used to combine multiple frames. A representative part of the larger microscopy image was cropped for illustrations. In the case of GRP78 immunostaining, fluorescence intensity was determined in the cytoplasmic segment, defined as a ring around the nuclei. The relative intensity of GRP78 fluorescence in this region of interest was used as a measure of ER stress/UPR.
(iii) Live-cell imaging of the ER. U-2 OS cells were seeded in 96-well imaging plates and transfected using CellLight ER-GFP (Thermo Fisher Scientific) according to the manufacturer's protocol. After treatment with compound 1 and compound 3 in triplicate as indicated, live cells were imaged on an automated microscope (BD Pathway 855).
(iv) MMP. U-2 OS cells were seeded at 8 Â 10 3 cells/well in 96-well imaging plates and were treated after overnight equilibration with compound 3 as assigned. Following treatment, the cells were washed twice with PBS and 100 mL of a staining solution (50 nM tetramethyl rhodamine methyl ester [TMRM] and 5 mg/mL | 2023-06-16T06:16:23.842Z | 2023-06-15T00:00:00.000 | {
"year": 2023,
"sha1": "26c5bec77c4a68f2d8a39229e229017785536286",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/spectrum.00730-23",
"oa_status": "GOLD",
"pdf_src": "ASMUSA",
"pdf_hash": "a8d7fbe1c4fdadaf572a57e464ebcd59565bf816",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249726167 | pes2o/s2orc | v3-fos-license | Soluble guanylate cyclase (sGC) stimulator vericiguat alleviates myocardial ischemia-reperfusion injury by improving microcirculation
Background This study aimed to verify the effect of the soluble guanylate cyclase (sGC) stimulator vericiguat on myocardial ischemia-reperfusion injury and explore its mechanism. Methods A myocardial ischemia-reperfusion injury model was established in mice, and intravenous administration was performed 2 minutes before reperfusion. Triphenyltetrazolium chloride (TTC) staining and echocardiography were used to verify the effect of vericiguat on myocardial ischemia-reperfusion injury in the infarct area, and immunofluorescence was used to observe myocardial pathological changes at different time points after reperfusion. Quantitative proteomics was conducted to analyze the main differentially expressed proteins after drug intervention. The distribution of endothelial cells and sGC after myocardial ischemia-reperfusion injury in mice was observed by immunofluorescence. RNA sequencing of endothelial cells was used to search for differentially expressed molecules. Thioflavin-S staining was used to observe the effect of vericiguat on improving the no-reflow phenomenon and reducing the infarct size after reperfusion. Results The effect of the sGC stimulator vericiguat on myocardial ischemia-reperfusion injury was verified, and myocardial microcirculation significantly increased after drug intervention. Quantitative proteomics found that protein expression in myocardial tissue of the ischemia-reperfusion area was not significantly different in the drug intervention group, except for increased adenosine triphosphate (ATP) activity. Vericiguat, nitroglycerin, and nitrite did not directly affect apoptosis or cell viability. RNA sequencing of human umbilical vein endothelial cells identified an upregulated antioxidant response. Conclusions The sGC stimulator vericiguat ameliorated myocardial ischemia-reperfusion injury through the indirect pathway of improving microcirculation.
Introduction
Despite significant advances in surgery and drug therapy in recent years, ischemic heart disease remains a leading cause of death worldwide (1)(2)(3). A reduction in myocardial oxygen supply due to thrombosis caused by coronary atherosclerotic plaque leads to cardiac tissue damage and subsequent biochemical and metabolic changes that ultimately lead to myocardial cell death (4)(5)(6). When the coronary artery is completely occluded, cell death is further intensified and coronary microcirculation is markedly reduced, resulting in severe structural and functional disorders and acute myocardial infarction (AMI). Currently, reperfusion therapy is considered the most effective intervention to reduce infarct size and improve clinical outcomes; the most classical approaches are percutaneous coronary intervention (PCI) and surgical coronary artery bypass grafting (CABG), which are the gold standard therapies for restoring blood flow.
Nitric oxide (NO) is a small signaling molecule that can pass freely through biological barriers and plays an important role as a messenger and effector in the cardiovascular system (12,13). Researchers have proposed the strategy of using exogenous NO to dilate blood vessels and reduce the production of reactive nitrogen species. Animal experiments have confirmed the effectiveness of NO donors, such as organic nitrates and nitrites, in alleviating myocardial ischemia-reperfusion injury (14,15). These drugs may reduce infarct area and improve cardiomyocyte survival by reducing platelet aggregation, alleviating inflammation, and reducing oxidative stress (16)(17)(18)(19)(20). Although many animal experiments have confirmed that nitrite drugs can protect against myocardial ischemia-reperfusion injury, the clinical trial results of these drugs have not been ideal, and their efficacy in patients with acute coronary syndrome (ACS) is still uncertain. The primary endpoint and all secondary endpoints of the 2 human trials of nitrite as a pretreatment agent were negative (21)(22)(23).
Recently, soluble guanylate cyclase (sGC) agonists and stimulators have been developed and applied to improving the prognosis of patients with heart failure with reduced ejection fraction (EF). The largest clinical study of sGC stimulators/agonists is the Vericiguat Global Study in Subjects with Heart Failure with Reduced Ejection Fraction (VICTORIA) study (24)(25)(26)(27). As a phase III clinical trial, the study involved 5,050 patients, the median follow-up time was 10.8 months, and cardiovascular death and hospitalization rates were reduced by 3%. Based on the results of the VICTORIA study, vericiguat was officially approved by the U.S. Food and Drug Administration in January 2021 for patients with heart failure with reduced EF.
Considering that patients require drugs before or at the same time as PCI, and that the myocardial tissue is generally in a state of hypoxia in which reduced sGC is common, sGC stimulators may be better suited than sGC agonists. This study aimed to verify the protective effect of the sGC stimulator vericiguat on myocardial ischemia-reperfusion injury, to compare whether its effect was better than that of traditional NO donors, and to explore the specific mechanism by which vericiguat improves myocardial ischemia-reperfusion injury. We present the following article in accordance with the ARRIVE reporting checklist (available at https://atm.amegroups.com/article/view/10.21037/atm-22-2583/rc).
Animals
The animals used in this experiment were adult male C57BL/6 mice. The mice were housed in a clean animal facility at a constant temperature of 22 ℃, with free access to food and water and a 12/12-hour light-dark cycle. All animal experiments were approved by the animal ethics committee of Zhongshan Hospital, Fudan University (Ethical Approval No. 2020-091) and were carried out in accordance with the Guide for the Care and Use of Laboratory Animals, 8th edition. A total of 27 adult male C57BL/6 mice were randomly divided into a control group, an ischemia-reperfusion group, and a vericiguat group for tandem mass tag (TMT) protein mass spectrometry. A further 18 adult male C57BL/6 mice were randomly divided into the same 3 groups for triphenyltetrazolium chloride (TTC) staining.
Myocardial ischemia-reperfusion model
The mice were anesthetized with isoflurane and fixed on a board. Ethanol (75%) was used to disinfect the chest area, the skin was cut, the pectoralis major and pectoralis minor muscles were separated in turn, vascular forceps were inserted at the point of the most visible heartbeat, and the chest was opened. A needle was inserted 2 mm below the left atrial appendage, an area about 4 mm wide was ligated from left to right, and 2 cm of loose suture was left outside the thoracic cavity to facilitate release during reperfusion. The incision was closed with a purse-string suture, and the mice were placed on a constant-temperature heating pad. After 30 minutes, the ligature was loosened with the reserved thread end, the wound was sutured, and the mice were returned to their cages.
Quantitative proteomics
After the mice were anesthetized with 3% pentobarbital, the thoracic cavity was opened to expose the heart, the abdominal aorta was transected, and 5 mL of PBS solution was injected into the left ventricle. After thorough washing, the heart was excised above the atrial appendage, the left and right atria and the right ventricle were carefully removed, and the remaining tissue was soaked in precooled saline. The myocardial tissue below the ligation point was then cut off and stored in liquid nitrogen. The tissues were subsequently analyzed by quantitative proteomics after enzymatic digestion, desalting, sample labeling, fractionation, and other steps. It should be noted that, for proteomic analysis, a single sample requires about 100 mg of myocardial tissue, whereas the myocardial tissue below the ligation site of a single mouse is about 30 mg. Therefore, in practice, we combined the myocardial tissue from the ischemia-reperfusion area of three mice in the same group into one sample.
Measuring of cyclic guanosine monophosphate (cGMP) concentration
The concentration of cGMP in both tissues and cultured cells was measured with a cGMP Direct Immunoassay Kit (Abcam, Cambridge, MA, USA).
Cardiac ultrasound
The mice were depilated before the operation and at 1, 2, and 3 weeks after the operation. The mice were anesthetized and placed on a heating plate. The parasternal long-axis section was visualized in the precordial area of the mice with a small-animal ultrasonic probe, the M-mode image was obtained at the position where mitral valve opening and closing was most evident, and the B-mode image was recorded. EF and fractional shortening (FS) values were calculated.
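The EF and FS values mentioned above are derived from the M-mode left-ventricular diameters. The exact formulas used by the ultrasound software are not stated in the text, so the sketch below assumes the standard fractional-shortening definition and the Teichholz volume approximation (derived for human ventricles) purely for illustration; the input numbers are hypothetical.

```python
def lv_function_from_mmode(lvedd_mm: float, lvesd_mm: float) -> dict:
    """Estimate FS and EF from M-mode LV internal diameters (illustrative sketch).

    lvedd_mm / lvesd_mm: end-diastolic / end-systolic internal diameters in mm.
    The actual formulas used by the study's ultrasound software are not specified.
    """
    d_d, d_s = lvedd_mm / 10.0, lvesd_mm / 10.0   # convert to cm
    fs = (d_d - d_s) / d_d * 100.0                # fractional shortening (%)
    # Teichholz volume approximation: V = 7.0 / (2.4 + D) * D**3  (mL, D in cm)
    edv = 7.0 / (2.4 + d_d) * d_d ** 3
    esv = 7.0 / (2.4 + d_s) * d_s ** 3
    ef = (edv - esv) / edv * 100.0                # ejection fraction (%)
    return {"FS_percent": round(fs, 1), "EF_percent": round(ef, 1)}

# Example with hypothetical diameters for a hypokinetic mouse LV
print(lv_function_from_mmode(lvedd_mm=4.2, lvesd_mm=3.1))
```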
TTC and Evans blue staining
After anesthesia, the mice were fixed on a board, the heart was exposed, the previously reserved thread was located and tied, and 0.5 mL Evans blue staining solution was injected through the left ventricle with a 1 mL needle into the apex of the heart. The heart was kept at −20 ℃ in a refrigerator for 1 hour and then sliced below the ligation point. Preprepared 2% TTC dye was added, the slices were incubated at 37 ℃ in the dark for 30 minutes, and then fixed with 4% paraformaldehyde.
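Evans blue/TTC double staining is usually quantified by planimetry of the stained slices. The quantification formula is not given in this excerpt, so the following sketch assumes the common convention in which the area at risk is the Evans-blue-negative area and infarct size is the TTC-negative fraction of that area; all numbers are hypothetical.

```python
def infarct_metrics(total_area, blue_area, white_area):
    """Summarize Evans blue/TTC planimetry for one slice (areas in mm^2 or pixels).

    total_area: total LV slice area
    blue_area:  Evans-blue-stained (perfused, not at risk) area
    white_area: TTC-negative (infarcted) area
    Assumes area at risk = total - blue, and that the infarct lies within the area at risk.
    """
    area_at_risk = total_area - blue_area
    return {
        "AAR_over_LV_percent": 100.0 * area_at_risk / total_area,
        "infarct_over_AAR_percent": 100.0 * white_area / area_at_risk,
        "infarct_over_LV_percent": 100.0 * white_area / total_area,
    }

# Hypothetical planimetry values for one heart slice
print(infarct_metrics(total_area=25.0, blue_area=10.0, white_area=6.0))
```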
Isolation of cardiac myocytes
Cardiac myocytes were isolated by a simplified, Langendorff-free method (28).
Statistical analysis
Results are presented as mean ± standard error of mean (SEM). Comparisons between 2 groups were made using Student's t-test, and data obtained from multiple groups were compared using ANOVA. P value less than 0.05 was considered significant. GraphPad Prism 8.0 (GraphPad Prism Software Inc., San Diego, CA, USA) and SPSS 18.0 for Windows (SPSS, Inc., Chicago, IL, USA) were used for statistical analysis.
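As a minimal illustration of the comparisons described above (Student's t-test for two groups, one-way ANOVA for three groups, significance at P<0.05), the sketch below uses SciPy; the group values are random placeholders, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder values (e.g., infarct size as % of area at risk) for three groups of 6 mice
control = rng.normal(5, 2, size=6)
ir_group = rng.normal(40, 8, size=6)
vericiguat = rng.normal(25, 8, size=6)

# Two-group comparison with Student's t-test
t_stat, p_t = stats.ttest_ind(ir_group, vericiguat)
# Three-group comparison with one-way ANOVA
f_stat, p_f = stats.f_oneway(control, ir_group, vericiguat)

print(f"t-test (IR vs vericiguat): t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"one-way ANOVA (3 groups): F = {f_stat:.2f}, p = {p_f:.4f}")
print("significant at P<0.05" if p_f < 0.05 else "not significant")
```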
Difference of protein levels between normal myocardium and ischemic myocardium in the mouse myocardial ischemia-reperfusion injury model
In order to investigate the changes in protein levels in the myocardial tissue of mice during ischemia-reperfusion injury, we analyzed protein expression in the myocardial tissue of the experimental group and control group with TMT protein mass spectrometry. Proteomics analysis demonstrated that the expression of most molecules was generally upregulated (Figure 1A), which indicated that in the early stage of ischemia-reperfusion injury, the myocardial tissue was in a relatively "excited" state. Gene Ontology (GO) analysis showed that the upregulated proteins were concentrated in biological processes such as protein kinase activity, cell differentiation, collagen biosynthesis, platelet activation, and cell adhesion, among which the enhancement of protein kinase activity was the most obvious (Figure 1B). Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis focused on the activation of the complement and coagulation systems (Figure 1C). Reactome enrichment pathway analysis and interaction network analysis did not find significantly upregulated pathways or core molecules (Figure 1D,1E). The main reason is that the myocardial tissue in the ischemic area of a single mouse (about 30-40 mg) could not meet the requirements of quantitative proteomics analysis (at least 100 mg), so we had to use the myocardial tissue of three mice as one sample, which increased the confounding factors and may have caused a certain bias. Secondly, in addition to cardiomyocytes, the tissue of the myocardial ischemic area still contained other cells such as endothelial cells, neutrophils, smooth muscle cells, and fibroblasts. Although cardiomyocytes made up most of the ischemic tissue, this may still have caused some deviation. These results suggested that the occurrence and development of myocardial reperfusion injury was closely related to the enhancement of protein kinase activity. Considering that the NO-cGMP-protein kinase G (PKG) pathway is an important pathway affecting protein kinase activity, and that nitrates play an important role in the treatment of ACS, we then focused on NO and its downstream molecules.
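The GO/KEGG/Reactome enrichment results summarized above rest on an over-representation statistic; the analysis pipeline itself is not described in this excerpt, so the sketch below simply illustrates the underlying hypergeometric test for a single hypothetical term with invented counts.

```python
from scipy.stats import hypergeom

def enrichment_p(total_proteins, term_proteins, upregulated, upregulated_in_term):
    """One-sided hypergeometric (over-representation) p-value for one annotation term.

    total_proteins:       all quantified proteins (background)
    term_proteins:        background proteins annotated to the term
    upregulated:          proteins called upregulated
    upregulated_in_term:  upregulated proteins annotated to the term
    """
    # P(X >= upregulated_in_term) under sampling without replacement
    return hypergeom.sf(upregulated_in_term - 1, total_proteins,
                        term_proteins, upregulated)

# Hypothetical counts for a term such as "protein kinase activity"
print(enrichment_p(total_proteins=4000, term_proteins=120,
                   upregulated=300, upregulated_in_term=25))
```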
Dynamic changes of the NO-cGMP-PKG pathway and its downstream molecules at different time points of reperfusion
Considering the effect of NO on protein kinase function and the important role of the NO-cGMP-PKG pathway in the treatment of ACS, we measured the dynamic changes of cGMP and its downstream molecules at different time points of myocardial ischemia-reperfusion in mice using Western blot and quantitative reverse transcription polymerase chain reaction (RT-qPCR). The results showed that the expression of molecules of the NO-cGMP-PKG pathway was significantly upregulated from 3 to 6 hours after reperfusion (Figure 2A). The Western blot results of GUCY1B3, the core molecule of sGC, were consistent with the PCR results (Figure 2B,2C). The cGMP levels measured by the cGMP kit at different time points also verified the above results, and the upregulation was most obvious 6 hours after reperfusion (Figure 2D). Considering the role of NO and the use of related drugs in ACS, we speculated that the molecules related to the NO-cGMP-PKG pathway increased in a compensatory way and played a protective role in the process of myocardial ischemia-reperfusion in mice. The cGMP level and NO-cGMP-PKG pathway-related molecules decreased gradually at 12 and 24 hours of reperfusion and then remained at a relatively low level.
Vericiguat could reduce infarct size and improve cardiac function after myocardial ischemia-reperfusion injury
Firstly, we determined the drug concentration of vericiguat used in the experiment. The safety of vericiguat 10 mg once daily in patients with heart failure and reduced EF had been verified by the VICTORIA study. The safety of other dosages in patients has not been tested in clinical practice. Concentrations of 0.01 to 100 μM in vitro and doses of 3 to 10 mg/kg in animal models are commonly used and are generally safe. Because the main function of vericiguat is to stimulate the sGC receptor, enhance its stability after binding with NO, and promote the production of cGMP, we tested the ability of vericiguat to promote the increase of cGMP at different concentrations in vivo and in cardiomyocytes (Figure 3A). The effect of vericiguat on cGMP was notable at a concentration of 1 μM in vitro and 3 mg/kg in mice.
These concentrations of vericiguat were therefore chosen for this study. TTC staining showed that vericiguat significantly increased the area of viable myocardium, suggesting that vericiguat significantly alleviated myocardial ischemia-reperfusion injury in mice, and its effect could be blocked by the endothelial NO synthase (eNOS) inhibitor NG-nitro-L-arginine methyl ester (L-NAME) (Figure 3B-3D). Cardiac ultrasound also confirmed this result (Figure 3E-3G). The left ventricular EF and left ventricular short-axis FS in the treatment group were significantly better than those in the control group from 1 week to 3 weeks postoperation.
Vasodilation of myocardial microcirculation in the acute/subacute phase after ischemia-reperfusion injury
In order to explore the specific mechanism of vericiguat in improving myocardial ischemia-reperfusion injury in mice, wheat germ agglutinin (WGA) staining was performed 6 hours and 4 days postoperation. Microcirculation blockage is an important pathological manifestation of myocardial ischemia-reperfusion injury. The results showed that the morphology of the myocardium in the control group was intact 6 hours after reperfusion; the myocardial morphology was disordered and the number of cardiomyocytes decreased significantly in the ischemia-reperfusion injury group; and in the drug intervention group, there was notable opening of the myocardial microcirculation and relatively intact morphology of cardiomyocytes (Figure 4). The staining results on the 4th day postoperation showed that a large number of capillaries were opened in both the ischemia-reperfusion injury group and the drug intervention group, while the microvessel density was greater and the myocardial morphology was better preserved in the drug intervention group. These immunofluorescence images show the clear effect of vericiguat in dilating the myocardial microcirculation in the acute and subacute stages of myocardial ischemia-reperfusion injury.
Quantitative proteomics to explore the effect of vericiguat on cardiomyocytes
In order to explore the specific mechanism of vericiguat in reducing the infarct area in a mouse model of myocardial ischemia-reperfusion injury, we conducted quantitative proteomic analysis of the myocardial tissue in the ischemia-reperfusion area of mice in the vericiguat intervention group and the simple operation group. The results showed that, compared with the simple operation group, there was no significant difference in protein expression in the myocardial tissue of the ischemia-reperfusion area in the drug intervention group [except for the enhancement of adenosine triphosphate (ATP) function] (Figure 5). Since the enhancement of ATP activity and binding ability is more the result than the cause of alleviating myocardial ischemia-reperfusion injury, we concluded that administration of vericiguat could not directly intervene in the biological function of myocardial tissue in the ischemia-reperfusion area, but instead improved myocardial ischemia-reperfusion injury through some indirect way.
Direct effect of vericiguat on cardiomyocytes in vitro
Based on quantitative proteomic analysis of myocardial tissue in the mouse myocardial ischemia-reperfusion area, we found that, apart from ATP function, the protein expression of myocardial tissue in the ischemia-reperfusion area in the vericiguat intervention group was not significantly different from that in the simple operation group. Therefore, we speculated that vericiguat intervention could improve myocardial ischemia-reperfusion injury through indirect ways, and that it had no direct effect on the biological function of myocardial tissue in the ischemia-reperfusion area. Accordingly, in the next experiment, we used AC16 cells and adult mouse primary cardiomyocytes to establish an in vitro hypoxia/reoxygenation model (hypoxia for 6 hours/reoxygenation for 1 hour) and compared the effects of vericiguat and traditional NO donors on cardiomyocyte apoptosis and cell viability in vitro.
We found that in the hypoxia/reoxygenation model of AC16 cells cultured in vitro, the intervention of vericiguat could not directly reduce apoptosis, and the classic anti-ischemic drug nitroglycerin could not reduce the apoptosis of cardiomyocytes either (Figure 6A). In the hypoxia/reoxygenation model of primary cardiomyocytes of adult mice, vericiguat could not directly affect the survival rate of cardiomyocytes (Figure 6B). Similarly, nitroglycerin and nitrite could not directly affect the survival rate of cardiomyocytes. In addition, vericiguat and NO donor drugs could not directly affect the ATP concentration of primary cardiomyocytes in mice (Figure 6C). This was consistent with the results of quantitative proteomics in the previous step, which once again verified that vericiguat could not directly interfere with the biological function of myocardial tissue in the ischemia-reperfusion area.
Distribution of sGC after ischemia-reperfusion injury
In contrast to traditional nitrates or nitroglycerin, sGC stimulators can have an effect on muscle cells and can also act directly on endothelial cells. We selected different time points (6 hours and 1 day) after myocardial ischemia-reperfusion in mice and found that, on the basis of the proliferation of myocardial microvascular endothelial cells, the expression of sGC increased significantly after myocardial ischemia-reperfusion in mice (Figure 7). sGC was widely expressed in cardiomyocytes and endothelial cells, but its distribution was more concentrated in endothelial cells. This indicated that the sGC stimulator had mostly direct effects on endothelial cells, which was consistent with our previous speculation. The proliferation and spread of endothelial cells after myocardial ischemia-reperfusion injury in mice also reflected the possibility of vericiguat acting on myocardial microvascular endothelial cells to indirectly alleviate myocardial ischemia-reperfusion injury.
Discussion
There have been many studies on the pathophysiological process of myocardial ischemia-reperfusion injury and its related therapeutic targets, including ischemic preconditioning/postconditioning, inhibition of MPTP opening, apoptosis, circadian rhythm, energy metabolism, Ca2+ imbalance, and extracellular vesicles (29)(30)(31). Some studies have achieved good results, but they are still rarely transformed into specific therapeutic measures in the clinical field (32)(33)(34). Since NO is an important therapeutic target and NO donors including nitroglycerin and isosorbide mononitrate have been used clinically to alleviate AMI for many years, researchers have been focusing on regulating the production of NO by NO donors to improve myocardial ischemia-reperfusion injury. Previous studies have shown that the lack of NO synthase can lead to spontaneous myocardial infarction and the aggravation of myocardial ischemia-reperfusion injury (35)(36)(37). NO, a small volatile signaling molecule that belongs to the so-called "gas transmitter" class, plays an important role as a messenger and effector in the cardiovascular system. Many teams have studied the use of NO donors to improve the bioavailability of NO after myocardial ischemia. Animal experiments with NO donors have confirmed the effectiveness of nitrate and nitrite in reducing myocardial ischemia-reperfusion injury through anti-platelet-aggregation, anti-inflammatory, antioxidant, anti-arrhythmic, and anti-apoptotic effects (38)(39)(40). However, the results of clinical trials are still inconclusive and the outlook remains uncertain. The results of clinical studies show that there are certain defects in the application of NO donors alone in myocardial ischemia-reperfusion injury. Considering the contradiction between animal experiments and clinical trials, we suspect that NO is not actually insufficient in the early stage of myocardial ischemia-reperfusion injury. Moreover, excess NO can readily react with superoxide anions to form peroxynitrite, which aggravates oxidative stress (41)(42)(43). Thus, we conducted a series of experiments to test this hypothesis.
This study demonstrated that the cGMP level of myocardial tissue in the ischemia-reperfusion area first rose markedly and then declined after injury, and the downstream molecules of the NO-cGMP-PKG pathway showed the same change, which was most significant at 6 hours. The level of cGMP and the downstream molecular expression level of the NO-cGMP-PKG pathway then decreased gradually. These findings verified our theory that NO was not lacking in the initial stage of myocardial ischemia-reperfusion injury. Compared with NO donors, the sGC stimulant vericiguat can enhance the function of NO by stabilizing the sGC conformation and enhancing its binding ability with NO, while it does not influence the production of NO (44). We then demonstrated that the sGC stimulator vericiguat was superior to the NO donors nitrite and nitroglycerin in protection against ischemia-reperfusion injury. According to the results of our research, the novel oral sGC stimulant vericiguat given 2 minutes before revascularization may reduce the infarct size after ligation of the left anterior descending branch in mice. Thus, we think it may be a potential clinical therapeutic option to alleviate myocardial ischemia-reperfusion injury at the onset of AMI. When searching for the mechanism of action, we found that there was no significant difference in myocardial tissue protein expression in the drug intervention group compared with the surgery-only group, which was unusual and suggested that the sGC stimulator vericiguat must have involved other mechanisms to improve myocardial ischemia-reperfusion injury. Unlike traditional nitrates, which must be activated via smooth muscle cells, the sGC stimulator vericiguat also acted on endothelial cells, which proliferated significantly in anoxic environments. These results indicated that, owing to the specialization of their structure and function, cardiomyocytes could not process complex biological signals, and myocardial microvascular endothelial cells may have played a role in mediating the ability of cardiomyocytes to detect the external environment and respond accordingly. Thus, after verifying the high expression of sGC in endothelial cells, messenger RNA (mRNA) sequencing will be performed in further experiments to explore how vericiguat acts on endothelial cells.
Our study had some limitations. Firstly, as the myocardial tissue in the myocardial ischemic area of a single mouse cannot meet the quality requirements of quantitative mass spectrometry analysis, we had to use the myocardial tissue in the myocardial ischemic area of 3 mice as a sample, which may have increased confounding factors and caused certain bias. Secondly, in addition to cardiomyocytes, the tissue of the myocardial ischemic area contained other cells, including myofibroblasts, endothelial cells, vascular smooth muscle cells, nerves, and neurons. Cardiac mitophagy also plays an important role in the pathological process of myocardial ischemia-reperfusion injury. Whether vericiguat may enhance cardiac mitophagy and reduce the production of ROS needs to be further studied. | 2022-06-17T15:05:47.471Z | 2021-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "cfb911dd79fdb17e8271cae6a5c7875229fad7d3",
"oa_license": "CCBYNCND",
"oa_url": "https://atm.amegroups.com/article/viewFile/96491/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b23a3efd262fcd80cba73facacc2cbe4b6b0018b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119126832 | pes2o/s2orc | v3-fos-license | New porous medium Poisson-Nernst-Planck equations for strongly oscillating electric potentials
We consider the Poisson-Nernst-Planck system which is well-accepted for describing dilute electrolytes as well as transport of charged species in homogeneous environments. Here, we study these equations in porous media whose electric permittivities show a contrast compared to the electric permittivity of the electrolyte phase. Our main result is the derivation of convenient low-dimensional equations, that is, of effective macroscopic porous media Poisson-Nernst-Planck equations, which reliably describe ionic transport. The contrast in the electric permittivities between liquid and solid phase and the heterogeneity of the porous medium induce strongly oscillating electric potentials (fields). In order to account for this special physical scenario, we introduce a modified asymptotic multiple-scale expansion which takes advantage of the nonlinearly coupled structure of the ionic transport equations. This allows for a systematic upscaling resulting in a new effective porous medium formulation which shows a new transport term on the macroscale. Solvability of all arising equations is rigorously verified. This emergence of a new transport term indicates promising physical insights into the influence of the microscale material properties on the macroscale. Hence, systematic upscaling strategies provide a source and a prospective tool to capitalize intrinsic scale effects for scientific, engineering, and industrial applications.
(Dated: 25 April 2014)
a) Electronic mail: m.schmuck@imperial.ac.uk.
I. INTRODUCTION
The Poisson-Nernst-Planck (PNP) equations can be applied in many different physical contexts such as modeling of ionic transport, e.g. batteries, supercapacitors, fuel cells, and capacitive desalination devices. Especially the fields of electrokinetics and electrohydrodynamics have gained increasing interest in recent years. Current research aims to take advantage of scale effects in micro- and nano-fluidic devices for industrial applications and for the creation of chip-like devices ("lab on a chip"). Such devices can perform separation, mixing, and chemical analysis tasks. It is also possible to design electrokinetic pumps. 1 The study of geometric effects on the scale of cell membranes, muscles, and neurons by means of PNP equations currently receives a lot of attention in biology and medicine. 2, 3 The essential goal is to better understand how calcium ions, i.e. Ca2+ ions, move in voltage-dependent calcium channels, for example. These channels are a group of voltage-gated ion channels which can be found in muscles, glial cells, and neurons. Recent research attempts to mimic such biological ion channels with synthetically built channels. 4 For example, by modifying channel geometry and surface charge one tries to better understand the effect of rectification. Rectification can be descriptively explained by the comparison of ionic flux with an electric current through a pn-diode. One usually studies rectification factors (ratio of forward current to reverse current) in this context, see 5 for example. This broad range of applications in heterogeneous environments strongly relies on models which reliably and systematically account for effects of the microscale on the macroscale. A very common approach for deriving effective macroscopic equations is volume averaging. [6][7][8] Unfortunately, it is still unclear how to systematically treat nonlinear terms by this intuitive method. A technically slightly more involved approach is the homogenization method [9][10][11] which provides a reliable and systematic alternative under the assumption of a periodic pore distribution.
The general importance of and strong demand for properly upscaled equations in engineering, and in the design and optimization of scientific and industrial devices, call for mathematical tools that rely on well-established principles for multiscale problems. Here, we want to systematically extend the widely accepted PNP system from the free space case towards solid-electrolyte composites showing a high contrast between their electric permittivities. For this purpose, we consider the full PNP equations with the help of a modified asymptotic two-scale expansion. This new approach accounts for the nonlinear and coupled structure of the system, see Theorem III.2. As our main result, we derive new effective macroscopic porous medium PNP equations, the system (1), which are valid under local (pore-level) thermodynamic equilibrium and for arbitrary Debye lengths λ_D > 0. The parameter p denotes the porosity, and D_r, M, and ε_0 are effective transport tensors defined by the upscaling subsequently performed. The variable u_0^r represents effective macroscopic quantities such as the concentration of positively charged ions n^+ for r = 1, the density of negatively charged ions n^− for r = 2, and the electrostatic potential Φ for r = 3.
All equations appearing during the upscaling are rigorously justified by well-posedness criteria. In particular, Lemma III.4 (Section III) guarantees the solvability of the new system (1), which shows a new term D_r that accounts for a dominant influence of the oscillating electric potential on the concentrations. We emphasize that this new term emerges as a result of an adapted asymptotic multi-scale expansion introduced in order to account for the heterogeneity induced by the porous medium and by the electric permittivities. That means, in the classical asymptotic expansion we assume a special separation between the micro- and macro-scale in the terms u_1 and u_2, in contrast to related literature [12][13][14][15] and classical homogenization theory, 10,16 see (24) in Theorem III.2 below. We point out that the nonlinear character of the Nernst-Planck equations leads, with (2), to ill-posed reference cell problems without additional assumptions.
Hence, we suppose that the reference cells, which define the micro-geometry of the porous medium, are in thermodynamic equilibrium. This then guarantees well-posedness. This solvability issue might also be the reason why the upscaling of the PNP equations was mainly restricted to thin double layer type approximations 17,18 or linearized formulations in the context of Onsager reciprocal relations 27,28 so far. 12,13 Physically, such situations occur when the electric permittivities of the electrolyte and the solid material are far apart, see Section I A.
The article is organized in the following way: The dominant oscillating behavior of the electric potential is motivated by the contrast in the electric permittivities between the solid and the liquid phase in Section I A where also a related effective media theory is discussed.
A historical overview of closely related upscaling results is given in Section I B. In Section II, we state elementary results and introduce necessary notation. The main results follow in Section III. Finally, we prove all results in Section IV.
A. Physical motivation: Dielectric permittivities of solids and liquids
In this section, we state the physical setting which leads to strongly oscillating electric potentials in composites such as a porous medium permeated by a dilute electrolyte. A related example where such oscillations are well-known is the electric field over a material with strongly heterogeneous conductivities. This is also one of the classical fields of effective media theory 19 and homogenization theory 9,10 where one often assumes a periodic representation of the heterogeneities for simplicity. In fact, the high-frequency electric permittivity and the low-frequency electric conductivity are formally equivalent because of the equivalence in the governing equations. However, the situation for the PNP system here is slightly different since we have to deal with a non-linearly coupled system of equations. As explained previously, we account for this difference by a non-standard asymptotic expansion that factors in the strong influence of the electric potential. Moreover, the equivalence between permittivity and conductivity implies that their mathematical computation is equivalent.
As in the case of conductivity, the strength of the oscillations can be controlled by the distance between the different electric permittivities for our problem here. Since we study a dilute electrolyte, we can expect an electric permittivity of the liquid phase to be around 80 at room temperature and a frequency under 1kHz (of course, this also depends on the electrolyte employed). For the solid phase, we can expect an electric permittivity between 2 and 5, i.e., paper 3, alumina 4.5, teflon 2.1, porcelain 5.1, and plexiglas 2.8. But in many fields, a systematic derivation of effective media quantities such as the electric permittivity is still lacking. 20 Our subsequently derived equations reliably define such an effective electric permittivity for periodic porous media. We emphasize the importance of characterizing porous materials with respect to dielectric properties in microelectronics. 21 Moreover, we believe that a systematic and reliable upscaling of such complex composites using geometric and material properties together with experimental validation gives promising perspectives for new scientific, technological, and industrial applications.
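To make the permittivity contrast concrete, the short calculation below forms the dimensionless ratio α = ε_m/ε_f from the representative values quoted above (electrolyte around 80, typical solids between 2 and 5); the numbers are only illustrative.

```python
# Relative (dimensionless) permittivities quoted in the text
eps_fluid = 80.0                      # dilute aqueous electrolyte, room temperature, below 1 kHz
eps_solids = {"paper": 3.0, "alumina": 4.5, "teflon": 2.1,
              "porcelain": 5.1, "plexiglas": 2.8}

for name, eps_m in eps_solids.items():
    alpha = eps_m / eps_fluid         # contrast parameter alpha = eps_m / eps_f
    print(f"{name:10s} alpha = {alpha:.3f}")
# An alpha of order 1/40 to 1/16 quantifies the solid-liquid contrast that
# drives the strongly oscillating electric potential across the interface.
```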
B. Review of related upscaling results of the PNP equations
We briefly give a shortened historical overview, see Table I, and point out differences of contributions mainly based on the homogenization method. In 14, the authors perform a singular limit with respect to the dimensionless Debye length.
A weighted Debye length, i.e., λ^α, and the use of λ as the homogenization parameter amount to upscaling the PNP system in parallel with taking special (α > 0 arbitrary) thin double layer limits of the system. Especially for α = 1, this is an interesting problem since the thin double layer approximation is a widely used simplification, as already mentioned above.
II. NOTATION AND PRELIMINARIES
The following classical exposition recalls central definitions and results from 9-11. For the microscopic variable y := x/s, gradients applied to functions ψ^s(x) := ψ(x, x/s) satisfy the two-scale relation ∇ψ^s(x) = (∇_x ψ + s^{-1} ∇_y ψ)(x, x/s). Homogeneous Neumann problems for Poisson equations, for example, require the use of the space W_♯(Y) of Y-periodic functions defined up to an additive constant. For notational brevity, we do not introduce additional notation for an element of this equivalence class. A representative element of this equivalence class can be chosen by a mean zero condition over Y.
Lemma II.1 The quantity ψ ↦ ‖∇_y ψ‖_{L²(Y)} defines a norm on W_♯(Y). Moreover, the dual space (W_♯(Y))′ can be identified with a corresponding set of mean-zero functionals.
Definition II.2 Let c, C ∈ R be such that 0 < c < C and let D ⊂ R^N. We call N × N matrices A = {a_{ij}}_{1≤i,j≤N} ∈ (L^∞(D))^{N×N} strongly elliptic if, for any u ∈ R^N and a.e. in D, it holds that c|u|² ≤ (Au, u) and |Au| ≤ C|u|.
In our analysis we mainly have to deal with A = {δ_{ij}}_{1≤i,j≤N}, which obviously satisfies the conditions of Definition II.2.
Theorem II.3 Let A be a strongly elliptic matrix with Y-periodic coefficients and f ∈ (W_♯(Y))′. Then the associated periodic problem has a unique solution; moreover, a corresponding a priori estimate holds.
Remark II.4 Since Theorem II.3 makes a uniqueness statement, we consider in this case (5). We apply this convention in the whole article.
We frequently use the space V(Ω_T).
A. Review of the classical PNP equations
Before we come to the main results in this article, we briefly recall basics about the PNP system. Computational convenience (block matrix solvers, e.g. 34), notational clarity and compactness, and a non-linear (i.e., non-symmetric) extension of the classical Onsager relations, 27,28 which classically only hold in the linearized case, 12,13 motivate us to recall the PNP equations from 15 written for the field vector u := [n^+, n^−, Φ]′, where we further use the convention Ω_T := ]0, T[ × Ω. The bold-face notation Ω_T accounts for the fact that the components of the field vector u are defined in different domains of the porous medium later on, i.e., either in the whole domain Ω or only in the electrolyte phase Ω_s. We further denote the coordinate indices 1 ≤ k, l ≤ N and s_{i_k j_l}(u) = s_{ij}(u) δ_{kl} with δ_{kl} the Kronecker symbol, ∇_n := n · ∇ with n the normal vector pointing outward of Ω, and Γ_ι represents the Dirichlet (for ι = D) and Neumann (for ι = N) boundary surrounding the porous medium Ω, i.e. ∂Ω = Γ_D ∪ Γ_N. Hence, the first equation (14)_1 is equivalent to a classical system, which can be interpreted as a gradient flow of the free energy (17). We recall that (17) builds the basis of dilute solution theory, which accounts for thermodynamic quantities such as the entropy S formed by the first summand in the integral (17).
The remaining integrands, such as the energy density of interactions (second term) and the energy density of the electric field (third term), contribute to the internal energy U. We note that from the energy (17) we can obtain the chemical potentials of the ion densities u_1 = n^+ and u_2 = n^− by taking the first variation with respect to u_1 and u_2, respectively.
An interesting question is whether the minimization of the free energy (17) by a gradient flow also follows the physically relevant path far from thermodynamic equilibrium. In which physical sense does the flow with respect to the Wasserstein distance 35-38 provide optimality?
III. MAIN RESULTS
The study in this article relies on the system (14) reformulated for periodic porous media.
A scaling parameter s is defined as the ratio between the microscopic length scale ℓ and the macroscopic size L of the porous medium, i.e. s := ℓ/L ≪ 1. It is assumed that s scales the periodicity of the reference cell Y ⊂ R^N which defines the micro-geometry. In this reference cell, we denote the fluid (liquid) region by Y_s ⊂ Y such that its complement is the solid phase. After periodically covering the domain Ω by such cells Y, we denote the resulting macroscopic domain of the periodic union of the subsets Y_s by Ω_s and its complement by B_s := Ω \ Ω_s. Hence, the perforated domain B_s represents the solid phase and Ω_s the liquid phase, see Figure 1.
Figure 1. Top, middle: The reference cell Y represents a characteristic mean pore geometry. Right: The "homogenization limit" s := ℓ/L → 0 stands here for the leading order approximation of nonstandard two-scale expansions.
Under these considerations, the material tensor S from (14)_2 now depends on s too, i.e., S becomes S_s in (18), where ε(x) := λ² χ_{Ω_s}(x) + α χ_{B_s}(x), λ := λ_D/L is the classical dimensionless Debye length of the PNP system (16), and α = ε_m/ε_f is the dimensionless dielectric permittivity, where ε_m and ε_f are the dielectric permittivities of the solid and liquid phase, respectively. In (18) one recognizes that the physical quantities like the concentrations n_s^+, n_s^− and the electric potential also depend on the scaling parameter s. Hence, the problem (14) now reads in the periodic setting as follows: D_t u_s − div(S_s(u_s) ∇u_s) = I(u_s) in Ω_s, where f_i := [δ_{i1}, δ_{i2}, δ_{i3}]′ for i = 1, 2, 3, δ_{ij} is the Kronecker delta, and I_T^s := ∂Ω_T^s \ (Γ_T^D ∪ Γ_T^N) is the solid-electrolyte interface. From (19) it follows that the flux with respect to u is in general not differentiable. This motivates studying (19) in the sense of weak solutions.
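For orientation, the dimensionless Debye length λ = λ_D/L compares the electrolyte's screening length to the macroscopic sample size. The text does not fix a specific electrolyte or sample size, so the sketch below simply evaluates λ_D with the standard expression for a symmetric 1:1 electrolyte at an assumed concentration and divides by a hypothetical L.

```python
import math

# Physical constants (SI units)
EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB = 1.381e-23     # Boltzmann constant, J/K
E = 1.602e-19      # elementary charge, C
NA = 6.022e23      # Avogadro constant, 1/mol

def debye_length(eps_r=80.0, temperature=298.0, c0_mol_per_m3=1.0):
    """Debye screening length of a symmetric 1:1 electrolyte (illustrative only)."""
    n0 = NA * c0_mol_per_m3                    # ion number density, 1/m^3
    return math.sqrt(eps_r * EPS0 * KB * temperature / (2.0 * n0 * E ** 2))

lambda_D = debye_length()     # about 9.7 nm for a 1 mM solution at room temperature
L = 1e-3                      # hypothetical macroscopic sample size: 1 mm
print(f"lambda_D = {lambda_D * 1e9:.1f} nm, dimensionless lambda = {lambda_D / L:.1e}")
```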
Moreover, the main difficulty and difference of this work is the nonlinear structure, which prevents the material tensor S_s(u_s) from being a strongly elliptic operator. For convenience, we rewrite the problem with the corresponding ionic fluxes and the boundary and initial conditions, where ι = l, r. Our main result relies on the assumption of local thermodynamic equilibrium, which is widely used and generally accepted, 25 where µ_0^r denotes a constant value of the chemical potential of the positive (r = 1) and negative (r = 2) ion densities. Hence, the locally constant potential µ_0^r only assumes different values in different reference cells.
We note that for the classical asymptotic two-scale expansions 9,10 an upscaling is performed in 25, where error estimates between the microscopic periodic formulation and the upscaled equations are also derived.
Remark III.3 (1) Theorem III.2 is an extension of the two-scale convergence results in 15
by the non-classical asymptotic expansion (24) 1 . We note that the expansions (24) are only formal because convergence of such series is a priori not guaranteed and possible boundary layers are neglected.
(2) The effect of the upscaling in the above theorem can best be seen in the change of the material tensor (18), which for the new system (25) takes the effective form (27). A comparison of (27) with (18) clearly motivates the use of the term "material tensor" in the context of porous or composite media.
(3) The effective material tensor (27) can also be considered as a generalized effective, concentration dependent conductivity tensor (as in heat/diffusion equations).
From a rigorous point of view, Lemma III.4 finally guarantees that the second order terms in the asymptotic expansion (24) are locally well-defined.
We now make the formal Ansatz of the asymptotic expansion u_s(t, x) ≈ u_0(t, x, x/s) + s u_1(t, x, x/s) + s² u_2(t, x, x/s) + . . . , with functions u_i(t, x, y) for i = 0, 1, 2, . . . that are Y-periodic in the fast variable y. The above Ansatz is formal because there is no guarantee that the series (33) is finite.
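The bookkeeping behind the O(s^{-2}), O(s^{-1}), O(s^0) hierarchy derived next can be checked mechanically. The sketch below is not the paper's coupled PNP operator; it applies the same two-scale substitution to a scalar toy Laplacian with SymPy and collects the powers of s, which reproduces the familiar structure (the leading-order problem only involves fast-variable derivatives of u_0).

```python
import sympy as sp

x, y, s = sp.symbols("x y s", positive=True)
u0, u1, u2 = (sp.Function(name)(x, y) for name in ("u0", "u1", "u2"))

def D(f):
    """Total x-derivative of f(x, x/s) written on the two-scale variables (x, y)."""
    return sp.diff(f, x) + sp.diff(f, y) / s

u = u0 + s * u1 + s**2 * u2          # truncated two-scale Ansatz
residual = sp.expand(D(D(u)))        # toy operator: second x-derivative of the Ansatz

# Collect the terms at each power of s; the O(s^-2) contribution only
# contains y-derivatives of u0, mirroring the first cell problem.
for power in (-2, -1, 0):
    terms = residual.coeff(s, power)
    print(f"O(s^{power}):", sp.simplify(terms))
```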
(1) Problem for terms of order O(s^{-2}): Inserting (33) into (19)_1 and using (31) and (32) gives a sequence of problems by equating terms with equal power in s. If we use definition (31), then we can rewrite (35)_1 as an equation which is equivalent to the system (37). One recognizes immediately that solvability must first be established for equation (37)_3. This is immediately achieved by Theorem II.3. Moreover, this theorem implies that u_0^3(x, y) is invariant (constant) in y ∈ Y as a solution of (37)_3, i.e., u_0^3 does not depend on y (38). Using the invariance (38) in equations (37)_1 and (37)_2 implies, with Theorem II.3, the additional invariances u_0^1(t, x, y) = u_0^1(t, x) and u_0^2(t, x, y) = u_0^2(t, x).
Let us go over to the next problem in the sequence of equal power in s.
Let us write (40) in a more intuitive form by its single components, i.e., as the O(s^{-1}) system (41). The system (41) is a linear, elliptic second order partial differential equation. Hence, solvability of (41) follows immediately with the Lax-Milgram theorem by starting with problem (41)_3. The fact that u_0 is independent of y, together with the linearity of (41) and the fact that S_0 only contains derivatives in y, motivates making the Ansatz (42) for u_1(t, x, y). We now use (42) and the independence of u_0 of y in order to rewrite (41) as a problem for ξ_j^r for r = 1, 2 and 1 ≤ j ≤ N, given in (43). We point out that under local thermodynamic equilibrium, that means in each reference cell the chemical potential µ_0^r is constant due to the separation of scales induced by the limit ε → 0, see Definition III.1. Hence, the term with the summation on the left-hand side in (43)_1 disappears. System (43) defines the reference cell problems for the porous media corrector functions ξ_j^r for r = 1, 2 and 1 ≤ j ≤ N. Such correctors finally define the effective tensors (26).
Remark IV.1 We point out that the Ansatz (42) is an extension from linear homogenization theory and is canonically chosen to account for the problem's coupled and nonlinear structure. The interpretation of the Ansatz (42) 1 is that oscillations in the microscopic variable of the electrostatic potential dominate the oscillations of the concentration variables.
Step 1: Problem (44): The assumptions of the Lax-Milgram theorem are easily verified since κ(y) is a strongly elliptic matrix, see Definition II.2.
Step 2: Again, solvability follows by Theorem II.3 after verification of F_2^r ∈ (W_♯(Y_s))′. Due to Theorem II.3 and Lemma II.1, it must hold for r = 1, 2 that the compatibility condition (58) is satisfied, which can be written in explicit form. Since (59) represents the effective model, which is well-posed by Lemma III.4, we herewith guarantee that (58) holds.
With representation (42)_2 we can rewrite (57) accordingly. Using (38) allows us to write (60) more intuitively, such that ε_0 := {ε_{ik}^0}_{1≤i,k≤N} is defined accordingly. We can write down effective equations for equations (54) in the same way by using (42)_1.
(4) Derivation of the second order correctors: In order to compute the second order corrector for u_0^3 we use (42)_2 in equation (51)_2. Inserting equation (61) into (68) leads to the problem (69). With equation (69) the right-hand side F_2^3 in (53)_2 can be rewritten accordingly. The same arguments as those for (42) suggest looking for a function u_2^3 of the form (71), where ζ_{kl}^3 is the solution of (72).
Lemma IV.4 There exists a unique ζ_{kl}^3 for 1 ≤ k, l ≤ N that solves equation (72) in the weak sense, that means, ζ_{kl}^3 is the unique solution of the corresponding variational problem.
Proof. The existence and uniqueness is an immediate consequence of Theorem II.3 and Lemma II.1.
The same considerations as those for the derivation of (72) can be applied to (55) (or to (51) in the context of its classical formulation). With (71) and (42) we can rewrite (51) accordingly. We now make a corresponding Ansatz for the functions u_2^r with the indices r = 1, 2 under the same considerations as those for (71), i.e., u_2^r(t, x, y) = Σ_{k,l=1}^{N} ζ_{kl}^r(t, x, y) u_0^r.
After inserting definition (76) into (75) we obtain an equation for the second order corrector functions ζ_{kl}^r, namely (77). In order to guarantee solvability of equation (77), we assume that u_0^r > 0, which holds for initial data u_0^r(0, x) ≥ η > 0. This can be obtained by special test function techniques as applied in 41 to prove nonnegativity of solutions. Existence and uniqueness of the corrector functions defined by equations (72) and (77) is achieved in the following lemma.
Lemma IV.5 Let u_0^r ∈ V(Ω_T), u_0^r(t, x) > 0 for all (t, x) ∈ Ω_T and 1 ≤ r ≤ N, and assume that the reference cells Y are in local thermodynamic equilibrium, see Definition III.1.
Then, there exists a unique ζ r kl for 1 ≤ k, l ≤ N that solves equation (72) in the weak sense.
With the regularity available for ξ, ζ and u_0^r, the continuity of the second term (II) follows. We then estimate the first term (I), which implies the desired continuity.
where D r (t, x) indicates that its t and x dependence originate from v r for r = 1, 2. The choice of v r ∈ V (Ω T ) guarantees that the right-hand side in (85) 1 is in L 2 (Ω). Hence, there exists a unique solution u r ∈ L 2 (0, T ; H 2 0 (Ω)), ∂ t u r ∈ L 2 (0, T ; L 2 (Ω)) by standard parabolic theory. | 2012-09-28T19:23:42.000Z | 2012-09-28T00:00:00.000 | {
"year": 2012,
"sha1": "979a58665a35cf26fee687b456b4e38bb269ddac",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1209.6618",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "979a58665a35cf26fee687b456b4e38bb269ddac",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
250546990 | pes2o/s2orc | v3-fos-license | Concomitant Pulmonary Embolism and Anterior Myocardial Infarction as the Initial Presentation of Antiphospholipid Syndrome
Antiphospholipid antibody syndrome (APLS) is a systemic autoimmune disease characterized by thrombotic events, including arterial, venous, and microvascular complications. Often, patients initially present with either arterial or venous thromboembolism but rarely present simultaneously with arterial and venous thromboembolic events. 1 We report an unusual case of a 21-year-old woman who presented to our hospital with chest pressure secondary to a concomitant pulmonary embolism (PE) and anterior myocardial infarction.
Clinical case
A 21-year-old woman with no significant past medical history, only on oral contraceptive pills, presented to the hospital with complaints of 4 days of progressively worsening substernal chest pressure associated with dizziness and lightheadedness. In the emergency department, she was mildly tachycardic but otherwise hemodynamically stable. An electrocardiogram revealed sinus rhythm with anterior T-wave abnormality (Figure 1A). Initial laboratory testing revealed mild thrombocytopenia and elevated troponin I levels of 9.63 ng/mL. A computed tomography angiography of the chest revealed small nonocclusive emboli in the right lower lobe pulmonary artery (Figure 1B); therefore, venous duplex ultrasound of the lower extremities was performed, which showed no evidence of deep vein thrombosis. Anticoagulation with intravenous heparin was started, and the patient was admitted to the floor with telemetry monitoring. Early morning on day 2 of hospitalization, the patient reported worsening chest pain, and the troponin I levels subsequently trended up to 20.08 ng/mL. A repeat electrocardiogram showed subtle ST-segment elevations, with QS complexes in the anterior leads. A bedside transthoracic echocardiogram revealed hypokinesis of the anterior wall and a severely reduced left ventricular ejection fraction. The patient was transferred to the cardiac catheterization laboratory for emergency coronary angiography.
Coronary angiography showed 100% occlusion of the left anterior descending artery (LAD) at the ostium. The right coronary artery was patent with extensive right-to-left collaterals supplying the LAD. A filling defect was noted in the proximal LAD, consistent with a thrombus. Thrombus aspiration was performed using the Indigo System CAT RX (Penumbra, Inc), followed by the placement of a single 3.0 × 18 mm XIENCE Skypoint (Abbott) drug-eluting stent in the proximal LAD, with resultant TIMI-3 flow. A formal transthoracic echocardiogram demonstrated a left ventricular ejection fraction of 35%-40%, with a large left ventricular thrombus.
Subsequent testing revealed antibody positivity for APLS, with elevated titers of anticardiolipin, lupus anticoagulant, and beta-2-glycoprotein antibodies. The patient was discharged on aspirin, ticagrelor, and warfarin along with high-intensity statin therapy, a low-dose angiotensin-converting enzyme inhibitor, and β-blocker therapy. The patient's oral contraceptive pills were discontinued, and the patient was counseled about the use of an alternative nonhormonal mode of contraception. Approximately 12 weeks after discharge, the patient's outpatient laboratory reports showed persistent elevation of all 3 antibodies, thus establishing the diagnosis of triple-positive APLS.
Discussion
Massive or submassive PEs can cause acute right heart strain and result in a small elevation of troponin levels. However, a high initial troponin level in patients with a nonmassive PE should raise suspicion of acute coronary syndrome, even with a low pretest probability of acute coronary syndrome. 2,3 Therefore, the detection of a high initial troponin level should be followed by serial troponin level testing and electrocardiograms, an echocardiogram, and an ischemic evaluation. Patients with APLS may present with complications of arterial or venous thromboembolism; however, they rarely present with both simultaneously. Myocardial infarction and PE associated with APLS have been extensively reported in the literature; however, there are limited reports regarding the incidence of simultaneous events as the initial presentation of APLS. 4 A high index of suspicion should be maintained for APLS in young patients.
Keywords: antiphospholipid antibody syndrome; collateral circulation; left ventricular thrombus; percutaneous coronary intervention; pulmonary embolism; ST-elevation myocardial infarction.
* Corresponding author: kphadke@kaleidahealth.org (K. Phadke).
Figure 1. (A) Follow-up electrocardiogram demonstrating new QS complexes with subtle ST-segment elevation in the anterior leads. (B) Computed tomography angiography of the chest showing evidence of pulmonary embolism. | 2022-07-15T15:10:54.348Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "76feddd95b34f2d2baee6fad55b8d73653c7ca65",
"oa_license": "CCBYNCND",
"oa_url": "http://www.jscai.org/article/S2772930322003970/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c27c253d56a93708e327ccb3dff086400ffda2c0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
220326720 | pes2o/s2orc | v3-fos-license | Psychomotor speed as a predictor of functional status in older chronic heart failure (CHF) patients attending cardiac rehabilitation
Background The association among psychological and neuropsychological dysfunctions and functional/clinical variables in Chronic Heart Failure (CHF) has been extensively addressed in the literature. However, only a few studies investigated those associations in the older population. Purpose To evaluate the psychological/neuropsychological profile of older CHF patients, to explore the interrelation with clinical/functional variables and to identify potential independent predictors of patients' functional status. Methods This study was conducted with a multi-center observational design. The following assessments were performed: anxiety (Hospital Anxiety and Depression Scale, HADS), depression (Geriatric Depression Scale, GDS), cognitive impairment (Addenbrooke's Cognitive Examination Revised, ACE-R), executive functions (Frontal Assessment Battery, FAB), constructive abilities (Clock Drawing Test, CDT), psychomotor speed and alternated attention (Trail Making Test, TMT-A/B), functional status (6-minute walking test, 6MWT) and clinical variables (New York Heart Association, NYHA; Brain Natriuretic Peptide, BNP; left ventricular ejection fraction, LVEF; left ventricular end diastolic diameter, LVEDD; left ventricular end diastolic volume, LVEDV; tricuspid annular plane systolic excursion, TAPSE). Results 100 CHF patients (mean age: 74.9±7.1 years; mean LVEF: 36.1±13.4) were included in the study. Anxious and depressive symptoms were observed in 16% and 24.5% of patients, respectively. Age was related to TMT-A and CDT (r = 0.49, p<0.001 and r = -0.32, p = 0.001, respectively), Log-BNP was related to the ACE-R-Fluency subtest (r = -0.22, p = 0.034), and 6MWT was related to the ACE-R-Memory subtest and TMT-A (r = 0.24, p = 0.031 and r = -0.32, p = 0.005, respectively). Both anxiety and depression symptoms were related to ACE-R-Total score (r = -0.25, p = 0.013 and r = -0.32, p = 0.002, respectively) and depressive symptoms were related to CDT (r = -0.23, p = 0.024). At multiple regression analysis, Log-BNP and TMT-A were significant and independent predictors of functional status: worse findings on Log-BNP and TMT-A were associated with a shorter distance walked at the 6MWT. Conclusions Psychological and neuropsychological screening, along with the assessment of psychomotor speed (TMT-A), may provide useful information for older CHF patients undergoing cardiac rehabilitation.
Introduction
Chronic heart failure (CHF) affects more than 37 million people in the world and represents one of the diseases with the highest impact on health outcomes. It is a chronic condition associated with high rates of hospitalization, re-admissions, severe disability and high risk of mortality. In developed countries, 1-2% of the adult population suffers from this chronic condition and the rates are even higher, reaching 10%, in people aged 70 years or over [1].
Although many studies have focused on specific somatic symptoms (dyspnea, fatigue and physical exertion), mental health aspects (affective/mood disorders) and cognitive function/impairment, few have investigated the inter-relationship between these factors [2]. Anxiety and depression are reported to be present in 20-30% of CHF patients [3]. The presence of depressive symptoms increases the rates of CHF onset, especially in older patients and those with previous cardiovascular diseases [4,5]. Moreover, anxiety and depressive symptoms are associated with a worsening of primary outcomes in terms of frequent re-hospitalizations, increased healthcare costs and higher risk of mortality [5,6]. Since the effect of anxiety on self-care management can be masked by the stronger influence of depression on self-care management, it is important to consider the wide spectrum of mental comorbidities in patients with CHF [3,7], including cognitive impairment. Indeed, in a recent systematic review and meta-analysis, the odds ratio for cognitive impairment in the CHF population in case-control studies was 1.67 [95% confidence interval (CI) 1.15-2.42] and the prevalence of cognitive impairment in CHF cohorts (n = 26 studies, 4176 participants) was 43% (95% CI 30-55) [8]. The presence of cognitive impairment has a great impact on CHF patients' health status as it contributes to low self-care, poor adherence to clinical prescriptions, increased re-hospitalizations and higher risk of mortality [9][10][11]. In particular, a poorer global cognitive score and dysfunctions in working memory, psychomotor speed, and executive function are significant predictors of mortality [12].
Psychological and neuropsychological dysfunctions and their association with functional and cardiac variables have been extensively reported, but these investigations often consider both younger and older CHF patients without distinguishing them [12][13][14]. Hence, in older (age 65+ years) and oldest-old (80+) CHF patients, these associations deserve to be specifically investigated [15].
Cardiac rehabilitation is a recommended and suitable treatment for CHF patients since it reduces symptoms, decreases disability, increases participation in physical and social activities and improves functional outcomes [16,17]. Psychological and neuropsychological aspects, functional status and clinical variables can be all considered and measured in a cardiac rehabilitation setting [18].
The aims of this multi-center observational study were to evaluate the psychological and neuropsychological profile of older CHF patients undergoing inpatient rehabilitation and to investigate the relationships between neuropsychological and psychological data with clinical and functional ones. Furthermore, a specific investigation was performed to explore which of the considered variables might be independent predictors of functional status.
Participants
All CHF patients aged over 65 years consecutively admitted to the Maugeri Clinical Scientific Institutes IRCCS Cardiac Rehabilitation Departments of Montescano, Camaldoli, Tradate and Lumezzane, Italy, for inpatient cardiac rehabilitation between January and December 2017, were screened for admission. The first part of the recruitment was performed at the Montescano Institute (first six months of the year), followed by Camaldoli, Tradate and Lumezzane Institutes (two months for each Institute).
CHF was defined as: I) signs (e.g. elevated jugular venous pressure, pulmonary crackles and peripheral edema) and symptoms of HF [New York Heart Association (NYHA) functional class II-IV] in the presence of reduced ejection fraction (LVEF <40%); or II) signs and symptoms of HF (e.g. elevated brain natriuretic peptides and significant structural heart disease/diastolic dysfunction) with preserved (LVEF ≥50%) or mid-range ejection fraction (LVEF 40-49%) [1].
The exclusion criteria were: severe clinical conditions (chronic inflammatory diseases, severe and acute respiratory diseases, neoplastic diseases, cerebrovascular diseases), no Italian education, severe visuo-perceptive deficits, low subjective motivation/interest or refusal to undergo the evaluation, severe psychiatric disorders (at medical psychiatric evaluation) and severe cognitive deterioration, evaluated through the Mini-Mental State Examination (MMSE; score ≤18.3) [19], which is included in the Addenbrooke's Cognitive Examination-Revised (ACE-R) [20,21].
Executive and verbal functions. For the executive functions screening, the Frontal Assessment Battery (FAB) was administered [24]. It is a neuropsychological test composed of six sub-tests that explore specific cognitive or behavioral domains related to the frontal lobes such as conceptualization, mental flexibility, motor programming, sensitivity to interference, inhibitory control and environmental autonomy. Specific tests were added to more deeply evaluate other cognitive functions. Both language and executive functions were evaluated with the phonemic fluency test [25], in which subjects were asked to list in the space of one minute all the words that they know starting with the letters "F"-"A"-"S". The semantic fluency test (similar modality but applied to the specific categories of cars, fruits and animals) was administered to evaluate the semantic cognitive residual resources [26]. The Clock Drawing Test (CDT) was administered only in the free-drawn condition with a score range 0-15 [27]. A correct execution of the CDT requires not only intact perceptual abilities but also a correct functioning of the constructive abilities, of verbal comprehension, knowledge and understanding of the numerical system, and abstract thinking. We used the Trail-Making Test (TMT parts A and B) [28] to assess psychomotor speed and alternated attention. In part A, the subject has to connect, in the proper order, twenty-five encircled numbers randomly arranged on a page. Part A requires the use of visual search processes and assesses psychomotor speed. In Part B, the subject has to draw a line to connect alternating numbers and letters, starting with number 1 and letter A, in rising sequence. Part B requires switching/cognitive flexibility and assesses alternated attention. Differently from the other tests in our battery, TMT scores represent the time spent to complete the task; therefore, the higher the scores, the worse is the performance.
The scores of all the aforementioned neuropsychological tests were adjusted for age, sex, and educational level. All the tests have good psychometric properties, are validated in the Italian population, and normative data are available [20,[24][25][26][27][28].
Clinical and functional variables. New York Heart Association (NYHA), Brain Natriuretic Peptide (BNP), left ventricular ejection fraction (LVEF), left ventricular end diastolic diameter (LVEDD), left ventricular end diastolic volume (LVEDV), tricuspid annular plane systolic excursion (TAPSE) are clinical variables considered to describe CHF sample. LVEF and Log-BNP were also considered in statistical analysis for their clinical predictive value [1].
The 6-minute walking test (6MWT) is considered an adequate submaximal test and is commonly used to measure the functional exercise capacity in individuals with CHF. It is a self-paced exercise test that entails measurement of the distance walked over a span of 6 minutes, which is better tolerated and more reflective of daily activities than other maximal exercise tests [17].
Data collection
All patients were admitted to an inpatient rehabilitation program comprehensive of medical history and physical examination, evaluation of laboratory parameters, ECG, chest X-ray, color-Doppler echocardiogram, medical therapy optimization, exercise testing (6 minutes walking test, 6MWT), educational sessions, exercise training (cycle-ergometer and/or treadmill, leg ergometer, breathing exercises), psychological counselling, and metabolic evaluation with a personalized diet when needed.
The patients signed an informed consent for all procedures and explanations concerning the study. The study was approved by the Institutional Review Board and Central Ethics Committee of the ICS Maugeri SpA SB (CEC) (approval number: CEC N.927, 27/06/2013).
During the first week of admission, all enrolled patients filled in a socio-demographic form and then underwent psychological and neuropsychological assessment. The psychological and neuropsychological research assessment was performed in a dedicated room of the Psychology Unit of each Institute and divided in two sessions: 1) ACE-R, HADS and GDS and 2) FAB, CDT, TMT-A/B, semantic and word fluency that were administered the subsequent day in order to avoid an interference effect. Trained psychologists evaluated the patients according to standardized administration and scoring procedures. The patients were supported throughout the testing period (30' each session) to maintain motivation and to elicit the optimal level of performance; a break was always allowed if necessary. Furthermore, patients underwent a clinical psychological treatment: they received the results of their research assessment and they underwent a psychological interview according to the rehabilitation protocol and the specific needs of the patient.
Statistical analysis
A sample size of 67 patients was deemed to be necessary to disclose, with a power equal to 80% and a Type I error of 0.05, a difference between an expected Pearson correlation coefficient equal to 0.3 and a correlation coefficient equal to 0 assessing the relationship between cognitive screening (ACE-R, FAB, CDT, TMT, word and semantic fluency), psychological (HADS-A, GDS) and clinical/functional variables (log BNP, LVEF, age, 6MWT). We anticipated that during the study period (the year 2017) we would be able to enroll many more patients among all those hospitalized in the ICS Maugeri cardiac rehabilitative department.
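For readers who want to check this figure, the calculation can be sketched with the usual Fisher z-transformation approach. Whether it reproduces exactly 67 depends on rounding and on whether a one- or two-sided test is assumed, so the Python script below is only an illustrative approximation, not the authors' actual computation.

# Sketch: sample size needed to detect a Pearson correlation r = 0.3 (vs. r = 0)
# via the Fisher z-transformation. Alpha and power follow the text; the one- vs
# two-sided choice is an assumption made only to illustrate the formula.
import numpy as np
from scipy.stats import norm

def sample_size_correlation(r, alpha=0.05, power=0.80, two_sided=True):
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    c = 0.5 * np.log((1 + r) / (1 - r))            # Fisher z-transform of r
    return int(np.ceil(((z_alpha + z_beta) / c) ** 2 + 3))

print(sample_size_correlation(0.3, two_sided=True))    # ~85
print(sample_size_correlation(0.3, two_sided=False))   # ~68, close to the reported 67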
Descriptive statistics are reported as mean ± standard deviation (SD) for continuous variables and as number (percentage) for discrete variables.
The association between pairs of variables (age, illness duration, Log-BNP, LVEF, 6MWT and ACE-R total score and subtests, FAB, TMT-A/B, CDT, word fluency and semantic fluency adjusted scores) was assessed by Pearson correlation coefficient r. The associations between functional status as assessed by the 6MWT and clinical (LVEF, Log-BNP), demographic (age, gender), neuropsychological (ACE-R Memory, ACE-R Fluency, FAB and TMT-A adjusted scores) and psychosocial variables (HADS-A and GDS) was assessed by multiple regression analysis. Multicollinearity was checked computing the variance inflation factor (VIF). Variables showing VIF value greater than 10 were considered potentially problematic.
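A multicollinearity check of this kind can be sketched as follows. The column names are placeholders standing in for the study variables, and statsmodels is used only because it offers a convenient variance_inflation_factor routine; it is not the package used in the study (the analyses were run in SAS).

# Sketch: variance inflation factors for a set of regression predictors.
# 'df' is a hypothetical pandas DataFrame with one column per predictor.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df: pd.DataFrame) -> pd.Series:
    X = sm.add_constant(df)                        # include an intercept, as in the regression
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return pd.Series(vifs).sort_values(ascending=False)

# predictors = df[["age", "LVEF", "log_BNP", "TMT_A", "FAB", "ACE_R_memory"]]  # hypothetical names
# print(vif_table(predictors))                     # values above 10 would flag problematic collinearity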
All statistical tests were two-tailed and statistical significance was set at p<0.05. All analyses were carried out using the SAS/STAT statistical package, release 9.2 (SAS Institute Inc., Cary, NC, USA).
Results
In this multi-center cross-sectional observational study, 123 CHF patients aged ≥ 65 years, in NYHA class II-IV, consecutively admitted for inpatient rehabilitation, were screened for inclusion. Of these, 23 patients were excluded for the following reasons: clinical exacerbation during hospitalization (n = 3), no Italian education (n = 3), visuo-perceptive deficits (n = 3), low subjective motivation or refusal to undergo the evaluation (n = 7), severe psychiatric diseases (n = 1), and severe cognitive impairment (MMSE ≤ 18.3) (n = 6). The final study population consisted of 100 patients. Table 1 reports the demographic, psychosocial, functional and clinical characteristics of the study sample. Moderate/severe anxiety symptoms were present in 16% of patients, and moderate/severe depressive symptoms in 24.5% of patients. Table 2 shows the frequency distribution of the neuropsychological assessment results. For each neuropsychological test, the results were divided into impaired, borderline and normal scores. Few patients obtained impaired (4%) or borderline (11%) scores at cognitive screening (ACE-R), while higher percentages of impaired scores can be found in the executive functions (FAB) scores (56%), in the psychomotor speed (TMT-A) scores (24.2%) and in the alternated attention (TMT-B) scores (51.5%).
At multiple regression analysis, among the set of variables considered potential predictors of functional status as assessed by the 6MWT (age, gender, ACE-R-Memory, ACE-R-Fluency, FAB, TMT-A, depression, anxiety, Log-BNP and LVEF), Log-BNP and TMT-A were identified as significant and independent predictors of functional status (Table 3). Multicollinearity, as assessed by the variance inflation factor was not a problem, with values ranging from 1.33 for ACE-R-F to 1.74 for TMT-A. Holding all other variables constant, worse values for the clinical variable Log-BNP (parameter estimate -41.1, p = 0.0024) and a worse performance on the task of psychomotor speed, assessed by the TMT-A (parameter estimate -0.89, p = 0.025), were associated with shorter distance walked at the 6MWT.
Discussion
Our study provides useful information concerning the psychological and cognitive screening in older and oldest-old CHF patients and an in-depth analysis of the cognitive impairments through a specific neuropsychological evaluation. In addition, it analyzed the relationship between cognitive performance and clinical, functional and psychological variables and it investigated which of the psychological, neuropsychological and clinical variables were independent predictors of the functional status.
Our psychological screening highlighted the presence of moderate/severe anxiety (16%) and depressive (24.5%) symptoms, in accordance with the existing data in the literature [3]. A brief psychological screening tool can help clinicians identify anxiety and depressive symptoms that might require specific counseling, also in older CHF patients. These data emphasize the need for a psychological intervention to treat emotional distress and support the process of adaptation related to suffering from a chronic disease [29].
For our neuropsychological assessment, we chose the ACE-R as a screening tool, because it enables a deeper analysis of global and specific cognitive functioning [30]. Particularly interesting were the data concerning the ACE-R subtests. In fact, the memory domain (including tasks of anterograde, retrograde memory, recall and recognition) and verbal fluency domain (phonological and semantic) showed a significant general decline due to deficits in executive functions and memory abilities, in line with the literature [31,32]. The impaired results of the specific screening test for executive functions (56%) were in line with our findings from the ACE-R subtests: marked difficulties emerged that can be linked to procedural dysfunction, such as difficulties in conceptualization, mental flexibility, motor programming and sensitivity to interference. It is important to highlight that the well-known relations among neuropsychological, psychological and clinical variables were not considered because they were not the focus of the present study.
ACE-R total and CDT adjusted scores both have a negative relation with depression scores, meaning that patients with worse performance at neuropsychological tests presented a higher presence of depression symptoms. These data highlight that our sample's results are in line with the existing literature about the prevalence of emotional distress in people with cognitive impairment [33,34]. In fact, affective and emotional dysregulation are common in preclinical and prodromal dementia syndromes, often revealing in advance neurodegenerative changes and a progressive cognitive decline [35]. Other relations frequently described in literature can be found between cognitive impairment and clinical variables [36] between psychological/ neuropsychological factors and clinical variables [13] or between psychological/neuropsychological factors and functional variables [37,38].
Also in our study we found significant relations between ACE-R Fluency and Log-BNP and other interesting associations between 6MWT and both ACE-R Memory subtest and psychomotor speed (Fig 1). This link between the distance covered on the 6MWT, which decreases with age, could reflect the influence that a CHF condition can exert on attention, memory and motor efficiency of CHF patients, synthesizing a general cognitive-motor slowing [12,38]. This hypothesis finds further support in our regression model, where Log-BNP and psychomotor speed independently predicted distance covered at the 6MWT. This might indicate that clinically severe CHF patients with decreased psychomotor speed also have functional complications in the context of a general cognitive-motor slowing.
In a multidisciplinary and rehabilitative setting, this fact could be extremely challenging. Further research is needed to find a possible algorithm to predict patients' functional status, starting from both clinical status and the result of psychomotor speed evaluation (through a test, the TMT-A, that can be administered also at the patients' bedside) in older CHF patients who cannot undergo a 6MWT evaluation. Moreover, this statement acquires greater importance considering the significant impact that psychomotor speed has on CHF patients' selfcare [39]. An accurate psychological and neuropsychological screening is time saving and it requires a brief specific training. This kind of screening could play a pivotal role as it allows to implement an effective multicomponent and tailored rehabilitative intervention [18,29,40,41].
Strengths and limitations
Considering the strengths, it is a multi-center study, providing the first Italian ACE-R data on older and oldest old CHF patients. Moreover, it considers many different variables and their relationships by comparing psychological and neuropsychological aspects to clinical and functional ones and it provides interesting cues for future research on the results concerning psychomotor speed and functional status. As to limitations, the neuropsychological in-depth evaluation focused only on executive and attentional aspects, whereas a wide spectrum evaluation could be useful to identify the possible presence of other impaired cognitive domains in older CHF patients. Secondly, because of our small sample size, our conclusions may not be generalized to all CHF patients, in particular to CHF outpatients.
Conclusions
Based on our findings, more attention should be paid to the link between cognitive and emotional variables, functional status and CHF clinical data. Concerning the psychosocial aspects, a more in-depth psychological interview is recommended to further investigate the evidence from the screening tests in a wider psychosocial perspective and to support patients who have emotional distress. Consistent with our findings, besides cognitive screening, an evaluation of psychomotor speed could be useful to further investigate and to predict CHF older patients' functional status when they are unable to perform the dedicated functional test (6MWT).
In conclusion, a psychological and neuropsychological screening can be useful means to obtain individualized information about the patient's cognitive and emotional status and, where necessary, to guide the choice of subsequent in-depth investigations, in order to better tailor rehabilitative intervention.
Acknowledgments
We would like to thank Alessandra Ianni of the Psychology Unit of Tradate (VA) for her valuable collaboration. | 2020-07-04T13:05:42.002Z | 2020-07-02T00:00:00.000 | {
"year": 2020,
"sha1": "ee476fe87ffd89d7d77d9d42a13eeb970f777f2e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0235570&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1ff3dbe28c454903fd5be12ff51839a2caa4b243",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233793578 | pes2o/s2orc | v3-fos-license | The Development of a Four Tier-Based Diagnostic Test Diagnostic Assessment on Science Concept Course
A learning success can be seen from the material mastery and the conceptual understanding attained by the students. One way to find out how students' misconceptions occur is to develop a diagnostic assessment. This research aims to develop, and to describe the validity of, a four-tier diagnostic test for diagnosing misconceptions among students of the Primary School Teacher Education program of Universitas Muria Kudus in the Science Concept course. The research applies the Research and Development design and consists of a preliminary study, diagnostic test development, and diagnostic test validation. This article discusses the development and validation results of the four-tier diagnostic test instrument. The developed instrument consisted of 40 question items covering Physics and Biology materials. The validation stage involved content and face validities. The expert judgment results showed an average score of 93.09, which falls in the very valid category, so the four-tier diagnostic test was valid and could be implemented. The reliability calculation obtained an r-count of 0.698 against an r-table value of 0.514; since the r-count exceeds the r-table value, the instrument is reliable. The instrument is therefore valid and reliable and can be used with Primary School Teacher Education students for diagnosing misconceptions in the Science Concept course. We thank the Faculty of Universitas Muria Kudus for allowing us to conduct this research, and the executive team for their assistance.
Introduction
A learning success can be seen from the material mastery and the conceptual understanding attained by the students. As educator candidates, the students of the Primary School Teacher Education program must internalize conceptual understanding. Identifying students' misconceptions is important, because they are primary school teacher candidates and will pass their conceptual understanding on to the next generations. Mastery of science conceptual understanding is the success indicator of learning in the Science Concept course. Based on observation in the follow-up course, Science Application, the science conceptual understanding of the Primary School Teacher Education students was still low. Several students had difficulties in applying science concepts to their task of making manipulative props for primary school learners. The students had difficulties in explaining and connecting the material concepts and their implementation as the basis of the manipulative prop designs. This indicated that most students had difficulties in learning science materials, and the researchers assumed that several students had misconceptions or did not understand the concepts at all. Sometimes the students could not provide a correct explanation of a science concept, especially in Biology or Physics. Preliminary research on conceptual understanding had been carried out [1]. It found that the experimental group's conceptual understanding was better than the control group's, as shown by the t-test and N-gain results. The experimental group, taught by PjBL during the Science Application course, had an N-gain score in the moderate category.
On the other hand, the control group, taught by direct learning, had an N-gain score in the low category. This showed that several students' conceptual understanding remained problematic when no specific intervention was applied. It is supported by the findings of Fakhriyah et al. [2], which showed that 66.2% of the Primary School Teacher Education students' scientific literacy skills were at the nominal level, while 33.8% of students were at the functional level. The data showed that the students at the nominal level could connect science with other disciplines and could write scientific terms, yet they were found to still hold misconceptions or incorrect concepts.
On the other hand, 33.8% of students remembered the theory and explained the concept correctly, but they had limited concept and difficulties to connect the concept to their arguments. A lack of conceptual mastery consistently will influence students' further learning process effectiveness [3]. The students construct concepts gradually. When it is inappropriately constructed; or it deviates from the original concept, then students will have difficulties to construct further concepts. Therefore, it is important to find out their difficulties in understanding the concepts they have already known and understood. Then, it could be continued by analysis and formulating solution. Initial identification of student misconceptions based on cluster analysis shows that 38.1% of students had misconceptions [4].
Misconceptions arise from a mismatch between the explanation a learner has internalized and the concepts agreed upon by experts. Daily life experience becomes the learners' basis for constructing concepts according to their own rationales, and these rationales need to be verified. A misconception persists consistently in the learner's mind when interpreting a concept or a fact [5]. Tayubi [6] argues that when the intuition constructed during learning is incorrect, it is difficult to correct, because it consistently allows the incorrect concept to serve as the learner's basis. When such a misconception is left unaddressed, it becomes a mistake that is passed on, and it also hinders students from being creative. This is strengthened by Keshavaraz [7], who argues that a misconception is an individual's understanding of a concept that is inconsistent with expert theories. When misconceptions are left unaddressed, they hinder students' learning achievements. Many factors influence misconceptions, such as the students' initial concepts, teachers, the environment, language, textbooks, and reading sources [8], [9], [10], [11].
Many applicable ways could be applied to measure students' misconceptions. One of them is a diagnostic test. A diagnostic test was an alternative to measure the students' misconceptions on Science Concept course. It was applied at the beginning and the ending of learning. It functioned as a standard measurement worked by the students. It could also provide accurate descriptions about the experienced misconception on certain materials. The students' mistakes while answering these diagnostic questions would be the basis of their lack of understanding of certain materials. It could also be used as the basis of their mindset in sharing the responses of the incorrect answers [12].
The development of diagnostic test instruments has been promoted by other researchers, using two-, three-, and four-tier multiple-choice formats, each with different strengths and weaknesses. Chandrasegaran et al. [13] applied a two-tier diagnostic test to assess and identify the scientific conceptions of Taiwanese learners. Tan et al. [14] applied it to assess Singaporean learners' conceptual understanding of ionization energy, while Tsui and Treagust [12] applied it to evaluate teachers' arguments in the field of genetics. Caleon and Subramaniam [15] developed and applied a three-tier diagnostic test to assess Singaporean learners' conceptual understanding of waves, and Cetin-Dindar and Geban [16] used it to assess senior high school learners' concepts of acids and bases. The current research develops a four-tier diagnostic test. The four-tier test is a development of the three-tier diagnostic test in which tiers recording the students' levels of confidence are added; Caleon and Subramaniam [15] state that the level of confidence is expressed on a scale from one to six. The four-tier test has several advantages: 1) lecturers can distinguish the levels of confidence attached to the learners' answers and thereby investigate the strength and weakness of the students' concepts, 2) lecturers can better diagnose the misconceptions experienced by the students, 3) lecturers can determine which materials require further discussion, and 4) lecturers can use it as a guide for designing learning that reduces the students' misconceptions. A diagnostic test is a test to accurately find out and confirm the weaknesses and strengths of learners in certain courses [3]. Heretofore, misconceptions have to be measured with an appropriate instrument. The diagnostic test was developed to identify the weaknesses and strengths of students in understanding conceptual science materials, and its implementation is intended to improve the subsequent learning process and motivate students to learn [17], [18]. Therefore, the diagnostic test is expected to describe the students' skills and to determine in which concepts the students have misconceptions.
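To make the four-tier idea concrete, the sketch below shows how a single item could be scored once the answer, the reason, and the two confidence ratings are known. The decision rules are a convention commonly used in the four-tier literature and are assumed here for illustration; they are not necessarily the exact rubric applied in this study.

# Sketch: classifying a student's response on one four-tier item.
# Tiers: answer, confidence in the answer, reason, confidence in the reason.
# The rule set is a common convention, assumed only for illustration.
def classify_four_tier(answer_ok: bool, sure_answer: bool,
                       reason_ok: bool, sure_reason: bool) -> str:
    confident = sure_answer and sure_reason
    if answer_ok and reason_ok:
        return "understands the concept" if confident else "lucky guess / partial understanding"
    if not confident:
        return "lack of knowledge"
    return "misconception"          # wrong answer and/or reason, held with confidence

print(classify_four_tier(False, True, False, True))   # -> misconception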
Therefore, it needs a diagnostic test development which is capable of measuring the students' misconceptions on science concept course. This development could provide suggestions to improve and to overcome any students' misconceptions or incorrect conceptions. By developing a four-tier diagnostic test, the students' science conceptual understanding would be facilitated and could be measured. This research aims to develop and describe the instrument validity of the four-tier diagnostic test for diagnosing misconceptions by the Primary School Teacher Education program of Universitas Muria Kudus on Science Concept course.
Method
This research applies the Research and Development design. It consists of a preliminary study, diagnostic test development, and diagnostic test validation. It is in line with Research and Development design characteristics within an educational context. It functions as a method that works systematically to a problem solution. Then, this problem solution is realized and tested [19]. The preliminary stage consisted of unachieved learning target identification. Then, it was continued by determining the misleading concept sources. From the preliminary study, the researcher determined the main discussion that had misconceptions and incorrect understandings. Then, in the development stage, the question forms were arranged and applied by using a reasoned-multiple choice. It was also complemented by levels of confidence to answer or express the reasons. During providing the reasons, it was entailed by levels of confidence. This matter revealed that the multiple-choice instrument could reveal higher thinking skill levels due to broader mastery level varieties [20]. The type of multiple choice questions is able to get students to answer questions carefully because there are choices where the choices contain distractors [21].
The next step dealt with diagnostic test question rubric arrangements, question writing, and question reviews. The questions were completed by assessment criteria and direction to work on them. It was strengthened by Putri et al. [22]. They found that the test question arrangement required detailed instruction by working on them. The developed diagnostic test consisted of 40 items. They were divided into 20 Biology and 20 Physics content items. All of them were taught in science concept course. These could be seen in Table 1.
Table 1.
Theme | Question Number | Bloom Taxonomy Domains
The natural sustainability | 9, 10, 11 | C3, C4, C5
The human body system (health, nutrition, digestive system, respiratory system, blood circulation system, nervous system, and movement system) | 12 |

After developing the diagnostic test instrument, its quality, appropriateness, and reliability were analyzed. Therefore, a validation sheet was arranged and given to the experts. The validation covered content and construct validity. Three experts validated the instrument: two Physics experts and a Biology expert. Besides that, at this stage the question item analysis aimed to measure the reliability, the discrimination power, and the difficulty level of the items.
Finding and Discussion
The test developed and arranged here can be applied to measure the misconceptions experienced by the students on science concepts, once the test has been validated by experts and its reliability established through quantitative calculation. The validation results came from three material experts in biology and physics. This is strengthened by Siswanto [23], who revealed that the applicable criteria for validation require two independent groups or individuals to construct a test using the same specifications; thus, this research was in line with the content validity criteria. The validation sheet covered content clarity and accuracy (based on the given indicators), grammar, relevance, communicative formulation of the question sentences, and clear instructions for working on the items. The experts assessed the content validity of the developed diagnostic test items using the validation sheet. The assessment used a rating scale: a score of 1 if the question is not suitable and needs overall improvement, 2 if the question is fairly appropriate but needs a lot of improvement, 3 if the question is appropriate but needs a little improvement, and 4 if the question is suitable without any improvement and can be used for the research. The data were then analyzed descriptively based on the criteria in Table 2. The validation results showed that the diagnostic test instrument was valid, as shown in Table 3; thus, the instrument could be used to measure the misconceptions experienced by the students. The instrument was revised based on the experts' comments. The revisions for the questions were: a) the scientific naming and writing system for questions containing scientific names, which had to follow the binomial nomenclature writing system; b) the length of the options presented in the multiple choice, which should take the allotted time into account; c) the need to separate each topic of material so that misconceptions could be clearly measured; d) the causes of misconceptions, which should also be traced by collecting additional data; and e) the need to add distractors to the answer choices based on material that students do not understand. The characteristics of the developed test instrument judged valid by the experts were: 1) the test questions had been developed based on scientific literacy aspects; 2) the scope of the material content formed the basis of a broader arrangement of question items; and 3) the Google Form version of the test sheet already included directions for taking the test.
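As an illustration of how such expert ratings can be turned into a validity figure, the short sketch below averages the 1-4 item scores into a percentage and assigns a category. The aggregation formula and the category cut-offs are assumptions (Table 2 is not reproduced here), so the numbers only indicate the procedure, not the actual expert data.

# Sketch: converting expert ratings (1-4 per item) into a validity percentage.
# The cut-offs below are assumed category boundaries, not those of Table 2.
def validity_percentage(ratings):
    """ratings: per-item scores from the experts on the 1-4 scale."""
    return 100.0 * sum(ratings) / (4 * len(ratings))

def validity_category(pct):
    if pct > 85: return "very valid"
    if pct > 70: return "valid"
    if pct > 50: return "fairly valid"
    return "not valid"

scores = [4, 4, 3, 4, 4, 3, 4, 4]                 # hypothetical ratings for 8 items
pct = validity_percentage(scores)
print(round(pct, 2), validity_category(pct))      # e.g. 93.75 very valid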
From the experts' suggestions, the instrument was revised and its question items, dealing with Biology aspects that had not been in line with the principles of the scientific naming system, were adjusted. Indrawan [24] argues that all Biology experts in the world have agreed with the flora or fauna naming standard system. Their names are written in Latin language or scientific names. Thus, all scientific names had to be adjusted. Tsalasatunnisa [25] strengthened that binomial nomenclature could mediate students to understand all organisms in the world.
Besides, dealing with multiple-choice options, a multiple-choice question should only have equal phrase lengths of its options. It is to avoid probability for students to merely guess the answers. Thus, it will make them thinking of the answers based on their knowledge and understanding. Alternative parallel answers, same-length answers, logical distractors are practical requirements in compiling multiple choice questions [26]. The quality of a question is also determined by the functioning of the distractors. This distractor contributes to the distinguishing power of questions and also the difficulty level of test because it can distinguish between high-ability students and low-ability students [27], [28], [29]. This distractor can also be useful for exploring student misconceptions [30].
In the developed test instrument, no measurement was restricted to a single material topic or discussion; the reason was to map the distribution of material understood by the students so that any misconception could be noticed. Even so, this test instrument could differentiate the students' skills and understanding. This is in line with [31], who found that excellent test items should be able to distinguish the students who have mastered certain materials from those who have not.
The revised test instruments were administered to a group of students who already had background knowledge of science concepts. The validation was continued with a reliability analysis using Cronbach's Alpha and with an analysis of discrimination power. The discrimination power was measured from the proportions of correct and incorrect answers of the students, while the difficulty levels of the questions were analyzed by comparing the students' correct answers with the total scores obtained. Based on the question item analysis, the questions were reliable, so they could be applied. After carrying out the development stages and ensuring the instrument's validity and reliability, the instrument could be used with students to measure their misconceptions. Instrument validation is important to obtain standardized and reliable instruments. This is in line with Siswanto [23], who stated that question items with content validity direct students to demonstrate the skills and competences required by the learning objectives. A good test has a balanced level of difficulty, and the distinguishing power of the questions is very important for separating students with high and low abilities [32]. The more difficult a question is, the greater its ability to differentiate between students with high and low abilities [33]. Fariyani et al. [34] argue that it is important to develop evaluative instruments that can detect misconceptions; if students have misconceptions and remain in them, it can hinder them from studying the subsequent materials.
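The item-analysis quantities mentioned here (Cronbach's Alpha, item difficulty, and discrimination power) can be computed along the following lines. The response matrix is synthetic toy data used only to make the sketch runnable; the formulas are the standard classical-test-theory ones, not code taken from the study.

# Sketch: classical item analysis for dichotomously scored (0/1) responses.
import numpy as np

def cronbach_alpha(X):
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def difficulty(X):
    return np.asarray(X).mean(axis=0)             # proportion of correct answers per item

def discrimination(X, top=0.27):
    X = np.asarray(X, dtype=float)
    order = np.argsort(X.sum(axis=1))             # rank students by total score
    n = max(1, int(round(top * X.shape[0])))
    low, high = X[order[:n]], X[order[-n:]]
    return high.mean(axis=0) - low.mean(axis=0)   # upper minus lower group proportions

rng = np.random.default_rng(0)
responses = (rng.random((30, 40)) > 0.4).astype(int)    # toy data: 30 students, 40 items
print(round(cronbach_alpha(responses), 3))              # will be low for random toy data
print(difficulty(responses)[:5], discrimination(responses)[:5])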
From the content and construct validities, this instrument had been valid and reliable to use. The developed instrument had met the characteristics demanded by [35], started from 1) test design to detect learners' difficulties, 2) the test development based on misleading sources or possible difficulties, and 3) the existence of reason provision to avoid guessing habits.
This test development is important for detecting students' misconceptions. The four-tier diagnostic test is able to reveal misconceptions in more detail because each of the student's choices is followed by a level of confidence, which keeps students from merely speculating; such a four-tier diagnostic test can measure misconceptions in more detail [36]. Students who hold misconceptions have trouble accepting new knowledge [37]. If such misconceptions are allowed to persist, they can further mislead the students, who will probably assume that the mistaken concepts are the correct ones and will tend to apply their prior concepts rather than the newer concepts they have been taught. Therefore, it is important to find out whether the students hold misconceptions and in which parts of the material they occur, so that teachers can follow up and suppress these misconceptions.
Conclusion
The four-tier diagnostic test instrument was developed through a preliminary study, a development stage, and validation. The four-tier diagnostic instrument consists of four tiers: the answer options, the level of confidence in the answer, the reason options, and the level of confidence in the chosen reason. The test instrument has been deemed valid and reliable and is ready to be applied to measure the misconceptions of Primary School Teacher Education students. | 2021-05-07T00:04:01.452Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "c7d2980b5204dc01569553118b326c5590c10d2c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1842/1/012069",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "051d6187d89279d5989d960e301de1e8a0c2ad40",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
16383090 | pes2o/s2orc | v3-fos-license | The Mystery of Flavor
After outlining some of the issues surrounding the flavor problem, I present three speculative ideas on the origin of families. In turn, families are conjectured to arise from an underlying preon dynamics; from random dynamics at very short distances; or as a result of compactification in higher-dimensional theories. Examples and limitations of each of these speculative scenarios are discussed. The twin roles that family symmetries and GUT's can have on the spectrum of quarks and leptons is emphasized, along with the dominant role that the top mass is likely to play in the dynamics of mass generation.
I THE QUESTION OF FLAVOR
Flavor is an old problem. I. I. Rabi's famous question about the muon: "who ordered that?" has now been replaced by an equally difficult question to answer: "why do we have three families of quarks and leptons?" Although qualitatively we understand the issues connected to flavor a lot better now, quantitatively we are as puzzled as when the muon was discovered.
When thinking of flavor, it is useful to consider the Standard Model Lagrangian in a sequence of steps. At the roughest level, neglecting both gauge and Yukawa interactions, the Standard Model Lagrangian L_0 = L_SM(g_i = 0; Γ_ij = 0) has a U(48) global symmetry corresponding to the freedom of being able to interchange any of the 16 fermions of the 3 families of quarks and leptons with one another. If we turn on the gauge interactions, the Lagrangian L_1 = L_SM(g_i ≠ 0; Γ_ij = 0) has a much more restricted symmetry [U(3)]^6, corresponding to interchanging fermions of a given type (e.g. the (u, d)_L doublet) from one family to the other. When also the Yukawa interactions are turned on, L_2 = L_SM(g_i ≠ 0; Γ_ij ≠ 0), the only remaining symmetry of the Lagrangian is U(1)_B × U(1)_L. In fact, because of the chiral anomaly [1], at the quantum level the symmetry of L_2 is just U(1)_{B−L}.
The above classification scheme serves to emphasize that there are really three distinct flavor problems. There is a matter problem , a family problem and a mass problem. The first of these problems is simply that of understanding the origin of the different species of quarks and leptons (i.e. why does one have a ν c L and a u c L state?). The second problem is related to the triplication of the quarks and leptons. What physics forces such a triplication? Finally, the last problem is related to understanding the origin of the observed peculiar mass pattern of the known fermions.
The usual approach when thinking about flavor is to try to decouple the above three problems from one another. Thus, for example, one assumes the existence of the quarks and leptons in the Standard Model and asks for the physics behind the replication of families. Although it is difficult to argue cogently on this point, it is certainly true in the examples which we will discuss that the matter problem seems to be unrelated to the question of family replication. Indeed, quite often one also assumes the reverse, namely, that the family replication question is independent of the types of quarks and leptons one has. In fact, it is possible that there is other matter besides the known quarks and leptons and that this matter is also replicated. Certainly, even in the minimal Standard Model there is other matter besides the quarks and leptons, connected to the symmetry breaking sector. This raises a host of questions including that of possible family replication of the ordinary Higgs doublet. One knows, empirically, that this cannot happen if one is to avoid flavor changing neutral currents (FCNC) [2]. However, some replication is needed if there is supersymmetry, but the two different Higgs doublets needed in supersymmetry are connected with different quark charges and need not replicate as families.
The above remarks suggests that there are some perils associated with trying to seek the origin for family replication independently from that of the quarks and leptons themselves. Nevertheless, that is the approach usually taken and the one I will follow here. Similarly, one also usually tries to disconnect the problem of mass from that of matter and family. That is, one generally assumes the existence of the three observed families of quarks and leptons, and then tries to postulate (approximate) symmetries of the mass matrices for quarks and leptons which will give interrelations among the masses and mixing parameters for some of these states.
This approach usually involves some kind of family symmetry and is sensible provided that: i) There is some misalignment between the mass matrix basis and the gauge interaction basis for the quarks and leptons. Only through such a misalignment will there result a nontrivial mixing matrix: V CKM = 1.
ii) The family symmetries of the mass matrices are broken (otherwise V CKM ≡ 1) either explicitly or spontaneously. Furthermore, if the break-ing is spontaneous, it must occur at a sufficiently high scale to have escaped detection so far.
Although the origin of flavor remains a mystery, I want to discuss here three speculative ideas for the origin of families. These ideas are realized up to now only in incomplete ways, in what amount essentially to toy models. Thus, for instance, the issue of family generation is in general disconnected from the question of SU(2) × U(1) breaking and, often, also from trying to explicitly calculate the Yukawa couplings. As a result, in all of these attempts at trying to understand flavor, the question of mass is approached from a much more phenomenological viewpoint. One guesses certain family or GUT symmetries, and their possible patterns of breaking, and then one checks out these guesses by testing their predictions experimentally. In all of these considerations, the top mass, because it is the dominant mass in the spectrum, plays a fundamental role.
In my lectures [3], I will begin by describing three speculative ideas for the origin of families. Specifically, I will consider in turn the generation of families dynamically; through short distance chaotic dynamics; and as a result of geometry. After this speculative tour, I will discuss briefly the issue of mass generation. In particular, I will illustrate the twin roles that family symmetries and GUTs can have for the spectrum of quarks and leptons. I will conclude by commenting on the profound role that the top mass is likely to have on the detailed dynamics of mass generation.
II GENERATING FAMILIES DYNAMICALLY
The underlying idea behind this approach to the flavor problem is that families of quarks and leptons result because they are themselves composites of yet more fundamental ingredients: preons. There is a nice isotope analogy [4] which serves to illustrate this point. Think of the three isotopes of Hydrogen as three distinct families. Just like the families of quarks and leptons, all three isotopes have the same interactions, their chemistry being determined by the electromagnetic interactions of the proton. Deuterium and tritium, however, have different masses from the proton because they have, respectively, 1 and 2 neutrons. Of course, the analogy is not perfect since ¹H and ³H are fermions and ²H is a boson! Nevertheless, it is tempting to suppose that the 3 families of quarks and leptons, just like the Hydrogen isotopes, result from the presence of different "neutral" constituents.
I will illustrate how to generate families dynamically by using as an example some recent work of Kaplan, Lepeintre and Schmaltz [4]. By using essentially the isotope analogy, these authors constructed an interesting toy model of flavor. Their simplest toy model is based on an underlying supersymmetric gauge theory based on the symplectic group Sp(6). The fundamental constituents in this model are 6 preons Q_α transforming according to the fundamental representation of Sp(6) and one preon A_αβ transforming according to the rank-2 antisymmetric representation. Such a theory has three families of bound states distinguished by their A_αβ content, plus a pair of (neutral) exotic states. To wit, the bound states of the model are the 15 flavor states

F^(n)_[ij] = Q_i A^n Q_j ,  n = 0, 1, 2 ,   (1)

plus the two neutral exotic states

T_2 = A^2 ,  T_3 = A^3 .   (2)

The six Q_α preons act as the protons in the isotope analogy. In principle, one could imagine having the SU(3) × SU(2) × U(1) interactions act on the Q_α states, while the A_αβ preons act as the neutrons. Furthermore, there is clearly a family U(1)_F in the spectrum which counts the number of A_αβ fields. Finally, one should note that, because of the supersymmetry, each of the states in Eqs. (1) and (2) contains both fermions and bosons.
Although the number of bound states per family (15) is encouraging, these states cannot really be the ordinary quarks and leptons (minus the right-handed neutrinos). It turns out that one cannot properly incorporate the SU(3) × SU(2) × U(1) gauge interactions with only 6 Q_α preons. To do that, in fact, one has to at least triplicate the underlying gauge theory [4] from Sp(6) to Sp(6)_L × Sp(6)_R × Sp(6)_H. Each of these Sp(6) groups has again six Q_α and one A_αβ preon. To obtain the desired quarks and leptons, the Q preons are assumed to have the SU(3) × SU(2) × U(1) assignments given in Eq. (3). Because of the preon group triplication, instead of having 15 F_[i,j] bound states per family, one now has 45 such states. Per family, these states now include 16 states with the quantum numbers of the observed quarks and leptons, plus 29 exotic states which, however, sit in vector-like representations of the Standard Model group. Specifically, the quark doublet (u, d)_L is a bound state of Sp(6)_L; u^c_L and d^c_L are bound states of Sp(6)_R; while the lepton states (ν, e)_L, ν^c_L and e^c_L are bound states of Sp(6)_H. Among the exotic states one finds, as bound states of Sp(6)_H, two states with the quantum numbers of the Higgs doublets of a supersymmetric theory, H_u ~ (1, 2)_{1/2} and H_d ~ (1, 2)_{−1/2}. So, in this model, there is a natural family repetition of the Higgs states. Naively, this could cause problems with FCNC. It turns out, however, that when one calculates the dynamical superpotential of the theory [5] one can show [4] that there is a ground state where only one of the three families of Higgs states is left light. So, in fact, there are no FCNC problems.
This nice result is tempered by other troublesome features of the model which render it unrealistic, though not uninteresting. For example, to break the [U(1)_F]^3 family symmetry of the model, it is necessary to introduce by hand some heavy fields v_i (with masses µ > Λ, the dynamical scale of the preon theories) which serve to couple the preon groups together. The simplest possibility is afforded by having 3 such fields, each carrying indices spanning 2 of the preon groups and interacting through the superpotential of Eq. (4), whose a-term ties the preon theories together, while the various b-terms serve to break the family symmetries. Although Eq. (4) is introduced by hand, integrating out the effects of the heavy v_i fields gives effective Yukawa couplings of different strengths, much in the way originally suggested by Froggatt and Nielsen [6]. This is illustrated schematically in Fig. 1 for the Yukawa coupling of u^c_L with (c, s)_L via the Higgs state H_2 of the third family, which is the only one assumed to get a VEV. (In the model [4], the lightest family has the most A_αβ fields; c.f. Eq. (1).) One finds [4] that, although the various elements in the up- and down-quark mass matrices are hierarchical, unfortunately there is no resulting quark mixing since M_u ∼ M_d. This follows because the model has an unbroken global SU(2) symmetry at the preon level corresponding to the interchange of the (1, 1)_{−2/3} and (1, 1)_{1/3} assignments in Eq. (3). Furthermore, for the lepton sectors there is a dynamically generated set of Yukawa couplings [5] which are typically unsuppressed. As a result, naively, one expects m_τ ≫ m_t. Both of these results make the [Sp(6)]^3 model as presented above unrealistic. By further complicating the model, Kaplan, Lepeintre and Schmaltz [4] are able to obtain both a non-trivial CKM matrix and re-establish the top as the heaviest bound state. However, these "improved" models are not particularly attractive and represent, more than anything else, a "proof of principle". In addition, even after these problems are resolved, the models still lack mechanisms for breaking SU(2) × U(1) and supersymmetry, features which must be understood to make contact with reality.
These negative remarks should not obscure the considerable achievement of these dynamical models for understanding the origin of flavor. Families in these models arise as a result of hidden degrees of freedom in some underlying confining dynamics. Furthermore, the presence of heavy excitations in this same dynamics can result in hierarchical patterns of Yukawa couplings, once all family symmetries are explicitly broken. Unfortunately, it is difficult to see how one can obtain real evidence for these kinds of schemes, barring the discovery of some of the exotic bound states they predict: in the example discussed, the T_2 and T_3 states or the vector-like partners of the quarks and leptons.
III FAMILIES FROM SHORT-DISTANCE RANDOM DYNAMICS
A radically different scheme for the origin of families has been proposed and elaborated by Holger Nielsen and his collaborators [7]. The basic idea that Nielsen has put forth is that there exist both order and chaos at very short distances. He imagines that at scales much smaller than the inverse of the Planck mass there is actually a lattice structure of scale length a ≪ 1/M_Planck. However, both the dynamics on the lattice and the structure of the lattice are random. In particular, the lattice is amorphous, with sites at random positions. Furthermore, characteristic of the random dynamics, the interactions on each of the links are governed by different groups, with the groups varying from link to link.
Remarkably, even starting from these very general assumptions, one can arrive at some conclusions. Generally, one naively would imagine that no group could survive the random dynamics. That is, the gauge group will end up breaking down spontaneously, producing supermassive fields of mass M ∼ a^{−1} ≫ M_Planck. In fact, as Brene and Nielsen [8] showed, there are special groups G_surv on the links which survive the random dynamics, i.e., the associated vector bosons are massless. What Brene and Nielsen [8] showed is that the groups which survive must have a center which is non-trivial and connected. By taking values in the center the links are effectively gauge-invariant. However, the center cannot be simply the unit matrix, because the random nature of the dynamics would then end up averaging out the effects of all links. The connectedness of the center, finally, is necessary to insure that the Bianchi identities are satisfied. Specifically, it turns out that G_surv is a product of "prime" groups with a certain discrete group D_prime, generated from the center, removed. From the above, it appears that Nielsen's random dynamics allows the Standard Model group to survive, with a restriction:

G*_SM = [SU(3) × SU(2) × U(1)]/D_3 .

Here the discrete group D_3 = {1, h, h^2} is given by powers of the center element h. In practice, this imposes a restriction, Eq. (9), on the matter states which are placed on the random lattice sites, which fixes the hypercharge of the quarks relative to the leptons. Eq. (9) effectively imposes the familiar charge quantization, giving the quarks third-integral charges. This is a very nice result! In this scheme the origin of family replication occurs through what Bennett, Nielsen and Picek [9] call "confusion" in the random dynamic processes. This can be understood as follows. At some step in the random dynamics what survives is not simply the group G*_SM but a number N_F of copies of G*_SM, each with one family of quarks and leptons. Subsequently, this product group collapses to its diagonal subgroup [G*_SM]_diag. This collapse, through "confusion", results in N_F replicas of a Standard Model family of quarks and leptons. Thus, schematically, family generation occurs in random dynamics when N_F Standard Model surviving groups collapse:

G*_SM × G*_SM × . . . × G*_SM → [G*_SM]_diag .   (10)

Bennett, Nielsen and Picek [9] try to estimate N_F, the number of families which arise from random dynamics confusion, by making a number of assumptions. Although some of these assumptions are questionable, they are not unreasonable. First, Bennett et al. suppose that the lattice scale associated with the random dynamics is of order of the Planck scale: a = M_P^{−1}. This allows the calculation of the coupling constants of the Standard Model group [G*_SM]_diag from their low energy values via the renormalization group:

α_i^{−1}(M_P) = α_i^{−1}(M_Z) − (b_i/2π) ln(M_P/M_Z) .   (11)

Second, by identifying the gauge fields in [G*_SM]_diag with the individual fields in each of the SM groups in Eq. (10), it follows that the individual couplings of each of the groups in the "confused" configuration G*_SM × G*_SM × . . . × G*_SM are given by

(g_i^conf)^2 = N_F (g_i^diag(M_P))^2 .   (12)

A knowledge of g_i^conf then provides an estimate for N_F. What Bennett, Nielsen and Picek [9] assume is that

g_i^conf ≃ g*_i ,   (13)

with g*_i being the mean field theory critical coupling for each of the groups in the Standard Model. This assumption guarantees that in the confusion stage there is no confinement of quark and lepton states at Planck length scales, a reasonable boundary condition.
The result for N_F which follows from the three assumptions (11)-(13) is rather remarkable, given the spare theoretical framework! One finds [7] that N_F comes out close to the observed value of three. This result notwithstanding, however, it is not clear how one proceeds further in developing a consistent theoretical framework from random dynamics. For instance, it is totally unclear how through this scheme one induces the breakdown of the SU(2)×U(1) electroweak group at scales of O(100 GeV), or how one even generates the Yukawa couplings which can provide the quarks and leptons eventually with some mass.
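The flavor of this estimate can be reproduced numerically. The sketch below runs the Standard Model couplings to the Planck scale at one loop and then applies N_F ≈ (g*_i/g_i(M_P))^2. The inputs at M_Z and the one-loop beta coefficients are standard; the critical couplings g*_i, however, are placeholders inserted only to show the arithmetic and are not the values used by Bennett, Nielsen and Picek.

# Sketch: one-loop running of the SM couplings up to the Planck scale, the first
# ingredient in the N_F estimate. The g*_i values below are placeholders (assumption).
import numpy as np

M_Z, M_PLANCK = 91.19, 1.22e19                     # GeV
alpha_em_inv, sin2_thw, alpha_s = 127.9, 0.231, 0.118
alpha_inv_MZ = np.array([alpha_em_inv * (1 - sin2_thw),   # U(1)_Y
                         alpha_em_inv * sin2_thw,         # SU(2)_L
                         1.0 / alpha_s])                  # SU(3)_c
b = np.array([41 / 6, -19 / 6, -7.0])              # one-loop beta coefficients (SM normalization)

t = np.log(M_PLANCK / M_Z)
alpha_inv_MP = alpha_inv_MZ - b / (2 * np.pi) * t  # alpha_i^-1 at the Planck scale
g_MP = np.sqrt(4 * np.pi / alpha_inv_MP)

g_crit = np.array([1.0, 1.0, 1.0])                 # placeholder critical couplings
print("alpha_i^-1(M_P):", np.round(alpha_inv_MP, 1))
print("N_F estimate:", np.round((g_crit / g_MP) ** 2, 2))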
IV A GEOMETRICAL ORIGIN FOR FAMILIES
Perhaps the most interesting way to get family replications is through the compactification of extra dimensions. One starts with a theory in d > 4 dimensions but then assumes that the extra dimensions somehow compactify, leaving a 4dimensional theory. The earliest example of such a theory was the 5-dimensional Kaluza-Klein theory of gravity [10], which when compactified to 4 dimensions gave rise, in addition to gravity, also to electromagnetic interactions. More modern examples are superstring theories [11] which are known to be consistent only in d > 4 dimensions, but where again the extra dimensions can compactify leaving an effective 4-dimensional theory.
It is quite easy to understand how one can generate families in these types of theories. The general idea was first sketched out by Wetterich [12] and Witten [13] in the early 1980's. Consider chiral fermions in a d-dimensional space-time. Perhaps the best known example of family replication using these ideas is the one considered by Candelas, Horowitz, Strominger, and Witten [14] involving the Calabi-Yau compactification of the d = 10 heterotic superstring. This string theory [15] has an associated E_8 × E_8 gauge symmetry and is supersymmetric. The chiral fermions in the d = 10 theory are gauginos of one of the E_8 groups (the other E_8 acts as a hidden sector), sitting in the 248-dimensional adjoint representation. (Majorana fermions exist in d = 2 mod 8 dimensions.) Candelas et al. [14] assume that the 10-dimensional space of the theory compactifies down to d = 4 Minkowski space times a 6-dimensional Calabi-Yau space K, whose principal property for our purposes is that it possesses an SU(3) holonomy. This means that the chiral zero modes in K, those that obey the constraint equation (17), carry the quantum numbers of the 4-dimensional chiral matter; one sees that, after Calabi-Yau compactification, the 4-dimensional chiral matter involves fermions in either the 27 or the 27-bar representation of E_6. So, in general, one expects to have n_F 27's plus δ (27 + 27-bar) states in the spectrum. The numbers n_F and δ are related to topological indices characteristic of the Calabi-Yau space K on which the theory is compactified. In particular, Candelas et al. [14] showed that n_F, the number of families, is connected to the Euler number of K:

n_F = |χ(K)|/2 .   (18)

Note that, in this example, the families one obtains have the right stuff. The 27-dimensional representation of E_6, when decomposed in terms of its SO(10) subgroup, contains the 16-dimensional representation, appropriate for a family of quarks and leptons, plus a 10 and a singlet. The 10 itself, since the theory is supersymmetric, contains the two needed Higgs doublets, which in this case also come in family repetitions. In principle, the δ(27 + 27-bar) states (as well as the 10 and 1) are vector-like, and one can imagine these states getting masses of the order of the compactification scale, presumably of O(M_P). So in this example, the light states are just n_F replications of the chiral quarks and leptons! Connecting family replication to the geometry of a compact space is a beautiful idea. Furthermore, there is another advantage. Through compactification, Yukawa couplings are naturally produced, arising from the fermion-gauge field interactions in d > 4 dimensions along the gauge field components in the (d − 4) compact dimensions. Unfortunately, however, one cannot in general compute these couplings
explicitly. Nevertheless, often one can infer some useful symmetry restrictions among the Yukawa couplings in these schemes [16].
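The counting in Eq. (18) is easy to illustrate once the Hodge numbers of a Calabi-Yau threefold are known, since its Euler number is χ = 2(h^{1,1} − h^{2,1}). The quintic hypersurface in CP^4 is used below purely as a standard textbook example, not as a phenomenologically preferred compactification.

# Sketch: net number of chiral families from the Euler number of a Calabi-Yau space.
def families_from_hodge(h11: int, h21: int) -> int:
    chi = 2 * (h11 - h21)          # Euler number of the Calabi-Yau threefold
    return abs(chi) // 2           # net number of 27 (or 27-bar) families, Eq. (18)

print(families_from_hodge(1, 101))  # quintic in CP^4: chi = -200 -> 100 net families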
In my view, obtaining families from compactification is the most appealing solution to the origin of the mysterious repetitions we see in nature. It is not, however, easy to arrive at the correct theory. Basically, even believing that superstrings are the right theory, we still do not understand how to choose among the many possible compactifications available for these theories, since we have no idea of what is the underlying physics principle that drives the compactification. At the same time, we are also ignorant of how these schemes can give rise to terms which break supersymmetry and eventually SU(2) × U(1). Until such problems are solved, these ideas will just remain ideas which are appealing but untested.
V NAVIGATING THROUGH THE MASS MAZE
Even if one were to eventually understand the origin for families and their matter content, the mystery of flavor will not be solved until one is able also to decipher the physics which leads to the peculiar mass spectrum of quarks and leptons. Lacking a complete theory, most physicists have taken a very pragmatic approach to the mass generation problem. Basically, what has been assumed is that this problem is essentially decoupled from that of families and matter. Therefore, it makes sense to pursue a quite phenomenological strategy to get some insights into the problem of mass.
Following the lead set by some early work of Weinberg [17] and Fritzsch [18], the strategy has been to assume that the mass matrices for the fermions have certain "textures", imposed on them by some underlying symmetries. These textures, in turn, allow one to derive some interesting "predictions" which can then be compared with experiment. Typically, one obtains in this way certain interrelations among the quark mixing angles and the quark masses-relations which go beyond the standard model.
Perhaps the most famous "prediction" of this type of approach is a formula for the Cabibbo angle, expressed as a function of quark mass ratios [Eq. (20)], which follows from quark mass matrices [Eq. (21)] that display a texture zero in the 11 matrix element. Given that Eq. (20) is rather successful phenomenologically, the natural question to ask in this context is the underlying reason for the appearance of the texture zero in Eq. (21).
The appearance of texture zeros, or of other interrelations between the elements in the quark and lepton mass matrices, is generally assumed to arise at some high scale M where new physics connected with mass generation comes into play. In models where the breakdown of SU(2) × U(1) is dynamical, as in Technicolor [19], the scale M is generally assumed to be not too far from the TeV scale. However, in general, one has to be careful with FCNC induced through the process of mass generation [20], and one must appeal to dynamical properties of the underlying theory [21] to avoid contradiction with experiment. The resulting theories are quite complicated [22] and, as a result, many physicists think it more likely that the scale M connected with mass generation is of order the Planck or GUT scale. In what follows, I shall concentrate only on this latter possibility and discuss two different, but complementary, mechanisms which can provide mass matrices with interesting textures: family symmetries and GUTs.
I will illustrate the first of these possibilities by briefly discussing a model introduced by Ibañez and Ross [23], which makes use of a U(1)_F family symmetry. 5 In the Ibañez-Ross model, the quarks and antiquarks of each generation carry opposite U(1)_F charges, while the two Higgs bosons H_1 and H_2 of the model carry twice the U(1)_F charge of the third generation. As a result of this symmetry, the quark mass matrices have only a non-zero 33 element, which provides a reasonable starting point for model building.
To proceed, Ibañez and Ross [23] need to introduce both a way to break the U(1)_F symmetry and some interactions which will physically serve to generate the intergenerational mass splittings. They accomplish the second point by imagining that at some high scale M some SU(2) × U(1) singlet fields θ and θ̄, with U(1)_F charges of +1 and −1, respectively, acquire effective interactions with the quark and Higgs fields. How these effective interactions come about need not be specified, but the U(1)_F symmetry will fix their form. 5) In the literature, there are many models which use a U(1) symmetry as a family symmetry [24], starting from the original paper on flavor textures by Froggatt and Nielsen [6]. Ibañez and Ross essentially make use of the Froggatt-Nielsen [6] mechanism we illustrated earlier with the [Sp(6)]^3 preon model. For example, there will be a U(1)_F preserving effective interaction among the u and t quarks and the two scalar fields H_2 and θ, of the form given in Eq. (24). Ibañez and Ross [23], in addition, assume that the U(1)_F family symmetry is itself spontaneously broken at high scales by VEVs of the θ and θ̄ fields. Once θ acquires a VEV, a term like (24) becomes an effective Yukawa interaction, which can give rise to small corrections to the mass matrix (23) if ǫ = ⟨θ⟩/M ≪ 1. Of course, within this approach, one is not able to predict these corrections precisely. Nevertheless, if one assumes that the proportionality constants in the Froggatt-Nielsen terms are of O(1), the magnitude of the different matrix elements will be governed by powers of ǫ, reflecting the original U(1)_F symmetry. In the case of the Ibañez and Ross model, for example, the modified up-quark mass matrix under these assumptions takes the form of Eq. (25). 6 Not only is this mass matrix hierarchical if ǫ ≪ 1, but there are interesting interrelations among the matrix elements. In general, the detailed comparison of a mass matrix like that in Eq. (25), which holds at a large scale M, with experiment is complicated by the evolution of the Yukawa couplings with energy. This evolution, for example, can change zeros in a mass matrix at a given scale into small, but non-vanishing, contributions at a different scale. This is easy to understand since, for example, a non-vanishing Yukawa coupling to quarks of the second generation can induce at one loop an effective Yukawa coupling to quarks of the first generation, provided there is also a non-vanishing Yukawa coupling between the first two generations.
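To make the power counting described above concrete, a minimal sketch in Python is given below. It builds a 3 × 3 matrix whose entries scale as powers of ǫ set by hypothetical U(1)_F charges; the charge values and the O(1) coefficients (all set to 1 here) are illustrative assumptions, not the actual Ibañez-Ross assignment.

# Schematic Froggatt-Nielsen power counting: each Yukawa entry is suppressed
# by eps**|q_i + q_j + q_H|, with every unknown O(1) coefficient set to 1.
# The charges below are hypothetical, chosen only for illustration.
eps = 0.2                     # eps = <theta>/M, assumed small
q_family = [3, 2, 0]          # hypothetical family charges; third generation uncharged
q_higgs = 0                   # hypothetical Higgs charge
matrix = [[eps ** abs(q_family[i] + q_family[j] + q_higgs) for j in range(3)]
          for i in range(3)]
for row in matrix:
    print(["%.1e" % entry for entry in row])
# Only the 33 entry is O(1); the remaining entries carry increasing powers of eps,
# reproducing the kind of hierarchical texture discussed in the text.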
The effect of the renormalization group evolution of the couplings is to further obscure possible mass matrix patterns. For instance, the mass matrix M_u of Eq. (25) is connected to the Yukawa couplings Γ_u via the familiar equation M_u = (Γ_u/√2)⟨H_2⟩. 6) One can take α_1 + α_2 + α_3 = 0 without loss of generality. However, Γ_u at one scale is different from Γ_u at another scale, with the evolution between scales governed by the renormalization group. At one loop, this evolution is given by Eq. (27), in which the coefficients a, b, c and G_u^2 depend on the assumptions one makes on the matter content between the scale M and the scale µ. As a result, patterns set by physics at a high scale M ∼ O(M_Planck) are smudged at lower energies µ. In particular, the measured mass matrix at µ ∼ M_Z is influenced by the assumptions one makes on the matter content in the region between µ and M, a region for which one has no information! Nevertheless, once one fixes this matter content through some model assumptions, one can either validate or exclude specific mass matrix models by comparing their renormalization-group-evolved predictions with precision electroweak data.
Rather than illustrating this for the model of Ibañez and Ross sketched above, I prefer instead to examine the predictions of a specific class of SUSY GUT models, based on SO(10), studied by Anderson, Dimopoulos, Hall, Raby and Starkman [25]. These models illustrate a second way in which theoretical input at a high scale may help fix the patterns of the quark and lepton masses and mixing we observe. These SUSY SO(10) models have texture zeros and matrix element interrelations set by SO(10), with hierarchies among these elements produced by the insertion of higher dimensional operators, much along the lines of what transpired in the Ibañez and Ross model. The best model of [25] with three texture zeros has the Yukawa couplings of Eq. (28), whose parameters A, B, C and E obey a definite hierarchy. Although these and other results provide an acceptable fit to the low-energy data, a recent analysis by Blazek, Carena, Raby and Wagner [27] has shown that these models do not fit the data as well as models with just one texture zero, like the Lucas-Raby model [28].
The Lucas-Raby model adds two non-zero matrix elements to the Yukawa coupling of Eq. (28), with strength D ∼ O(C); the explicit form is given in Eq. (31). Table 1 displays a comparison of the fits provided by these two models to all the extant low-energy data. As can be seen, perhaps not surprisingly, the data clearly favor the Lucas-Raby model, with all predictions within one standard deviation of the data. In contrast, the best of the Anderson et al. [25] models has four observables about 3σ away from the data. I should remark that, besides fitting the extant data, these models are predictive. Once all the parameters (A-E and the phases) in Eqs. (28) and (31) are fixed from the global fit, one can extract further information from these ansätze, both on the CKM matrix as well as on the spectrum of Higgs and supersymmetric states. For example, the Lucas-Raby model predicts the following values for the parameters associated with the CKM unitarity triangle, so important for studies of CP violation in the B system [28]: sin 2α = 0.96; sin 2β = 0.52; sin γ = 0.93; ρ = −0.125; η = 0.32 (32). The angles α, β and γ will soon be measured, and Eq. (32) will be tested experimentally. At the same time, the Lucas-Raby model also predicts that the lightest Higgs bosons are a pair of CP-odd and CP-even states, nearly degenerate with each other, with mass around 74 GeV, a prediction which should already be testable by LEP 200.
VI LESSONS FROM THE TOP MASS
Because m_t ≫ m_i for all the other fermion masses m_i, many of the important features of the Yukawa matrices are crucially dependent on how the top coupling behaves. Theoretically, rather than considering the physical mass for the top measured by the CDF and D0 Collaborations [29], M_t = (175 ± 6) GeV, it is more useful to consider the running mass m_t(m_t). 7 The running mass is directly related to the diagonal Yukawa coupling of the top, λ_t(m_t). This coupling, keeping only the dominant 3rd-generation couplings in Eq. (27), obeys the RG equation (35) for dλ_t(µ)/d ln µ [30]. The coefficients a_i, c_i, as we discussed earlier, depend on the matter content of the theory. For instance, for the SM one has a_t = 9/2; a_b = 3/2; a_τ = 1; c_i = (17/20, …). 7) We use the same convention as Table 1. Knowing the value for λ_t(m_t), one can compute the value for λ_t(µ) at any scale µ by using the RG equation (35). Because a_t > 0, for µ large enough eventually λ_t(µ) → ∞. The location of this so-called Landau pole [31] is theory-dependent. As we shall see, µ_Landau is an uninteresting scale in the SM, but for the MSSM µ_Landau ∼ M_Planck, a result which may be quite significant. At any rate, it is worthwhile to explore these differences in a bit more detail [32].
For the Standard Model, because one has only one Higgs boson, one has H_2 ≡ H, and the solution for λ_t^2(µ) is easily found. Here η(µ) and I(µ) are functions determined by the running of the gauge coupling constants, with b_i ≡ (41/10, −19/6, −7). From Eq. (39) one sees that the Landau pole for the SM occurs at the value of µ for which I(µ) reaches the corresponding critical value. The situation is quite different if there is supersymmetry. Because there are now two Higgs doublets, λ_t(m_t) is not fixed by m_t(m_t) but depends also on the ratio of the two Higgs VEVs, tan β; one has ⟨H_2⟩ ∝ sin β. There are two interesting regions for tan β. The first of these has tan β ∼ O(1), in which case one can again neglect λ_b with respect to λ_t in Eq. (35). In the second region, tan β is large and λ_b is comparable to λ_t. The form of the solution for λ_t^2(µ) in these two cases is again given by Eq. (39). However, here the running of the gauge couplings which enter in η(µ) and I(µ) is governed by a different set of coefficients b_i and c_i, appropriate to having supersymmetric matter above the weak scale. In particular, b_i = (33/5, 1, −3). Given Eqs. (44) and (45), it is clear that the Landau pole will occur at the same place in both cases, provided that in the first case sin²β = 6/7 (tan β ≃ 2.45). In what follows, therefore, we concentrate only on the second case, corresponding to Yukawa unification of couplings.
The Landau pole in this case occurs where I(µ) reaches the corresponding critical value. Using the MSSM coupling evolution, it is easy to check that such values for I obtain when µ_Landau ∼ O(M_Planck). Indeed, the error on λ_t is large enough not to permit a more accurate determination. In fact, it is much more sensible to turn the argument around. If there is supersymmetry and λ_t becomes strong around M_Planck, then at low energy λ_t will be driven to an infrared fixed point at λ_t(m_t) ≃ 1. 8 This is illustrated by Fig. 2, calculated using the two-loop MSSM RGE equations for tan β = 2, where the "focusing" effect at low scales of couplings which are strong around µ = M_Planck is clearly demonstrated.
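The infrared "focusing" described here can be illustrated with a toy numerical integration. The sketch below keeps only the λ_t³ term of a one-loop RG equation, using the a_t = 9/2 coefficient quoted above for the SM and omitting gauge-coupling contributions and thresholds entirely, so it reproduces the qualitative convergence of Fig. 2 rather than the actual fixed-point value λ_t(m_t) ≃ 1.

import math

def run_down(lam_high, a_t=4.5, mu_high=1.2e19, mu_low=175.0, steps=20000):
    # Toy one-loop running d(lambda_t)/d(ln mu) = a_t * lambda_t**3 / (16 pi^2),
    # integrated downward in ln(mu); gauge terms are deliberately omitted.
    lam = lam_high
    dlnmu = (math.log(mu_low) - math.log(mu_high)) / steps   # negative step
    for _ in range(steps):
        lam += dlnmu * a_t * lam ** 3 / (16 * math.pi ** 2)
    return lam

for lam0 in (2.0, 3.0, 5.0):
    print(lam0, "->", round(run_down(lam0), 3))
# Widely different couplings at the high scale are "focused" toward nearly the same
# low-energy value, which is the qualitative behavior displayed in Fig. 2.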
The results displayed in Fig. 2 are perfectly consistent with having the ratio m_b/m_τ = 1, as SO(10) unification suggests, at scales of O(M_Planck). An analysis similar to the one we did for Eq. (38) relates this ratio at the scale m_t to that at the unification scale M_X ∼ M_Planck. Here η̂(µ) is a quantity similar to η(µ), detailing the running of the coupling constants in the quark-to-lepton mass ratio, with coefficients ĉ_i = (−4/3, 0, 16) for the MSSM. The resulting running factor of ≃ 1.68 is above the experimental ratio m_b(m_t)/m_τ(m_t) = 1.58 ± 0.08, suggesting that λ_t(M_X) ≫ λ_t(m_t). That is, the top coupling is stronger at the GUT scale than at low energy, much as indicated in Fig. 2.
The upshot of this discussion is that the assumption that supersymmetric matter exists above the weak scale gives a consistent picture, with a large top Yukawa coupling at the Planck scale being driven by an infrared fixed point to a value λ t (m t ) ∼ 1. This behavior obtains in two regimes of tan β. Either tan β ∼ O(1) and λ t is the dominant coupling. Or λ b ∼ λ t and tan β is large. The second possibility is natural in the SO(10) models discussed earlier where all quarks and leptons of one family are in the 16-dimensional representation. Furthermore, at least intuitively, having a large Yukawa coupling at the Planck scale fits in well with the ideas that families are generated either dynamically or through geometry in supersymmetric theories.
VII CONCLUDING REMARKS
In my opinion, one probably will not be able to unravel the mystery of flavor without some new experimental information. In particular, I believe that ascertaining whether or not low energy supersymmetry exists will have a profound impact on this question. The discovery of low energy supersymmetry would, of course, provide a tremendous boost for superstring theories. At the same time, it would also sound the death knell of the random dynamics ideas of Nielsen. These ideas, if one is to believe in them, require that there should be a real desert up to M_Planck, with no physics beyond the Standard Model between the weak and the Planck scales.
If supersymmetry is found, perhaps it is sensible to imagine that some of the ideas discussed in the previous section are true. That is, that there is indeed a large Yukawa coupling of the top at energies of O(M_Planck), which results in the mass of the top being determined essentially by the infrared fixed point of the Yukawa evolution equations. Furthermore, it is easy to imagine then that the quark and lepton mass spectrum is the result of a combination of a broken family symmetry, which sets up the hierarchy among the masses, and of a GUT, which interrelates the quark and lepton mass tapestries.
Even in this very favorable circumstance, however, it will be difficult to get real evidence for the origin of flavor. Is it due to dynamics or to some primordial compactification? Perhaps the tell-tale sign will emerge from the discovery of some exotic states, besides the quarks and leptons and their superpartners. In fact, the most characteristic signal of models for flavor is the inevitable presence of exotic states. Recall the exotic T_2 and T_3 states in the Sp(6) model, or the extra 10 ⊕ 1 states in the 27 produced through a Calabi-Yau compactification. In this respect, I should note that certain exotic states seem to be quite generic. In particular, the presence of extra (3, 1)_{−1/3} states is very natural.
On a more pedestrian level, our understanding of flavor and mass will be aided by a continuous experimental (and theoretical) refinement of the values for the quark and lepton masses and mixing parameters. Precise values for these parameters are crucial if one wants to sort out alternative tapestries, signalling different origins for flavor. Eventually, it is going to be important to know that V_cb = 0.038 rather than 0.040! | 2014-10-01T00:00:00.000Z | 1997-12-18T00:00:00.000 | {
"year": 1997,
"sha1": "28a24dc4361773604cce855924e88e0cb4e87394",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/9712422",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1fc4f0d9160f2c0f45d8e76c134caa22dd4ff482",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
59348265 | pes2o/s2orc | v3-fos-license | Iranian Version of Cancer-Related Fatigue Questionnaire: Construction and Validation
Background: Patients with cancer experience various physical and psychological complications during treatment. Fatigue is a common and often disabling medical symptom in patients with cancer. Objectives: It is necessary to have a reliable and valid tool to examine cancer-related fatigue in adults with cancer. Methods: This descriptive study was conducted on 150 patients with cancer who had been referred to Shahid Sadoughi Hospital (Yazd/Iran). Data were collected by a researcher-made questionnaire that was designed for fatigue assessment. The reliability was determined using the Cronbach's alpha and test-retest method. Factor analysis was used in SPSS/21 software to verify construct validity. Results: Face validity and content validity were confirmed through an expert panel. According to experts' suggestions, unnecessary items were removed and required changes were made in the questionnaire. According to the results of factor analysis, this questionnaire has three categories including: Daily activities and general problems (ten questions), sleep problems (nine questions), and mental states and emotions (five questions). Cronbach's alpha was more than 0.8 for all dimensions and was 0.93 for the entire scale. The intra-class correlation coefficient (ICC) was in the range of 0.84 to 0.92; also, ICC was 0.92 for the total questionnaire and was close to one for all dimensions of the questionnaire. In addition, the total mean fatigue was 53.44 ± 16.61, considering the total score of 100. There was a significant difference between total mean fatigue and gender, job, economic status, and type of cancer. Conclusions: This study shows that the cancer-related fatigue questionnaire can be used as a tool with validity and reliability at all research levels.
Background
Cancer is a serious health problem in developed and under-developed countries. Cancer incidence rate is increasing in children and adults in the global population (1). Cancer is the third cause of mortality in Iran after coronary heart disease and accidents (2).
Adult and pediatric cancer patients experience physical and psychological effects during treatment. Fatigue is a common and serious symptom for cancer patients. The effect of fatigue on the quality of life of cancer patients may be negative (3) and may influence the patient's ability to complete treatment (4). Studies show that 90% of patients with cancer experience fatigue during the treatment process, and more than 50% of patients report fatigue even after the end of treatment (5). The study of fatigue in cancer patients is a global issue for researchers. Cancer-related fatigue is an important issue that increases stress and anxiety in patients and caregivers and is a common symptom in many types of cancers, which is often overlooked and not treated (6). Fatigue in patients with cancer is more severe and persistent than in healthy people and does not improve with sleep and rest. Cancer-related fatigue affects the ability and function of the patient in daily activities, delays the patient's treatment, and in some cases, leads to a reduction in the survival of the individual (7). Fatigue is a multidimensional concept with several definitions: Physical dimension (lack of energy and need for rest), cognitive dimension (deficits in concentration and attention), and emotional dimension (lack of motivation or interest) (8).
Cancer-related fatigue (CRF) has been defined as a "persistent, subjective sense of physical, emotional, and/or cognitive tiredness or exhaustion related to cancer or cancer treatment that is not proportional to recent activity and interferes with usual functioning" (9). The cause of CRF is not yet well understood. A better understanding of possible factors affecting fatigue in cancer patients may lead to the development of a structured intervention for patients. Carnitine deficiency in patients with cancer may promote the risk of chronic fatigue, which may be caused by a reduction in the use of long-chain fatty acids in energy metabolism (10), enhanced metabolic requirements, and treatment with medications that disrupt the metabolism of carnitine (3).
Considering that fatigue is a multidimensional problem (with sensory, physiological, affective, and behavioral manifestations) (11), healthcare staff must have a complete understanding of fatigue to provide better services to reduce fatigue in patients with cancer in clinical settings. Therefore, it is necessary to have a reliable and valid tool to examine all aspects of cancer-related fatigue in adults with cancer. The best strategy for managing fatigue is identifying signs of fatigue. Therefore, before any treatment intervention, signs related to fatigue should be identified. There is a wide range of factors associated with cancer-related fatigue, such as socioeconomic factors (gender, level of education, occupation, etc.) and clinical features of the disease (stage of disease, type of cancer, treatment method, etc.).
Objectives
The aim of this study was to construct a reliable and valid questionnaire to examine cancer-related fatigue in adults with cancer.
Study Setting and Sample
This was a descriptive cross-sectional study performed in Yazd. The participants were patients with cancer who had been referred to Shahid Sadoughi Hospital (Yazd/Iran) for the treatment of cancer. The sample size was calculated as 141 (based on CI = 95%, SD = 4.25 taken from a similar study (12), and an error estimation of the mean of 0.7). The sample size was increased to 150 patients due to the possibility of missing cases. Sampling was conducted based on the convenience sampling (availability sampling) method.
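Although the paper does not spell the formula out, the reported numbers are consistent with the usual sample-size expression for estimating a mean, n = (z × σ / d)²; a quick check (with the values quoted above) follows.

z = 1.96        # two-sided 95% confidence
sigma = 4.25    # SD taken from the similar study cited above
d = 0.7         # allowed error in estimating the mean
n = (z * sigma / d) ** 2
print(round(n, 1))   # about 141.6, matching the reported n = 141 before enrolling 150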
Inclusion criteria in this study included: (1) Confirmation of cancer in patients by the doctor, (2) beginning of the process of treatment, (3) age range over 18 years, and (4) consent to participate in this study. Patients who failed to answer all the questions were excluded from this study. Researchers completed the questionnaires by a face-to-face interview with patients at Shahid Sadoughi Hospital.
Instrumentation
Data were collected by a researcher-made questionnaire. In this study, a questionnaire was designed to examine cancer-related fatigue. The questionnaire consisted of 24 items that were grouped into three categories: Daily activities and general problems (ten questions), sleep problems (nine questions), and mental states and emotions (five questions). A four-point Likert-type range was used for scaling (0 = never, 1 = sometimes, 2 = usually, and 3 = always).
The stages of the questionnaire design were as follows: (1) Study on definitions of cancer-related fatigue, (2) reviewing a number of questionnaires about fatigue, such as multidimensional fatigue inventory (MFI) and fatigue severity scale (FSS), (3) gathering the factors affecting cancer-related fatigue, (4) gathering a set of questions about cancer-related fatigue, (5) random distribution of questions in the questionnaire and grading scale, (6) assessing the face and content validity of the questionnaire, (7) collecting data by a face-to-face interview, (8) assessing the construct validity of the questionnaire, and (9) measuring the reliability of the questionnaire.
Statistical Analysis
SPSS version 21.0 was used for statistical analyses. The statistical analysis included descriptive statistics, t-test, analysis of variance (ANOVA), and Pearson's correlation. The researchers used face validity (according to experts' suggestions), content validity (calculating the content validity ratio (CVR) and content validity index (CVI)), and construct validity (using factor analysis) to determine the validity of the questionnaire. The reliability of the questionnaire was determined using the Cronbach alpha coefficient and the test-retest method.
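As an aside, a minimal sketch of how Cronbach's alpha can be computed from an item-score matrix is given below; the toy data and variable names are illustrative and are not the study data.

import numpy as np

def cronbach_alpha(items):
    # items: 2-D array, rows = respondents, columns = questionnaire items
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# toy example: 5 respondents answering 4 items on the 0-3 Likert scale
scores = [[0, 1, 1, 0], [2, 2, 3, 2], [1, 1, 2, 1], [3, 3, 3, 2], [2, 1, 2, 2]]
print(round(cronbach_alpha(scores), 2))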
Ethical Considerations
This article was the result of a research project num-
Assessment of Validity
Face validity was examined through suggestions of the expert panel. The panel of experts comprised individuals with expertise related to this study. Each section of the questionnaire was allocated to a relevant specialist, and they were asked to offer their suggestions for the clarity and content of the questions. Also, they were asked to give a score to each question (1 = accept, 0 = reject). According to the experts' suggestions for the questions, unnecessary items were removed and required changes were made to the questionnaire. In addition, the clarity, relevance, and simplicity of the questions were verified by the expert panel.
Content validity was assessed by calculating the CVR and CVI. For this purpose, the questionnaire was provided to 10 specialists, and they were asked to choose one of three options for each question, including "necessary", "useful but not necessary", and "not necessary"; then the CVR was calculated for each question. All specialists chose the "necessary" option for all the questions. Therefore, according to the number of specialists and Lawshe's formula, the CVR for all questions was obtained as one.
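For reference, Lawshe's content validity ratio is CVR = (n_e − N/2)/(N/2), where n_e is the number of panelists rating an item "necessary" out of N panelists; with all 10 experts choosing "necessary", CVR = 1 for every question, as reported.

def content_validity_ratio(n_essential, n_panelists):
    # Lawshe's CVR
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

print(content_validity_ratio(10, 10))   # 1.0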
Then, CVI was calculated for all questions. The results showed that CVI for the entire questionnaire was 0.89. The results showed that face validity and content validity of the questionnaire was proper. In this phase, 27 questions were entered in the questionnaire. Finally, a four-point Likert-type scaling was used for scoring procedures (never = 0, sometimes = 1, usually = 2, always = 3). The questionnaire ranged from 0 to 100. In this study, the factor analysis method was used to test the construct validity of the questionnaire. The results of the factor analysis, dimensions of the questionnaire and questions of each dimension are shown in Table 1. After performing factor analysis, three questions were removed from the questionnaire. The final questionnaire had 24 questions. According to the results of factor analysis using principal components analysis, this questionnaire had three categories including: Daily activities and general problems (ten questions), sleep problems (nine questions), mental states, and emotions (five questions).
Assessment of Reliability
Researchers used two methods to assess the reliability of this questionnaire: (1) the Cronbach's alpha coefficient method and (2) test-retest. According to Table 2, Cronbach's alpha coefficients were more than 0.8 for all dimensions of the questionnaire. The alpha coefficient was 0.93 for the entire scale. The highest coefficient was for the dimension of daily activities and general problems. The dimension of sleep problems had the lowest coefficient.
The second method to test the reliability of the questionnaire was test-retest. For this purpose, the questionnaire was initially completed by 40 patients. After four weeks, the questionnaire was completed again by the same patients. According to the results, the ICC was in the range of 0.84 to 0.92, and the ICC was 0.92 for the total questionnaire. The ICC was close to one for all dimensions of the questionnaire, which indicated that reproducibility was high for all dimensions of the questionnaire (Table 2).
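The paper does not state which ICC model was used. For illustration only, the sketch below computes one common choice, the two-way single-measure consistency ICC, from hypothetical test-retest totals (not the study data).

import numpy as np

def icc_consistency(data):
    # data: rows = subjects, columns = repeated administrations (test, retest).
    # Two-way, single-measure, consistency ICC:
    # (MS_subjects - MS_error) / (MS_subjects + (k - 1) * MS_error)
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

print(round(icc_consistency([[55, 58], [40, 43], [70, 66], [62, 60], [48, 50]]), 2))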
Association Between Fatigue and Clinical and Demographic Information
In this study, the average age of participants was 55.03 ± 9.38 years and the average duration of the cancer was 12.45 ± 6.94 months. The results showed that the total mean of fatigue was 53.44 ± 16.61 of the total score of 100.
Furthermore, the means of each dimension of the questionnaire were as follows: The mean of daily activities and general problems was 24.93 ± 9.1 from the total score of 44, the mean of sleep problems was 16.79 ± 6.9 from the total score of 36, and the mean of mental states and emotions was 11.53 ± 3.2 from the total score of 20. The results showed that 8.82% of patients had mild fatigue, 63.9% had moderate fatigue, and 27.2% had severe fatigue. There were significant differences between the total mean of fatigue and gender (P = 0.015), job (P = 0.01), and economic status (P < 0.001).
Discussion
In this study, researchers designed a comprehensive questionnaire for investigating fatigue in cancer patients. Face validity of the questionnaire was assessed by professionals. Based on their suggestions about the clarity and content of the questions, the necessary changes in the questionnaire were made. The factor analysis method is one of the most common ways of measuring construct validity. Based on the factor analysis method, construct validity was confirmed and the dimensions of the questionnaire were extracted. Factor analysis, using principal components analysis, indicated that the cancer-related fatigue questionnaire is a multidimensional instrument. Dimensions of the questionnaire included: Daily activities and general problems, sleep problems, and mental states and emotions. Other questionnaires that were previously used to evaluate fatigue were also multi-dimensional tools. The questionnaire designed in Bektas and Kudubes' study had four dimensions, including: General problems, sleep problems, treatment problems, and cognitive problems (13). Chronic fatigue syndrome (CFS) has three dimensions, including physical, emotional, and cognitive dimensions (14). The multidimensional fatigue inventory (MFI) also has five dimensions, including general fatigue, physical fatigue, mental fatigue, reduced motivation, and reduced activity (15). Results of the present study about the multidimensional nature of the questionnaire were consistent with other studies (14,16,17). This shows that fatigue is not a one-factor problem, and its complexity requires a multi-factor questionnaire.
Determining the reliability of a questionnaire is one of the most common steps in designing a questionnaire. Although reliability is not a sufficient condition, it is a necessary one. In medical science, researchers use the Cronbach's alpha coefficient for measuring the reliability of scales. If Cronbach's alpha is close to one, the internal consistency between questions is greater and the questions have high homogeneity (18). In this study, Cronbach's alpha was close to one for all dimensions of the questionnaire. Cronbach's alpha for this questionnaire was in the excellent range (α ≥ 0.9). This indicates that the reliability of the questionnaire was acceptable.
Another method for assessing the reliability of the questionnaire in this study was test-retest. One of the most common methods for ensuring the stability of an instrument over time is the test-retest correlation. In other words, in this way, the same questionnaire is completed by the same people at different times. Then, the correlation between the results is examined and the consistency of the questionnaire is evaluated using the intra-class correlation coefficient (ICC) (18). The results showed that the ICC obtained in this study was close to one for all dimensions. This showed that the questionnaire has high repeatability. This result is similar to a study conducted in the field of fatigue in Turkey (1). The results showed that fatigue was moderate in more than half of the patients. These results are consistent with the findings of some other studies (8,19). In the study of Safaee et al. (20), 78% of patients with cancer experienced fatigue, and in the study of Aston et al. (21), 68% of patients had different degrees of signs and symptoms of fatigue. The continuation of the disease, the application of different treatment methods, and even the side effects of anticancer drugs may reduce the body's physical capacity and increase fatigue with the onset of treatment in the patient.
Conclusions
This study showed that the cancer-related fatigue questionnaire has acceptable validity and reliability. Based on the results of this study, other researchers can use this valid and reliable questionnaire to investigate fatigue in patients with cancer aged 18 years and older. Also, considering that fatigue was an important and widespread problem in patients with cancer in this study, training in fatigue reduction techniques by treatment team members is one of the effective measures to cope with fatigue in patients and their family members. | 2019-01-30T14:07:53.582Z | 2019-01-06T00:00:00.000 | {
"year": 2019,
"sha1": "ce9fa5d2a5f41b8e9544e24788b9c5a63cbe10b9",
"oa_license": "CCBYNC",
"oa_url": "https://zjrms.kowsarpub.com/cdn/dl/1cc535ba-16f0-11e9-aa7d-8b9f7f5ab363",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "81df81111a0c65ff8e80e43cec814603f19e6dfb",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
67853963 | pes2o/s2orc | v3-fos-license | A Framework for Remote Monitoring System
Remote monitoring systems have become an important facility to support observation activities for various natural disasters. In many natural disaster incidents, such as volcano eruptions, the available monitoring systems installed close to the disaster area were damaged due to the extreme conditions raised by the event. The temperature of a disaster site can suddenly increase to hundreds of degrees Celsius, and equipment may be submerged in flood water or even enveloped in hot toxic gas. Therefore, it is important to have an observation facility that is installed far away from the disaster area. This research is an exploratory study to develop a framework for a remote monitoring system. It includes hardware requirements and algorithm definitions that cover the system lenses and a set of image processing algorithms. The framework delivers a promising preliminary result towards the effort to develop a remote monitoring system.
Introduction
The last decade has witnessed natural disasters hitting Indonesia continuously. Floods, landslides, hurricanes, earthquakes, volcanic eruptions, forest fires, and even tsunami waves came and went. Various sources noted that the total losses caused by natural disasters have reached the value of trillions of rupiah and thousands of deaths. In addition to being influenced by natural factors, the damage was also influenced by the unpreparedness in anticipating and managing natural disasters. It is closely related to the lack of early detection and observation facilities that have been installed and operated. Many cases demonstrate that the absence of early detection and observation facilities led to substantial material losses and claimed many casualties, like the tsunami in Aceh in 2004 and in Cilacap in 2006.
Where observation facilities were available, some cases even show damage to the equipment installed in the disaster area, such as during the eruption of Mount Kelud in 2007. Damage to the equipment is primarily due to the extreme conditions of the disaster area, such as the emergence of hot natural gas as well as the high surface temperature of the earth. Therefore, an early detection and observation facility that is installed at a considerable distance from the disaster site, yet delivers a good observation result, becomes indispensable.
Meanwhile, the need for remote monitoring facilities is increasing nowadays. This is triggered by the magnitude of the benefits that can be generated by such a facility, primarily as a tool for observing catastrophic incidents or natural phenomena. This system could be used to reveal a variety of natural phenomena that need timely and long-range observation [1,2]. Such natural phenomena are often found in Indonesia; instances are the migration of groups of birds crossing the oceans, the development of a variety of wildlife, or even the occurrence of various natural disasters which have claimed many casualties, as described above.
Motivation
The Indonesian archipelago is located at the edges of three of the earth's plates, namely the Pacific, Eurasian, and Australian plates. This country also has about 60 volcanoes that are still active, 17,000 islands, vast forests, and is inhabited by diverse flora and fauna. The geographic conditions give the region of Indonesia a rich collection of flora and fauna and beautiful scenery, but make it susceptible to natural disasters such as volcanic eruptions, forest fires, hurricanes, tsunamis, earthquakes, and others. Therefore, in relation to the exploration of natural resources and the efforts to minimize the various losses which can be caused by these natural disasters, a facility that can be used to monitor a variety of natural phenomena in Indonesia is urgently required. The system must be able to observe terrestrial objects over long distances, considering that many natural phenomena cannot be monitored from close range.
Nowadays, various attempts have been made by researchers to build digital monitoring systems, such as those afforded by Yao et al. [3,4] and Shirvaikar [8]. However, the existing monitoring systems are capable of monitoring terrestrial objects only on the order of hundreds of meters away, with blurred output quality and a narrow view angle. When compared with the distance required to monitor natural disasters such as volcano eruptions, the existing condition is still far below expectation. Although some systems have been built to reach objects at longer distances in the field of astronomy, such as the Hubble telescope and several large observatories [9], such systems are not intended to observe terrestrial objects in detail. Therefore, research activities to establish a remote monitoring system with the capability to reach distant terrestrial objects with good detail quality are indispensable.
Related Work
A remote monitoring system is an application of imaging that aims to retrieve data from a distant event in the form of pictures or visual data. Some factors that affect the performance of these systems are widely known by researchers, such as the focus, exposure, and enlargement of images [10,11]. When monitoring an object at close range, focus, lighting, and image magnification can be handled with ease, so producing good quality output for objects of interest located close to the monitoring facility is trivial. But this is not so for distant objects, since objects located at a distance appear small when viewed from the monitoring point. Besides, the light intensity received from a distant object is also relatively small compared to that of closely located objects [5]. This leads to difficulties in managing focus, exposure, and image magnification for distant objects. Therefore, it is not surprising that many efforts to build a remote digital monitoring system face problems in recording visual data [4,8,11].
Various additional methods have also been proposed by researchers to solve the above problems, such as digital enlargement [6,7], calibration of image recording parameters [8,11], and image restoration [4]. Nevertheless, the results of these methods still contain many weaknesses, such as reduced sharpness of the output, increased noise in the visual data, and, last but not least, increased complexity of the system. Meanwhile, Lintu and Magnor [5] have a unique approach towards solving this problem. The unfavorable monitoring results for distant objects are replaced by a set of images that are stored and already available in a database. This method seems to be a breakthrough in the field of remote monitoring systems, especially when the approach is associated with learning about distant objects, for instance in the field of astronomy to study the structure of galaxies. However, this approach is not appropriate for terrestrial objects due to the limitation of the system being unable to display the output in real time, particularly when the object to be monitored is constantly changing over time. A few examples of these events are the activities to monitor natural events such as the migration of wildlife or the happening of natural disasters.
Framework Development
Development of a framework for remote monitoring system has been conducted by considering the following aspects:
Data Acquisition
The parameters used to acquire visual data from real objects are the focus, magnification (zoom), lighting, and sensor size. The focus is used to collect all the light from a real object to the observation point. Focus is governed by a system of lenses. Enlargement is used to increase the size of captured images. Enlargement is determined by the longest focal distance divided by the shortest focal distance that can be reached by the lens system; the greater the result of this division, the stronger the magnification that can be achieved. The lighting unit is used to ensure that captured objects emit enough light to be shaped into a digital image. In practice, lighting is regulated by two mechanisms, namely the aperture size and the shutter speed. The aperture is the size of the hole that passes light to the lens system: the larger the hole, the more light is passed to the set of light sensors to shape the image. Meanwhile, the shutter speed is the speed of opening and closing the curtain over the aperture. The faster the shutter is operated, the dimmer the digital image, because less light passes through the lens system. The sensor is a device used to capture and convert light into digital data in the form of images. The size of the sensor affects the smoothness of the image: a larger sensor produces a finer and higher-resolution image.
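A small sketch of how the quantities named above combine is shown below; the numbers are placeholders, not measurements from the proposed system.

# Optical zoom ratio and relative exposure from the acquisition parameters above.
focal_longest, focal_shortest = 300.0, 25.0          # focal lengths in mm (placeholder)
zoom_ratio = focal_longest / focal_shortest          # 12x optical magnification
f_number, shutter_time = 5.6, 1 / 250.0              # aperture and shutter speed (placeholder)
relative_exposure = shutter_time / (f_number ** 2)   # light energy ~ aperture area x time
print(zoom_ratio, relative_exposure)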
Image Quality
Measuring the quality of an image quantitatively can be done by forming a histogram and examining the spread of pixel values contained in the image. The value measured using this method is the color distribution, where a good digital image will show a clear difference in the distribution of colors between the main object and the background. In addition, digital image quality can also be measured using the light intensity parameter, namely the amount of incoming light energy at the time the object was recorded. This is apparent from the degree of brightness of the image. To measure the light intensity of a digital image, the image is converted into a single channel, usually by changing the color to gray levels, which is referred to as a grayscale image. Another way is by measuring the spatial resolution of the image, that is, the number of pixels making up the digital image. A larger number of pixels in the digital image shows the condition of the real object in more detail.
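The measurements described in this subsection can be sketched in a few lines of Python (assuming NumPy is available); this is only an illustration of the histogram, brightness, and spatial-resolution measures, not the implementation used in this study.

import numpy as np

def image_quality_measures(rgb):
    # rgb: H x W x 3 uint8 array. Returns the gray-level histogram, the mean
    # brightness (a light-intensity proxy), and the spatial resolution in pixels.
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    histogram, _ = np.histogram(gray, bins=256, range=(0, 255))
    return histogram, gray.mean(), rgb.shape[0] * rgb.shape[1]

# toy example on a random image
hist, brightness, n_pixels = image_quality_measures(
    np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8))
print(brightness, n_pixels)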
Image Zooming
Efforts to increase the size of digital images have been made by researchers, such as the research conducted by Morse and Schwartzwald [7], Yao et al. [3,4], and Sajjad et al. [6]. Morse and Schwartzwald perform digital zoom using an interpolation method, applying level sets to restore pixel values to their original condition. This method has only been tested at low magnification, namely two to four times zooming, and therefore it does not meet the need to enlarge remote objects at distances of hundreds to thousands of meters. Digital image magnification for long distances up to hundreds of meters was attempted by Yao et al. [3,4]. This is done by arranging the optical devices in hardware. However, the system displays blurred and trembling output. Another effort, as performed by Sajjad et al. [6], is only capable of a mechanism with small magnification levels, i.e. four times magnification, and has shortcomings for full-color image enlargement.
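For illustration, a plain bilinear-interpolation enlargement, the kind of digital magnification these works build on, can be written as follows; this is a simplified sketch, not a reproduction of the cited authors' methods.

import numpy as np

def bilinear_zoom(gray, factor):
    # Enlarge a 2-D grayscale image by the given factor using bilinear interpolation.
    h, w = gray.shape
    new_h, new_w = int(h * factor), int(w * factor)
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    g = gray.astype(float)
    top = g[y0][:, x0] * (1 - wx) + g[y0][:, x1] * wx
    bot = g[y1][:, x0] * (1 - wx) + g[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

print(bilinear_zoom(np.arange(16, dtype=float).reshape(4, 4), 4).shape)  # (16, 16)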
Framework Design
The framework designed in this activity is shown in Figure 1 and consists of two stages, as follows: The first stage is the data acquisition phase, which begins with preparing the platform for producing digital images, mainly taken from a distance of hundreds to thousands of meters. The results of the data acquisition then shape the characteristics of the developed remote monitoring system by comparison with digital images taken from a short distance. The second stage is the main part of the study, namely the processing and enhancement of digital images taken from a distance using a variety of digital image processing algorithms for the purpose of enlarging the size of the image, so that objects acquired from hundreds to thousands of meters away from the observation point can be displayed properly. The first step is to determine the distribution of the colors using the histogram, to distinguish the main object from the background. This process is followed by sharpening the main object by increasing the contrast of the digital image. This is followed by an interpolation process to enlarge the dimensions of the main object and a filtering process to clarify the edges of the main object; a simplified sketch of this second stage is given below. The result of the developed framework is depicted in Figure 1. Figure 1. Developed framework for remote monitoring system.
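A minimal sketch of the second stage follows: a contrast stretch, then a Sobel filter to emphasize the edges of the main object. It is a simplified stand-in for the framework's sharpening and edge-detection steps, not the final algorithms.

import numpy as np

def enhance(gray):
    # gray: 2-D array. Contrast stretch, then Sobel gradients for edge emphasis.
    g = gray.astype(float)
    g = (g - g.min()) / max(g.max() - g.min(), 1e-9) * 255.0   # contrast stretch
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    padded = np.pad(g, 1, mode="edge")
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return g, np.hypot(gx, gy)   # enhanced image and edge-magnitude map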
Experiment & Discussion
A few experiments have been conducted on partial components of the framework proposed above. The intention is to reveal any obstacles that keep the framework from being realized. By employing the top half of the framework to capture distant objects and comparing the results with close-range counterparts, some facts can be disclosed, as illustrated in Figure 2. The experimental results depicted in Figure 2 show the difference in the presentation of an object captured from long range and from the common distance of a pocket camera. The differences include the histogram and frequency domain presentations, in which some components are missing from the picture obtained from a distance. These facts disclose why a distant object loses its detail compared to its short-range counterpart.
Conclusion
The framework to build a remote monitoring system has been developed in this study. It consists of two major stages, namely the lens system and image processing. The first stage is to acquire a digital image captured from real objects using three well-known parameters, i.e. focus, aperture, and shutter speed, which are produced by integrating the focal length of the lens system with the size and quality of the image sensor. This stage is done in an image capturing device that is equipped with a lens system. The captured image is then supplied to the image processing stage, which applies a set of algorithms including focus sharpening, noise removal using digital filters, detection of object boundaries based on edge detection, and image magnification based on interpolation. The algorithms also include histogram equalization and detection of spatial resolution in order to support the optimal focus obtained from a distant object. The framework produced by this study becomes a promising preliminary result toward a further developed remote monitoring system based on visual data processing. | 2018-12-29T09:02:26.563Z | 2016-12-01T00:00:00.000 | {
"year": 2017,
"sha1": "fd0a50389f2bd8cb355dc406de65ae9bb111dba0",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/190/1/012042/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a9ef8695cddeee16f9ad6bdccc0e4338e1c259a0",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Geography",
"Environmental Science"
]
} |
259125751 | pes2o/s2orc | v3-fos-license | Circadian Variation of Peripheral Blood Cells in Horses Maintained in Different Environmental and Management Conditions
Simple Summary Circadian rhythms promote mammals' temporal organization as an evolutionary mechanism of adaptation. It is well-known that the physiological status and well-being of domestic animals may be influenced by endogenous and exogenous parameters. The aim of the present study was to evaluate the circadian rhythm of the blood cell count and leukocyte subpopulations in horses maintained in different housing conditions during the four seasons. All hematological parameters and leukocyte cells showed a different trend influenced by housing conditions and seasons. All hematological parameters showed a daily rhythmicity during spring in horses housed in a loose box and paddock. Lymphocytes and neutrophils showed a daily rhythm in horses housed in loose boxes during spring and summer and in paddocks during winter. Abstract The aim of our study was to analyze the circadian rhythm of the hematological profile of horses housed in a loose box and paddock during the different seasons (spring, summer, autumn, and winter). Blood samples were collected every 4 h for 48 consecutive hours. Red blood cells (RBCs), hemoglobin (HGB), hematocrit (HCT), white blood cells (WBCs), platelets (PLTs), and leukocyte subpopulations (neutrophils, basophils, eosinophils, lymphocytes, and monocytes) were analyzed, and, at the same time, environmental conditions were recorded. A statistically significant effect of housing conditions (p < 0.0001) was observed on all hematological values except for WBC during winter and for neutrophils (p < 0.0001) during spring and autumn. A statistically significant effect of season (p < 0.0001) was found for RBC, HCT, and PLT and for all leukocyte cells (p < 0.0001) except for basophils. The single Cosinor method revealed a daily rhythm of hematological parameters during spring in both groups, and a daily rhythm for lymphocytes and neutrophils was observed during spring and summer in horses kept in a loose box and during winter in horses housed in a paddock. Our results revealed that the response of the immune system is regulated by circadian physiology. Knowledge of the periodic temporal structure of mammals should be considered when evaluating animals' adaptation to temporizations imposed by the environment.
Introduction
The physiological and psychological ability of the animal to cope with environmental changes is a crucial requirement for the maintenance of homeostasis and animal health and welfare. The inability to cope with environmental pressures can lead individuals to experience stress and potential negative impacts on their health and well-being [1]. Climatic variation, caused by seasonal changes, represents a stressor that may affect animals' homeostasis [2]. The annual cycle of changing day length (natural photoperiod) provides a reliable environmental cue to determine the time of year. This temporal information is used in domestic animals to guarantee environmental adaptations over the seasons in order to anticipate environmental stressors and, accordingly, ensure the maintenance of biological functions promoting reproduction and survival [3,4]. Environmental factors are known to act as synchronizers of biological rhythms in mammals, and, in particular, circadian rhythms promote mammals' temporal organization as an evolutionary mechanism of adaptation [5,6]. This physiological capability of animals, promoted by the circadian system, is an integral component of homeostasis that interacts with the immune system to guarantee survival [7,8]. The mammalian circadian timing system is composed of many individual clocks, and these are synchronized by a central pacemaker, which coordinates their physiology and behavior following a daily cycle by synchronizing the cell with its environment [5]. Circadian clocks are hierarchically organized, with the central pacemaker located in the suprachiasmatic nucleus (SCN) of the brain and subordinate clocks in almost all peripheral tissues which generate the rhythmic phenomena, including the behavior of physiological activities and hematological and hematochemical parameters [8,9]. The circadian rhythm of the hemato-immune system seems to be synchronized by two clocks: the first is exogenous, based on environmental immune stimuli; and the second is endogenous, located in the SCN, promoting seasonal-change reactions against environmental changes by modulating the dynamic of blood components in horses [10][11][12]. Knowledge of the hematological profile is an important tool to obtain information about the health status of domestic animals that can be used as an index to monitor physiological and pathological conditions in horses, as it reflects specific changes in the animal's body system [11,13,14]. The hematological profile, including the number of red blood cells, together with the direct erythrocyte parameters (hemoglobin and hematocrit), blood coagulation, and immunity parameters, represents an important tool in the determination of physiological changes occurring in animals [11,15]. Thus, the number of circulating leukocytes and the percentage of each leukocyte type show daily rhythmic variations in peripheral blood in humans and mice [16][17][18]. It is well established that immune function is often affected by seasonal changes in domestic animals and regulated by circadian rhythms to set the time of multiple processes controlling the immunological surveillance and response to infection [17][18][19][20]. However, few studies have been carried out concerning the circadian rhythms of hematological parameters and, in particular, concerning leukocyte activity possibly influenced by seasonal changes in horses [21][22][23][24].
The objective of this study was to evaluate the circadian rhythm of the hematological profile and the leukocyte count modifications during the different seasons in horses housed in a loose box and paddock in order to check how seasonal changes and different housing conditions may influence the health status and physiological profile of domestic horses under natural environmental conditions.
Animals
This study was conducted according to the European Directive 2010/63/EU and current Italian legislation regarding the protection of animals used for scientific purposes. The protocol of this study was reviewed and approved in accordance with the standards recommended by the Ethical Committee of the University of Messina (06/2022). All horses were from the same private horse-training center in Sicily (Italy; Latitude 38°7′ N; Longitude 13°22′ E), under a Mediterranean climate, following a weekly show-jumping training program with two days of rest. The training was suspended during the experimental period. A total of 10 clinically healthy Italian Saddle Horses (4 non-pregnant, non-lactating mares and 6 geldings), aged between 10 and 15 years and with a mean body weight of 445 ± 30 kg, were enrolled in this study with the owner's consent. Animals were divided into two equal groups maintained under a natural photoperiod. One group, including 2 mares and 3 geldings, was housed in individual loose boxes (3.50 × 3.50 m) in the same stable, without blankets. All individual loose boxes were equipped with a window of 1.50 × 1.50 m, usually manually opened during the day, closed at night during cold periods, and constantly open during warm weather; a grid placed in the front wall of the loose boxes allowed horses to interact. The other group, including 2 mares and 3 geldings, was housed in individual paddocks (500 m²/horse, with trees to shelter from the sun and rain, on a rocky sand-based soil) without blankets. Before the experimental period, each subject underwent a clinical examination (measurement of body temperature, heart rate, respiratory rate, fecal consistency, appetence, and hematological and chemical profile) to monitor its health status and exclude animals with injuries. Each horse was free from internal and external parasites (regularly treated every three months), regularly subjected to Coggins tests (once a year), and vaccinated against influenza and tetanus. The annual vaccinations were performed three months prior to the experimental period, and the last one was postponed until the end of the experiment [25,26]. All horses were fed three times a day (6:30, 12:00, and 19:00). All animals received the same diet based on good-quality hay and maintenance concentrates given individually. Water was available ad libitum in both groups. During the experimental period, thermal and hygrometric records and ventilation were monitored in both groups by using a multiparameter probe (Testo 400). Based on the formula adapted from Thom (1959), the temperature-humidity index (THI) was evaluated: THI = (0.8 × ambient temperature) + {[(relative humidity/100) × (ambient temperature − 14.4)] + 46.4} [27]. Blood samples were collected via a catheter (FEP G14; 13.5 cm) introduced into one of the jugular veins and secured in place with a suture and bandage. An extension tube was attached to the catheter to facilitate blood sampling into vacuum tubes containing ethylenediaminetetraacetic acid (EDTA); immediately, a blood smear was performed from the EDTA tubes. After air-drying, the slides were stained using the May-Grünwald stain, which consists of successively applying two neutral stains, the May-Grünwald mixture (1902) derived from the Romanowsky mixture (1891) and the Giemsa mixture (1904). On preparations fixed by rapid drying, we highlighted, in particular, the basic or acidic character of the cytoplasm and the granulations of the leukocytes.
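As a worked example of the THI formula quoted above (with illustrative numbers only, not measurements from the study):

def thi(ambient_temp_c, relative_humidity_pct):
    # Temperature-humidity index as given in the text (adapted from Thom, 1959)
    return (0.8 * ambient_temp_c) + ((relative_humidity_pct / 100.0)
                                     * (ambient_temp_c - 14.4)) + 46.4

print(round(thi(25.0, 60.0), 1))   # 72.8 for 25 degrees C and 60% relative humidity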
The microscopic analysis of blood films was performed by using an optical microscope (Nikon Eclipse e200, Nikon Instruments Europe BV, Amsterdam, The Netherlands) at 1000× magnification, with oil. Leukocyte identification and counting were performed on all samples, using a manual 100-cell differential count to identify neutrophils, basophils, eosinophils, lymphocytes, and monocytes on each blood film. All blood samples were refrigerated at 4 °C and analyzed for complete blood count within 2 h. Hematological parameters (red blood cells (RBCs), hemoglobin (HGB), hematocrit (HCT), white blood cells (WBCs), and platelets (PLTs)) were evaluated by using an automated hematology analyzer (HeCo Vet C, SEAC, Florence, Italy).
Statistical Analysis
Before the start of the experimental protocol, inter-subject variabilities were computed as the standard deviations of the means. For each hematological parameter, the standard deviations of the means for each of the 2 days across the seven subjects were applied as the measure of inter-subject variability, excluding endogenous influence [28]. Data were normally distributed (p > 0.05, Kolmogorov-Smirnov test) and reported as mean ± standard deviation (SD). An unpaired Student's t-test was applied to investigate possible statistical differences of environmental conditions between the loose box and paddock groups during the experimental period. A General Linear Model (GLM) was applied to the hematological values (RBC, HGB, HCT, WBC, and PLT) and leukocyte cells (neutrophils, basophils, eosinophils, lymphocytes, and monocytes) to evaluate the effect of day, time of day, housing conditions, and seasons. The periodic phenomenon was analytically evaluated by the application of a trigonometric statistical model to each obtained value at each time-series measurement in order to assess the main rhythmic parameters (mesor, amplitude, acrophase, and robustness) by means of the single Cosinor procedure [29]. A Factorial ANOVA was applied to rhythmic parameters to establish the effect of day, time of day, housing conditions, and seasons. Bonferroni's test was applied for post hoc comparison. A p-value < 0.05 was considered to be statistically significant. Data were analyzed using the statistical software STATISTICA 7 (StatSoft Inc., Tulsa, OK, USA).
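To illustrate the single-cosinor step described above, the sketch below fits a fixed-period (24 h) cosine model to a time series by least squares and reports the mesor, amplitude, and acrophase. This is a hypothetical Python example, not the authors' code or the STATISTICA routine; the sampling times and values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def cosinor(t_hours, mesor, amplitude, acrophase_h):
    """Single-cosinor model with a fixed 24 h period."""
    return mesor + amplitude * np.cos(2.0 * np.pi * (t_hours - acrophase_h) / 24.0)

# Invented example series: sampling times (h) and a rhythmic variable.
t = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0])
y = np.array([7.1, 6.8, 7.5, 8.2, 8.0, 7.4])

params, _ = curve_fit(cosinor, t, y, p0=[y.mean(), np.ptp(y) / 2.0, 12.0])
mesor, amplitude, acrophase = params
print(f"mesor={mesor:.2f}, amplitude={amplitude:.2f}, acrophase={acrophase % 24.0:.1f} h")
```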
Results
During the experimental period, environmental conditions were monitored and the temperature-humidity index was calculated by following the normal seasonal pattern for the Mediterranean area, as shown in Table 1. The mean ambient temperature, THI, and relative humidity did not show statistical differences between groups during the four seasons. The ventilation parameter was statistically higher in horses housed in the paddock than in horses housed in the loose box (p < 0.001). All data followed the physiological range of hematological parameters in horses [30]. In the absence of inter-subject variability of the investigated parameters, the application of GLM to the recorded values showed a statistically significant effect of housing conditions (p < 0.0001) on all hematological values during winter, except for WBC (p = 0.63), and a statistically significant effect of season (p < 0.0001) was found for all studied parameters. No statistically significant difference was observed between the time of day and between two days of monitoring for hematological parameters. A statistically significant influence of housing condition (p < 0.0001) was found for neutrophils, and a statistically significant effect of season (p < 0.0001) was observed for all leukocyte cells, except for basophils (p = 0.29). Figures 1 and 2 show the results of Bonferroni's post hoc comparison. No statistically significant effect for leukocyte cells was observed between the time of day and two days of monitoring. A different daily rhythmic behavior was observed for the studied parameters, as shown in Table 2. A daily rhythm was observed for all hematological parameters during the spring season in the loose-box group and during the winter in the paddock group. Lymphocytes and neutrophils showed a daily rhythm in loose-box horses during the spring and summer and in the paddock group during the winter period. No daily rhythm for hematological values was found during other seasons in both groups. The Factorial ANOVA on hematological parameters and leukocyte population showed no statistical differences between the time of day, two days of monitoring, housing condition, and season on Mesor, amplitude, acrophase, and robustness. Table 1. Mean values ± SD of ambient temperature, relative humidity, ventilation, and temperaturehumidity index (THI) expressed in their conventional unit, together with their statistical significances recorded during each season for box and paddock groups. Lowercase letters indicate the statistical differences between groups.
Discussion
Seasonal variations influence animals' physiological response and hematological profile [31,32]. A very similar ambient temperature value between summer and autumn and between spring and winter may be observed, given that the measurements were taken at the beginning of each season (solstices and equinoxes). Since the summer solstice was quite a windy period (Table 1), it is possible for the temperature to remain mild, considering the temperature fluctuations between day and night. Moreover, the horses housed in the loose box were not in direct contact with the sun, and the horses maintained in paddocks usually resided in the ventilated shade during the hottest hours.
Hematological parameters varied between horses housed in a loose box vs. horses housed in a paddock. The periodic changes observed may be the result of different factors, such as the influx of some younger formed elements, the distribution between the circulating and the marginal cell compartments, and the distribution between different tissues and organs, which may themselves be rhythmic [33]. Animals accustomed to the Mediterranean climate adapt their thermoregulation mechanisms accordingly, so group differences may be due to conditions that were excessively cold or hot relative to the temperature range to which the horses were normally exposed in this area. For horses housed in a paddock, the RBC value was significantly higher in the summer than in the spring, in accordance with Satué et al. [10]. As for the RBC values, HCT was significantly higher in the summer period than during other seasons in horses housed in the loose box, and it was significantly higher in the summer than the winter in the paddock group, confirming the close relation between the HCT value and RBC count, in contrast to Mirzadeh et al.'s (2010) findings [34]. These variations were associated with metabolic acclimation during environmental changes. The higher temperatures occurring during summer may explain our results. Higher temperatures activate thermoregulatory mechanisms in mammals, with a decrease in body fluids as an adaptive response to heat stress. During summer, the increase in the HCT value parallels that of the RBC count, demonstrating a real increase in both hematological parameters [10]. HGB values showed a statistically significant decrease during winter compared to other seasons in horses housed in a loose box, possibly associated with the increase in energy and protein requirements during cold weather. Platelet values increased significantly during the autumn period compared to other seasons in both groups. PLTs are important components of the thrombotic process, and the seasonal variability of stroke and other vascular events could reflect the seasonal variability of platelets, together with a more proinflammatory blood environment in humans [35]. Horses kept in paddocks are naturally and constantly exposed to many thermal stressors, such as direct solar radiation, ambient temperature, high humidity, and rainfall during the day, and they must alter their behavior and physiology in order to restore homeostasis [31]. Our results showed a statistically significant seasonal effect on the WBC count and its subpopulations. The circannual cycle in cell-mediated immunity has been described in humans and canine species. In particular, circannual variation has been described in the relative number of circulating B and T lymphocytes [33]. The main causes of seasonal variations in hematology may be attributed to climatic changes or day length. All the environmental temperatures recorded were within the thermoneutral zone for horses; the influence of season on WBC could probably be due to the different photoperiod [36]. A statistically significant increase in lymphocytes during summer compared to other seasons was observed in both groups. These results were probably determined by different environmental changes and influenced by possible subclinical infections. Significant increases in neutrophil counts were observed in spring compared to other seasons for horses housed in a loose box, and counts were higher in summer than in winter for the paddock group.
Eosinophil values were higher during summer compared to other seasons in the paddock, where external triggering agents for dermatitis, allergies, or parasites are less controlled than in the loose box, in accordance with previous studies [37]. The housing system and microclimatic conditions affect the physiological status and different blood parameters in the animal body and have important effects on animals' circadian rhythms [38]. The environment in which horses are kept influences their ability to keep their thermal status constant, which is related to their thermal characteristics and regulatory physiological mechanisms [39,40]. Based on the present results, daily rhythms were found for hematological parameters (RBC, HGB, HCT, WBC, and PLT) during the spring season in horses housed in a loose box and in a paddock, with a nocturnal acrophase, as previously demonstrated by other studies [4,6]. Spring is considered to be a milder season compared to the others, in which the ambient temperature more closely reflects the horse's thermoneutral zone in both housing conditions [5]. Leukocyte subpopulations displayed different rhythmic behaviors. Lymphocytes and neutrophils showed a circadian diurnal rhythm during spring and summer for horses housed in a loose box, as previously demonstrated by Shina et al. (2019), but in contrast with other findings in which circulating leukocytes showed a nocturnal rhythm in bovines, mice, and humans; these differences may be attributed to species, different blood sampling periods, or different management conditions [2,23,41]. No daily rhythm was observed for monocytes, basophils, and eosinophils during the four seasons in either the loose box or the paddock. Daily changes in blood leukocyte counts have been attributed to a rhythmic cell distribution between the peripheral tissues and circulating blood compartments and to a rhythmic influx of new cells [2]. Although all leukocyte populations contribute to the circadian rhythm, our results showed that lymphocytes and neutrophils were the main contributors to that oscillation. Lymphocytes' circadian rhythm showed a diurnal acrophase during early morning in spring that was delayed to the afternoon during summer for horses housed in a loose box. A similar behavior was observed for the neutrophil population, which expressed a diurnal acrophase during the early morning in the spring season and a diurnal rhythmicity with an acrophase during the afternoon in the summer, with a high robustness of rhythm. This rhythmic distribution reflects immune system physiology during the active phase of the horse, when antigen entry is most probable and an important energy expenditure is required [13]. Therefore, lymphocytes migrate from blood to lymphoid tissues when antigen entry is most likely to occur. Accordingly, during the resting period, the number of leukocyte cells reaches minimal values [3,23]. During the early morning, the autonomic nervous system and the neuroendocrine system have been shown to modulate leukocyte physiology, supporting the concept that circadian timing is an important aspect of hypothalamic-immune communication in humans [42]. A daily rhythm of lymphocytes and neutrophils was observed for horses housed in a paddock during the winter season, confirming that seasons and different environmental parameters may influence the hematological rhythm in horses.
While excessive heat causes the disruption of the rhythm, a cold environment stabilizes it but does not affect the total count, as the cells are already balanced, as in our case.
Conclusions
In light of our results, we can conclude that hematological parameters show a different circadian rhythmic behavior in horses housed in a loose box and horses housed in a paddock during the four seasons.
This finding contributes to the knowledge about the impact of management conditions on the physiological status of horses. During the four seasons, a different response of the immune system is regulated by the circadian physiology, influencing the horse's well-being. This study offers chronobiological support to understanding the adaptation of horses to the temporizations imposed by the environment. The knowledge of the periodic temporal structure of horses makes it possible to understand its functions, being very useful in management, prevention, and diagnosis, as well as allowing us to detect the consequences that environmental changes may have on temporal organization.
Informed Consent Statement: Informed consent was obtained from all animal owners involved in the study.
Data Availability Statement:
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-06-11T05:14:33.022Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "9413b9c50752678e05b6f55da6d5e4011d756ca9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ani13111865",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9413b9c50752678e05b6f55da6d5e4011d756ca9",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
7028035 | pes2o/s2orc | v3-fos-license | Palisaded Encapsulated Neuroma of the Tongue Clinically Mimicking a Pyogenic Granuloma: A Case Report and Review of Literature.
Palisaded encapsulated (solitary circumscribed) neuromas (PENs) are relatively common intraoral neurogenic tumors, which occur most frequently on the hard palate. Herein, we describe the clinicopathological characteristics of a palisaded encapsulated neuroma of the tongue. This tumor was an exophytic sessile mass measuring 0.3 × 0.4 cm with rubbery consistency on the anterior one-third of the dorsum of the tongue. The tumor was excised under the impression of a pyogenic granuloma (PG). No recurrence was reported at 12 months postoperatively. Histopathological examination showed a well-circumscribed mass composed of interlacing fascicles of spindle cells. The cells were S-100 positive. The nuclei, showing parallel orientation within the fascicles, were wavy and pointed and showed no sign of mitotic activity. Giemsa staining revealed no mast cells within the stroma.
INTRODUCTION
In 1972, Reed and colleagues described a distinctive neural tumor as PEN. Palisaded encapsulated neuromas clinically manifest as solitary, firm, non-pigmented, dome-shaped nodules on the face of adult patients [1]. Palisaded encapsulated neuroma is known as a benign tumor of the facial skin, and is rarely found in the oral mucosa [2]. Microscopically, the tumors are characterized by moderately cellular, fascicular proliferation of spindle cells that show some areas of parallel nuclei [3]. A bundle of nerves interposed between the Schwann cells typically aggregates in palisades and is identified by S-100 protein immunohistochemical stain [4]. An alternate designation i.e. solitary circumscribed neuroma (SCN) was proposed by Fletcher in 1989 [5]. Irrespective of the nomenclature, PEN/SCNs are considered as reactive hyperplastic processes. In 2010, Koutlas and Scheithauer considered PEN/SCNs as relatively common true neuromas of the skin or mucosa [6]. As a peripheral nerve sheath tumor, PEN accounts for only 0.04 to 0.05% of oral biopsy specimens. Other peripheral nerve sheath tumors are neurofibroma, schwannoma (neurilemmoma), mucosal neuroma associated with multiple endocrine neoplasia III, nerve sheath myxoma and granular cell tumor [7]. In the mouth, PEN is mostly found on the hard palate and maxillary labial mucosa as a small, superficial and usually painless nodule. The lesion is frequently diagnosed between the 5 th and 7 th decades of life, with equal sex predilection. The cause is uncertain; although trauma is presumed to play an etiological role [8]. A preferred treatment for PEN is conservative local surgical excision [8]; although gross total resection has been recently claimed to be the treatment of choice [6].
CASE REPORT
A 48-year-old man was referred to the Oral Medicine Department of Babol Dental School, complaining of a tongue mass persisting for one year. The lesion was painless, but sensitive to hot food and drink. His medical and family history was unremarkable. Physical examination revealed an exophytic sessile mass measuring 0.3× 0.4 cm with rubbery consistency on the anterior one-third of the dorsal surface of the tongue (Fig. 1). Clinically, the overlying mucosa was depapillated and had increased vascularity. Under the impression of a PG, excision of the mass was done and no recurrence was reported at 12 months postoperatively. Histopathological sections showed an encapsulated mass within the connective tissue, composed of interlacing fascicles of spindle cells that were consistent with Schwann cells. The cells showed a positive immunohistochemical reaction to S-100 protein (Fig. 2). The nuclei, showing a parallel orientation within the fascicles, were characteristically wavy and pointed, with no significant pleomorphism or mitotic activity. There was scant fibrous stroma among these nests. The overlying epithelium was atrophic and no rete ridges were seen (Figs. 3 and 4). Giemsa special stain revealed no mast cells within the stroma, ruling out the differential diagnosis of neurofibroma. The microscopic examination of the mucosal neuroma shows nerve bundles in various sizes surrounded by normal connective tissue, which are not usually seen in PEN [11]. A traumatic neuroma is not a true tumor, yet it develops as a proliferation of neural tissue that is caused by injury to a peripheral nerve. Traumatic neuromas are usually associated with pain, ranging from pain on palpation to a constant severe pain [12]. Substantial histomorphological differences exist between PEN/SCN and traumatic neuroma. These include the presence of perineural cells surrounding individual microfascicles, the greater abundance of interstitial collagen, mucoid matrix and myelin components, and the more orderly parallel arrangement of axons in traumatic neuroma [6].
DISCUSSION
Several microscopic criteria have been introduced to differentiate PEN from schwannoma. Antoni A, organized spindle cells in palisaded whorls, and Antoni B, haphazardly distributed neoplastic cells, are two common patterns which are often found during histopathological examination of schwannomas. Other microscopic criteria include Verocay bodies and the more definite palisading in the nuclei than that in PEN [13].
Contrary to the latter, it is extremely difficult to microscopically differentiate neurofibroma from PEN, especially when an incisional biopsy has been performed. The absence of a marked fibrous capsule and the irregular arrangement of the neoplastic cells are the main differential clues seen in neurofibroma as compared with PEN. The significant presence of mast cells, usually observed among tumoral cells of the neurofibroma, is also detectable using histochemical or immunohistochemical staining methods [8]. As Regezi and colleagues stated, PEN/SCN may be misdiagnosed clinically once identified somewhere in the mouth other than the palate [11]. This may be an obvious clinical impression since intraoral PEN/SCNs are mostly found on the hard palate [3,8]. Conversely, the tongue involvement comprises less than 8% of PEN/SCN cases [6]. As for this case, the PEN resembled a PG on the dorsal surface of the tongue (a common site for neurilemmoma and neurofibroma, but not for PEN) [14]. An erythematous lesion in a less commonly affected site rarely happens to be a PEN. Besides, the tongue is a potential site for PG. Regezi et al. have reported that PG is most commonly seen on the attached gingivae, tongue, lower lip and buccal mucosa [11]. The age of patient may be an important clinical parameter when the list of differential diagnoses of a lesion is formulated [15]. Our patient was 48 years old, which was close to the recently reported average age for PG patients (52 years). Pyogenic granuloma usually occurs in patients older than 39 years, with equal gender distribution [16]. Soft tissue enlargements of the oral cavity often present a diagnostic challenge because a diverse group of pathological processes can produce such lesions. Pyogenic granuloma is among the most common entities responsible for causing soft tissue enlargements [17]. Pyogenic granuloma can manifest as a painless smooth or lobulated mass with a surface that bleeds quite easily. Because of their high level of vascularity, young PGs are red, whereas older lesions are more collagenized and appear pink or normal colored [18]. Clinically, oral PG occurs as an exophytic lesion manifesting as small, erythematous papule on a pedunculated or sometimes sessile base [19]. Pyogenic granuloma arises in response to various stimuli such as chronic low-grade irritation, traumatic injury and hormonal factors [20]. However, the effect of female hormones on oral PG was questioned by Bhaskar and Jacoway since they found lesions both in males and females with no sex predilection [21].
CONCLUSION
Palisaded encapsulated neuromas may be misdiagnosed clinically once they appear somewhere in the oral cavity other than the palate. A PEN arising on the dorsum of the tongue may mimic a PG with similar clinical morphology. Even the patient's age may be misleading. On the other hand, the dorsal surface of the tongue is a common site for neurilemmomas and neurofibromas. Peripheral nerve sheath tumors must therefore be included in the list of differential diagnoses for a PG-like lesion on the tongue. | 2017-04-09T09:02:43.822Z | 2015-07-01T00:00:00.000 | {
"year": 2015,
"sha1": "12904efca02cdf6d64069dc330f42c64f94e5a50",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5ee887039fb738124b668c8e41837142905628a6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119245631 | pes2o/s2orc | v3-fos-license | The domain of stability of unnatural parity states of three unit charges
We investigate the domain of masses for which a state with total orbital momentum L=1 and unnatural parity $P=+1$ exists for the Coulomb systems $(m_1^+,m_2^-,m_3^-)$.
I. INTRODUCTION
There is an abundant literature on Coulomb systems, in particular, ions made of three unit charges, $(m_1^\pm, m_2^\mp, m_3^\mp)$. For the ground state with total angular momentum and parity $L^P = 0^+$, some particular configurations have been studied in great detail, and the map of the stability domain has been drawn when the masses $m_i$ are varied. For a review, and references, see, e.g., [1].
The study of this $0^+$ ground state indicates a number of interesting features. There is a variety of situations: deep binding as for $\mathrm{H}_2^+$, instability as for $(p, \bar{p}, e^-)$, and states at the edge of binding, such as $\mathrm{Ps}^-$ $(e^+, e^-, e^-)$ or $\mathrm{H}^-$ $(p, e^-, e^-)$. The stability is reinforced when two particles become identical ($m_2 = m_3$). Schematically, the system has two components: a $(m_1^+, m_2^-)$ cluster with a loosely attached third particle, or a $(m_1^+, m_3^-)$ cluster weakly linked to the second particle; when the thresholds become identical, the two components have maximal interference to build a compound system. For a fragile system such as $\mathrm{H}^-$, the Hartree-Fock picture fails. No variational wave function $f(r_{12})\,f(r_{13})$ gives an expectation value lower than the threshold energy. One needs either an explicit $r_{23}$-dependence in the wave function, or a breaking of factorization, as in the famous Chandrasekhar wave function [2], Eq. (1.1), in which the particle identity is first broken and then restored by a counterterm, a strategy sometimes named "unrestricted Hartree-Fock" [3]. In natural units, the $\mathrm{H}(1s) + e^-$ threshold for $\mathrm{H}^-$ is at $-0.5$, the above wave function gives an energy of about $-0.513$, and the elaborate calculations about $-0.528$. So it is not surprising that there is only one bound state below this lowest threshold, as rigorously demonstrated by Hill [4]. However, attention was later paid to the positive-parity state where each electron is in a p-wave, coupled to form a total angular momentum $L = 1$. Such a state with $L^P = 1^+$ cannot decay into $\mathrm{H}(1s) + e^-$ as long as radiative corrections and spin effects are neglected. Its effective threshold consists of $\mathrm{H}(2p) + e^-$, at $-0.125$. Variational calculations, first at Oslo [5] and confirmed elsewhere (see, e.g., [6,7]), give an energy of about $-0.12535$, just below the threshold. Grosse and Pittner [8] have shown that this is the only unnatural-parity state of $\mathrm{H}^-$ with this type of stability. But higher configurations of unnatural parity exist, for instance by attaching a positron to a two-electron system of unnatural parity [9].
FIG. 1: Representation of a $(m_1^\pm, m_2^\mp, m_3^\mp)$ Coulomb system from its normalized inverse masses. Some special cases are also shown.
Starting from $\mathrm{H}^-$ and exchanging the masses leads to $\mathrm{H}_2^+$, whose ground state is deeply bound and excitation spectrum very rich and, not surprisingly, includes a stable $1^+$ state. However, Mills [10] investigated the case of equal masses, i.e., $\mathrm{Ps}^-$, and found the $1^+$ state to be unbound. More precisely, he studied the $1^+$ state for any configuration $(M^+, m^-, m^-)$ and got stability everywhere except for the range of mass ratios given in (1.2). This estimate was confirmed by Bhatia and Drachman [11]. It is the aim of the present note to check the domain of stability for $(M^+, m^-, m^-)$ found in [10,11] (without trying to challenge their accuracy), and to extend the study to the case of unequal masses for the negative charges. It is organized as follows. In Sec. II, we summarize the rigorous results on the stability domain. The variational method and the results are presented in Sec. III, and Sec. IV is devoted to some conclusions.
II. PROPERTIES OF THE STABILITY DOMAIN
We consider the Hamiltonian and focus on the lowest $L^P = 1^+$ state. By scaling, each charge can be set to unity, and one can impose that the inverse masses $\alpha_i = 1/m_i$ obey $\alpha_1 + \alpha_2 + \alpha_3 = 1$. Thus, as done for the $0^+$ ground state [1,12], each physical system with $m_i > 0$ can be represented as a point inside an equilateral triangle, and the inverse masses $\alpha_i$ are proportional to the distances to the sides. See Fig. 1.
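The triangular representation can be made concrete with a short numerical sketch. The snippet below (illustrative Python, not taken from the paper) normalizes the inverse masses so that $\alpha_1 + \alpha_2 + \alpha_3 = 1$ and uses them as barycentric weights of the vertices of an equilateral triangle of unit height, so that each $\alpha_i$ equals the distance from the point to the side opposite vertex $A_i$; the placement of the vertices and the example mass ratios are arbitrary choices for illustration.

```python
import numpy as np

def triangle_point(m1, m2, m3):
    """Normalize inverse masses (alpha_1 + alpha_2 + alpha_3 = 1) and map them,
    as barycentric weights, to a point of an equilateral triangle of unit height,
    so that each alpha_i equals the distance to the side opposite vertex A_i."""
    alpha = np.array([1.0 / m1, 1.0 / m2, 1.0 / m3])
    alpha /= alpha.sum()
    side = 2.0 / np.sqrt(3.0)                  # side length giving unit height
    vertices = np.array([[0.0, 1.0],           # A1 (alpha_1 = 1)
                         [-side / 2.0, 0.0],   # A2 (alpha_2 = 1)
                         [+side / 2.0, 0.0]])  # A3 (alpha_3 = 1)
    return alpha, alpha @ vertices

# Illustrative mass ratios (electron mass = 1): H^- ~ (1836.15, 1, 1); Ps^- = (1, 1, 1).
for masses in [(1836.15, 1.0, 1.0), (1.0, 1.0, 1.0)]:
    alpha, xy = triangle_point(*masses)
    print(masses, alpha.round(4), xy.round(4))
```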
In this representation, the frontier between stability and instability has the following properties:
• It is symmetric with respect to the vertical axis, due to $2 \leftrightarrow 3$ exchange.
• In each of the half-triangles limited by the symmetry axis, the instability domain is convex.
• The instability domain including, say, the point $A_2$ with inverse masses $(0, 1, 0)$ is star-shaped with respect to $A_2$. (A semi-straight line starting from $A_2$ enters at most once a stability domain until it reaches the symmetry axis.)
• If an energy $E = E_{\mathrm{th}}(1^+)$,
The proof is the same as for the $0^+$ ground state [1,12]. However, in this latter case, it was also demonstrated that any configuration with $m_2 = m_3$ has at most one bound state [4], so that the entire stability domain was connected, and clustered near the symmetry axis. This is not the case for $L^P = 1^+$, as no stable state exists for equal masses [10,11]. Hence, the stability domain includes two separate islands, one around $\mathrm{H}_2^+$, and another around $\mathrm{H}^-$.
A. Trial wave function
For scalar states, a generalization of (1.1) is employed, where $\mathbf{x} = \mathbf{r}_{23}, \ldots$ join the particles and $x = |\mathbf{x}|, \ldots$ are the relative distances. After integration over the trivial angular variables, one is left with integrating over $x\,y\,z\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$, submitted to the triangular inequality. The matrix elements can be deduced from the generic function (3.2) and its derivatives. For a $L^P = 1^+$ state with projection $L_3 = j$, a trial function is a superposition of such terms with range parameters $(u, v, w), \ldots$; one can even assume the $(u, v, \ldots)$ follow a kind of progression, in order to simplify the minimization over the non-linear parameters, without a significant loss of accuracy. This is similar to the strategy used, e.g., by Kamimura et al. when handling the expansions over Gaussians [13].
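Assuming, as suggested by the generalization of (1.1), trial functions that are exponentials of the three relative distances, the matrix elements reduce to integrals of the type $F(a,b,c) = \int x\,y\,z\,e^{-ax-by-cz}\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$ over the domain allowed by the triangle inequality. The snippet below is a rough numerical sketch of such an integral in Python; it is not the authors' code, and both the exponential form and the truncation cutoff are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import tplquad

def generic_integral(a, b, c, cutoff=40.0):
    """Numerically integrate x*y*z*exp(-a*x - b*y - c*z) over the domain allowed
    by the triangle inequality, |x - y| <= z <= x + y, with the infinite x and y
    ranges truncated at `cutoff` (adequate for range parameters of order one)."""
    integrand = lambda z, y, x: x * y * z * np.exp(-a * x - b * y - c * z)
    value, _err = tplquad(
        integrand,
        0.0, cutoff,                        # x range
        lambda x: 0.0, lambda x: cutoff,    # y range
        lambda x, y: abs(x - y),            # lower z limit (triangle inequality)
        lambda x, y: x + y,                 # upper z limit
    )
    return value

print(generic_integral(1.0, 1.0, 1.0))
```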
B. $\mathrm{H}^-$-like states
For $\mathrm{H}^-$, the energy is found to be $E \simeq -0.1253$, as in the literature. For other symmetric configurations $(M^+, m^-, m^-)$, we obtain stability for $M/m \gtrsim 2.39$, in good agreement with Mills's estimate (1.2). The width of the stability domain is very narrow. For an infinitely massive proton, we find stability only for $m_3/m_2 \lesssim 1.006$ (if $m_3 \ge m_2$). This is very close to the estimate based on (2.3), as expected for such a small width. For instance, the $1^+$ state does not exist for the very exotic $(p, \pi^-, \mu^-)$ system.
D. Summary
The domain of stability contains at least the areas displayed in Fig. 2. The frontier is almost straight, so the convexity of the instability domain is weakly pronounced. The spike around $\mathrm{H}^-$ is very small and extremely narrow, and shows up only after magnification. For comparison, the domain of stability of the ground state $0^+$ is also shown.
IV. OUTLOOK
The case of unnatural-parity states of three unit-charge particles displays an interesting, and rare, example of a discontinuous stability domain, where stability disappears and shows up again, when some constituent masses are varied continuously and monotonously. This means that the transition from a molecular type of binding ($\mathrm{H}_2^+$) to a halo-type of binding ($\mathrm{H}^-$) involves some more fragile intermediate dynamics of stability. | 2009-07-15T14:02:21.000Z | 2009-07-15T00:00:00.000 | {
"year": 2009,
"sha1": "eaa586da00b54cd1d9dbf27ada2201a5ed81dca4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0907.2592",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "eaa586da00b54cd1d9dbf27ada2201a5ed81dca4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
270961005 | pes2o/s2orc | v3-fos-license | Evidence-based practice in traditional persian medicine (TPM): a stakeholder and social network analysis
Background The utilization of complementary and alternative medicine (CAM) is experiencing a global surge, accompanied by the adoption of national CAM policies in numerous countries. Traditional Persian medicine (TPM) is highly used as CAM in Iran, and the ongoing scientific evaluation of its interventions and the implementation of evidence-based medicine (EBM) encounters various barriers. Therefore, comprehending the characteristics and interactions of stakeholders is pivotal in advancing EBM within TPM policies. In this study, we utilized both classical stakeholder analysis and social network analysis to identify key stakeholders and potential communication patterns, thereby promoting EBM in TPM policy-making. Methods A cross-sectional nationwide stakeholder analysis was conducted in 2023 using snowball sampling. The interviews were carried out using a customized version of the six building blocks of health. Data were collected through semi-structured interviews. Stakeholders were assessed based on five factors (power, interest, influence, position, and competency). The connections and structure of the network were analyzed using degree, betweenness, closeness centrality, and modularity index to detect clusters of smaller networks. Results Among twenty-three identified stakeholders, the Ministry of Health and Medical Education (MOHME) and the Public were the most powerful and influential. The Iranian Academy of Medical Sciences was the most competent stakeholder. Social network analysis revealed a low density of connections among stakeholders. Pharmaceutical companies were identified as key connectors in the network, while the Public, supreme governmental bodies, and guilds acted as gatekeepers or brokers. The MOHME and Maraji were found to be high-ranking stakeholders based on four different centrality measures. Conclusion This study identifies powerful stakeholders in the network and emphasizes the need to engage uninterested yet significant stakeholders. Recommendations include improving competence through education, strengthening international relations, and fostering stronger relationships. Engaging key connectors and gatekeepers is essential for bridging gaps in the network. Supplementary Information The online version contains supplementary material available at 10.1186/s12906-024-04564-5.
Introduction
The utilization of Traditional and Complementary Medicine (T&CM), a subset of Complementary and Alternative Medicine (CAM), has seen a remarkable global surge.In 2018, 88% of WHO member countries reported its use.Over the years, there has been a significant increase in the number of countries with national T&CM policies, rising from 25 in 1999 to 98 in 2018.Moreover, since 2005, there has been a significant improvement in the regulation and registration of herbal medicines in the WHO Eastern Mediterranean Region.Nine of its 21 nations now have national T&CM policies, and 12 have laws and regulations related to T&CM [1].
In the Middle Eastern region, Iran boasts a rich history of traditional medicine, with Traditional Persian medicine (TPM) standing out as a prominent example.Given the increasing global trend of CAM adoption, it is crucial to examine the current state of TPM in Iran, especially in light of the country's efforts to integrate evidence-based medicine (EBM) into its healthcare system.Although TPM has a long-established history in Iran, the scientific evaluation of its interventions is a recent development, requiring further progress.
Evidence-based medicine (EBM) involves using the current best evidence to make decisions about individual patient care [2].Although EBM has been considered for implementation in patient care in many medical science areas in Iran [3], barriers to its implementation exist.These barriers encompass inadequate facilities [4], research-related issues, and problems with coordination and motivation [5].
Integrating EBM into every aspect of the health system is crucial, but its integration into CAM practice is particularly significant due to the high demand and the relatively low level of scientific evaluation of CAM interventions.Recognizing this need, changes have been implemented in the healthcare system.In 2007, TPM was officially integrated into the Iranian healthcare system, with major medical universities establishing Schools of Persian Medicine and allied divisions.This allowed doctors of medicine and pharmacy to specialize in Persian medicine and traditional pharmacy at the Ph.D. level [6].
The widespread use of TPM in Iran, alongside conventional medicine [7-10], has led to the participation of various groups in delivering traditional treatments. These include unlicensed therapists and natural remedy shops, drawn by financial advantages. Legally approved service providers encompass TPM clinics, specialized private offices, and traditional and herbal drug manufacturers [6]. Despite the existing controversies, CAM interventions are frequently utilized in Iran, with up to 75% of outpatients [11] and over 50% of cancer patients relying on at least one CAM method [12]. Medicinal plants, TPM, hydrotherapy, and music therapy are common treatment methods [7,13]. However, studies indicate that fewer than 12% of CAM users receive guidance from approved therapists, raising concerns about the potential misuse of CAM [6,7]. Non-EBM approaches to CAM can also be detrimental to health [14], and the unclear identification, contamination, and adulteration of medicinal plants are among the disadvantages of such approaches [15]. Examples of harm resulting from non-EBM approaches include hepatic failure and hospital admissions due to borage (Echium amoenum) misuse [16].
Over the past two decades, the number of published articles on EBM in Iran has increased.However, few studies have examined the existing policies and policy processes.Therefore, the need to improve the approach to EBM in medical sciences remains necessary, particularly with the growing use of TPM.Considering that decision-making processes are influenced by stakeholder characteristics, understanding these stakeholders, their features, and how they interact with and influence each other is crucial for facilitating the use of EBM [17].
This study aimed to identify the key stakeholders in TPM, their characteristics, and their relationships in Iran, where TPM was the most common CAM practice.Using social network analysis (SNA), we visualized and analyzed stakeholder connections and communication patterns to identify key stakeholders and their roles in the policy process, as well as potential communication gaps.
Design
Our cross-sectional study was conducted in Iran between 2022 and 2023, and it involved a stakeholder analysis conducted in three phases.Firstly, a list of potential stakeholders was generated by reviewing available literature and information on the Internet.Secondly, semistructured interviews of 45 min to 1 h were conducted with 24 individuals (see Supplementary Table 1), using a framework modified from the six building blocks framework [18] to categorize different aspects of TPM practice at various levels (see Supplementary Table 2).Finally, interviewees were asked to assess the stakeholders based on five factors (power, interest, influence, position, and competency), and the stakeholders with the highest strengthening international relations, and fostering stronger relationships.Engaging key connectors and gatekeepers is essential for bridging gaps in the network.Keywords Stakeholder, Evidence-based practice, Traditional persian medicine, Social network analysis scores were included in the network analysis.A panel of experts who were previously interviewed defined the connections between key stakeholders.
Sampling and data collection
We recruited informed stakeholders of TPM using convenience and snowball sampling methods.Interviews continued until data saturation was achieved.The interviews began with a description of the study objectives, after which experts were asked, "Who are the main actors involved in the decision-making and policy-making processes of organizing evidence-based traditional Persian medicine in Iran?".Using the modified tool, the interviewees also were questioned about various aspects of TPM practice at different levels (see Supplementary Table 2).Subsequently, the interviews were transcribed, and a list of individuals and organizations mentioned was generated.Similar or subordinate individuals and organizations were consolidated by an expert panel.Different subordinates of the Ministry of Health and Medical Education (MOHME) were analyzed separately due to their critical roles in Iran's health system.The final stakeholders were evaluated by an expert panel consisting of specialists in TPM, conventional medicine physicians, and health policymakers.Stakeholders were assessed based on power, interest, influence, position, and competency.Each stakeholder received a score from 0 to 10, with 0 indicating the lowest level of the feature and 10 indicating the highest.A score of 0 represented complete opposition to the policy, 5 indicated neutrality, and 10 signified complete support for the policy.Position scores were recalculated to account for highly opposing stakeholders (|Position-5|×2), and those with the highest scores were included in the SNA.The expert panel subsequently rated the connections of the final stakeholders using a four-level scale: no connection, weak connection, moderate connection, and strong connection.
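As a small illustration of the scoring step above, the snippet below re-expresses the 0-10 position score as distance from neutrality, |position − 5| × 2, and ranks stakeholders by a simple sum of the five factors. This is hypothetical Python with invented example scores; the paper does not state the exact aggregation rule used to select the highest-scoring stakeholders, so the summation here is only one plausible choice.

```python
def rescaled_position(position_0_to_10: float) -> float:
    """Re-express the 0-10 position score as distance from neutrality (5),
    so that strong supporters and strong opponents both score high."""
    return abs(position_0_to_10 - 5.0) * 2.0

# Invented example scores: (power, interest, influence, position, competency).
scores = {"MOHME": (9, 8, 7, 9, 5), "Public": (9, 3, 5, 5, 2), "IAMS": (6, 8, 9, 9, 9)}

# One plausible aggregation (the paper does not spell out the exact rule):
totals = {name: s[0] + s[1] + s[2] + rescaled_position(s[3]) + s[4]
          for name, s in scores.items()}
print(sorted(totals.items(), key=lambda item: item[1], reverse=True))
```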
Definitions and rationale
The five factors were selected after an extensive literature review on stakeholder analysis, as they are vital for identifying and comprehending the roles and behavior of stakeholders in healthcare policy-making and implementation [17,19,20].Competency was also taken into consideration, prompted by the research team's suggestion and recurring issues raised during interviews, in response to the potential challenge of insufficient competence within our specific context.So overall, In this study, we assessed stakeholders using five factors, which we defined as follows.The broad definition of power is the ability of stakeholders to influence policy or program implementation.We break down this broad definition into its dimensions using the following criteria: power, influence, and competency.This study defines "power" as the amount of resources a stakeholder possesses and their capacity to mobilize them.Influence is defined as a stakeholder's ability to exert power over other stakeholders.Competency refers to the technical and professional skills and knowledge required for a stakeholder to fulfill their role."Position" relates to a stakeholder's stance on a specific policy, which can range from active support to active opposition, with varying degrees of neutrality in between [21]."Interest" represents a stakeholder's motivation for the policy [19].We define a stakeholder connection as an actual channel for transmitting messages from one stakeholder to another.
Rigor and trustworthiness
To enhance the rigor and trustworthiness of our findings, we followed the Guba and Lincoln approach, which involved considering criteria such as credibility, confirmability, dependability, transferability, and authenticity [22,23].To address these criteria, we implemented various strategies throughout the study, including peer debriefing to provide an external check on the research process for credibility, involving multiple authors and gathering a list of existing stakeholders from multiple data sources (Searching relevant documents, laws, regulations), using theoretical framework to gather a comprehensive list of stakeholders (modified tool) of the for dependability, utilizing maximal variation sampling for transferability, member checking by contributors for confirmability, and incorporating citations from nearly all individuals for authenticity.
Social network analysis
SNA was used to analyze connections among stakeholders in this study [24].The fundamental concept underlying SNA is that network connections and their structure are significant and can be independently analyzed, irrespective of individual stakeholder characteristics [25].Centrality measures, which reveal the structural importance of a stakeholder within a network, are the most frequently employed metrics in SNA.We employed degree, betweenness, and closeness centrality measures (see Supplementary Table 3).Furthermore, we utilized the modularity index to partition the network into clusters of smaller networks based on their structural attributes [26].
We used Stata software (Version 17, Stata Corporation, College Station, Texas, USA) and Microsoft Excel (2016) for statistical analysis and creating figures.Additionally, we employed the networkx package in Python [27] and Gephi for SNA.
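The centrality and clustering computations described above can be sketched with networkx, which the authors also report using. The toy edge list, connection strengths, and stakeholder names below are invented for illustration and are not the study's actual network; the snippet computes degree, betweenness, and closeness centrality and detects clusters with a greedy modularity algorithm (one possible choice, since the paper does not specify which modularity method was applied).

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Invented toy network: (stakeholder, stakeholder, connection strength 1-3).
edges = [("MOHME", "IAMS", 3), ("MOHME", "UMS", 3), ("IAMS", "WHO", 2),
         ("Public", "PhC", 2), ("PhC", "SAMT", 1), ("Public", "MOHME", 1)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Betweenness/closeness interpret weights as path lengths, so convert the
# connection strength into a distance (stronger tie -> shorter distance).
for _, _, d in G.edges(data=True):
    d["distance"] = 1.0 / d["weight"]

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G, weight="distance")
closeness = nx.closeness_centrality(G, distance="distance")

communities = greedy_modularity_communities(G, weight="weight")
q = modularity(G, communities, weight="weight")

print("degree:", degree)
print("betweenness:", betweenness)
print("closeness:", closeness)
print("clusters:", [sorted(c) for c in communities], "modularity:", round(q, 3))
```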
Results
We identified 74 stakeholders through interviews and ranked them based on five parameters.The final list of the most important stakeholders consists of 23 organizations or groups involved in decision-making and policy development related to evidence-based TPM in Iran (refer to Table 1).
The highest power reported was the Ministry of Health and Medical Education (MOHME) and the public.The supreme governing bodies (SGB), judicial and enforcement system (Judicial), pharmaceutical companies (PhC), and insurance companies were considered middle to high-power stakeholders in promoting evidence-based practice (EBP) of TPM.
Our analysis found SGB and the Iranian Academy of Medical Sciences (IAMS) to have the strongest influence on other stakeholders, with IAMS being the only highly influential and interested stakeholder competent in promoting EBP of TPM. WHO and IAMS had the highest competency, followed by PhC, SGB, and the Supreme Council of the Cultural Revolution (SCCR), reported as medium to highly competent organizations for promoting EBP of TPM (Table 1). Roughly 20% of stakeholders were opponents to promoting EBP in TPM. The public, insurance companies, and non-health-related governmental organizations such as SGB, SCCR, parliament, the Vice Presidency for Science and Technology (VPST), the judiciary, the Ministry of Industry, Mine and Trade (SAMT), the Islamic Republic of Iran Broadcasting (IRIB), and other international centers active in complementary medicine were neutral to this policy. The lowest interest was found in the judiciary, SAMT, parliament, and the public. Most stakeholders within the MOHME subdivision were supportive of promoting EBP in TPM, except for the Vice-Chancellery for Health. The universities of medical sciences had the highest power and influence, while the overall competency of the MOHME subdivision was medium or lower (Table 1).
Table 1. Power, position, interest, influence, and competency of stakeholders
The highest mean of influence (4.78) and power (4.4) and the lowest mean of interest (4.03) were found in stakeholders with neutral positions.Stakeholders who supported the policy of EBM in TPM had the highest interest (8.33) (Table 2).MOHME was a highly interested and powerful policy supporter, and IAMS was the most influential supporter (Fig. 1).
Three clusters of stakeholders were identified in a network analysis using a modularity algorithm, each represented by distinct colors in the network figures (see Supplementary Table 4).The modularity score for this network was 0.217.The stakeholders in the first cluster (modularity class = 0) were supporters, with a score of 7.25 for the policy.The second (modularity class = 1) and third (modularity class = 2) clusters were characterized as neutral, with a score of 5.29, and opponents, with a score of 3.44, respectively (see Table 4; Fig. 2).
Discussion
This study aimed to identify stakeholders involved in implementing the policy of EBM in TPM and to evaluate their power, influence, interest, position, and competency. The analysis revealed that AIMS and MOHME are influential and powerful supporting players who should be given priority for engagement and communication. SNA showed that the network density among stakeholders was relatively low.
Classical stakeholder analysis
Two primary supporting stakeholders were AIMS and MOHME, both of which held significant influence and power, respectively.The classical stakeholder analysis has revealed that stakeholders falling into the "player" category (influential, powerful, and interested) should receive the highest priority for engagement and communication, as they possess the significant potential to impact the project's success or decisions.In our study, we did not identify any key opposing stakeholders, which may be attributed to the disorganized and individually oriented nature of the opposing network, a lack of transparency among opponents, or the inability to access information related to the opposing network.
Our study found that many powerful and influential stakeholders, such as the SGB, Judiciary, Parliament, the public, and insurance providers, were either neutral or uninterested in supporting the policy of EBM in TPM.Other published stakeholder analyses related to health policies in Iran also identify the Parliament, MOHME, the Judiciary, insurance providers, and UMS as powerful or influential stakeholders [28][29][30][31].The unwillingness of powerful and influential stakeholders to participate is a major challenge, which has also been demonstrated in other stakeholder analyses in Iran [28,29].These mentioned stakeholders, often referred to as 'Context-setters' (influential or powerful but not interested), should be informed and engaged, as their decisions and actions can significantly impact the project or decision.We found that the public is a powerful stakeholder; this result contrasts with some other stakeholder analyses, which considered the public as a less powerful, passive stakeholder [28,29].We believe that the public's power stems from their financial resources, driven by the widespread need for and usage of TPM services [7,11,13].
Our findings suggest that the MOHME held significant influence, displayed interest, and provided support for this policy.MOHME possesses both structural and human resources related to TPM.These resources include institutions such as the Office of Traditional Persian Medicine (OTPM), several Traditional Persian Medicine Research Centers (TPMRC), and Traditional Persian Medicine Specialists (TPMS), many of whom have medical backgrounds.Given the presence of these resources within MOHME, it appears that this stakeholder may not have taken substantial action and has not prioritized this policy.Consider that the highest level of competency among supporters was identified in WHO-GCTIM as an international stakeholder and AIMS as a national organization.Our study also revealed another existing challenge: key supporters like MOHME, IRMC, CP, TPMS, and TPMRC exhibit medium to low levels of competency.Competency levels for MOHME's subordinates were even lower.
Social network analysis
The network's density was relatively low when compared to other stakeholder networks in health-related policy in Iran [28].This suggests that critical stakeholders in this policy were not well connected.The density of the second cluster (the neutral cluster) appears to be the highest in the network.Therefore, the density in the clusters of supporters and opponents is lower than the overall estimated value.Another challenge in the network of supporters for this policy is the lack of connections and collaboration among key stakeholders in this field.We identified three clusters of stakeholders in this network, which we can categorize as highly influential, uninterested, and neutral stakeholders.The network structure of these stakeholders supports the findings from classical stakeholder analysis.We propose naming these clusters as supporters, neutrals, and opponents based on their positions.In a social network, a structural hole is a gap where two contacts or groups are not directly connected, and brokers can bridge this gap by connecting them [32].We found significant structural holes that could impact collaboration and the policy process [33].It appears that AIMS could bridge the structural gap between the supporter and neutral clusters.There is a lack of a stronger connection between AIMS and IRMC, MOHME and IRMC, AIMS and Parliament, AIMS and S&C, and TPMS and CP, as well as between international stakeholders (GCTM-WHO and CAMIC) and TPMRC.Although studies have shown that increasing connections between nodes (reducing structural holes) reduces flexibility and innovation, reducing structural holes can lead to decreased cooperation [33,34].It seems that trying to establish these connections and relying on AIMS and the public brokerage role can help advance policy.
The SNA reveals that PhC was a node with high betweenness and closeness centrality; these nodes are often referred to as key connectors, bottlenecks, or hubs of the network [35,36].These nodes can control the flow of information and resources within a network.The IRIB also had high closeness but relatively low betweenness centrality, indicating that this stakeholder was essential for local communication within a specific cluster in the network.Other stakeholders, such as Parliament, IRMC, and Quacks, can be considered local communicators within their respective clusters.The Public, SGB, and guilds had relatively high betweenness and relatively low closeness centrality; these nodes are often known as gatekeepers or brokers in a network.These nodes serve as critical connectors between different parts of the network, and their removal could result in network fragmentation [35,36].The Public, being the target population and final user of the service, plays a role in connecting different parts of the network to each other.International stakeholders (CAMIC and GCTM-WHO) were found to be peripheral nodes due to their low centrality measures.CAMIC includes institutions such as the Ministry of Ayush in India, Hamdard Universities, and other Unani medicine organizations.Overall, MOHME and Maraji were identified as high-ranking stakeholders based on four different centrality measures (Supplementary Table 5).
What should we do?
Based on the results of the stakeholder analysis, here are some recommendations for promoting EBP in the field of TPM: Enhancing Competency: A deficiency in competency has been identified in various health-related stakeholders, including MOHME, IRMC, CP, TPMS, and TPMRC.To advance the adoption of EBP within the field of TPM, it is imperative to develop a competency map for each key stakeholder.Furthermore, training interventions and awareness campaigns should be initiated to augment their knowledge, attitudes, and implementation of EBP.Achieving this goal necessitates the cultivation of shared competencies among all healthcare professionals.Specifically, we recommend that MOHME bolster its structural and human resources through this process, thereby fostering the creation of reliable evidence to support TPM practices.
Address managerial competencies: Stakeholders related to managerial competencies, such as leadership, change management, and financial management, require more attention.This attention will impact not only this policy but any program or policy.Therefore, addressing these managerial competencies is essential to promote the EBP of TPM.
Encourage connections and collaboration: The absence of connections and collaboration among key stakeholders in the realm of TPM policies has also been identified as a weakness.Promoting connections and collaboration among these stakeholders, including MOHME, IAMS, CP, IRMC, and international partners, can assist in establishing a more robust network of supporters and advancing EBP.
Persuade powerful and influential stakeholders and advocate for policy with significant stakeholders: The study revealed that numerous powerful and influential stakeholders, as well as key connectors in the network, maintain a neutral position regarding this policy.Therefore, it presents an opportunity for proponents of this policy to persuade these stakeholders, including AIMS, the public, MOHME, insurance, the judiciary, Parliament, SGB, and universities of medical sciences (UMS).
Addressing opposition and threats: The opposition from religious institutions and clergy to this policy poses a significant threat that must be dealt with.Administrative corruption has also been reported as a challenge in Iran [37,38], and PhC may oppose the policy due to their financial interests.Therefore, it is essential to address these threats to promote the EBP of TPM.
Utilize strengths: MOHME has been recognized as a powerful, interested, and supportive stakeholder for this policy.Leveraging MOHME's strengths, including the OTPM, numerous TPMRCs, and TPMS, can significantly contribute to the promotion of EBP within TPM.
Seek Assistance and Form Alliances: Seek assistance from influential and powerful stakeholders, such as AIM and the public, to act as intermediaries in advancing the policy.Additionally, establish relations with GCTM-WHO (a peripheral node) to leverage their expertise and maintain engagement with local communicators.
It seems that MOHME should revise the regulatory policies of IFDA for herbal drugs to encourage the PhC to invest exclusively in scientifically proven effective herbal medicines.We also recommend that MOHME and its allies promote this policy to influential, powerful, neutral, and less interested stakeholders.Another aspect of the policy should center on legally combating quackery through the influence and power of IRMC, the judicial system, and the enforcement system.
Limitations
Accessing the opponents' and quackery network was a complex task, and we were unable to completely grasp their perspective in this study. Therefore, we recommend a more focused investigation of the quackery network. Stakeholder positions may evolve over time, as evidenced by recent developments involving the SCCR's secretary, underscoring the time-dependent nature of cross-sectional stakeholder analysis.
Conclusion
This study reveals the presence of several influential and disinterested stakeholders within the network. The support network presents favorable opportunities as well as certain challenges for policy implementation. To tackle these challenges, various actions can be taken, such as advocating for the policy to uninterested yet significant nodes in the network, improving competence through educational interventions, strengthening international relations, and harnessing existing strengths.
Fig. 1 A - Interest/Influence, B - Power/Position; each consists of four quadrants: players (IAMS, SCCR, MOHME), context setters (SGB, Parliament, SAMT, the public, judiciary), and several stakeholders in the subjects and crowd quadrants
Fig. 2 Visualization of the network of stakeholders: The size of nodes in each figure suggests the stakeholder's five factors (Power, Interest, Influence, Position, Competency); the color of each stakeholder corresponds to the cluster to which it belongs
Table 2. Sum and means of interest, influence, competency, power over position
Table 3. Network parameters of stakeholders of evidence-based practice of traditional Persian medicine
Table 4. The mean value of stakeholder features in different clusters of the network found by the modularity index | 2024-07-05T06:17:17.477Z | 2024-07-03T00:00:00.000 | {
"year": 2024,
"sha1": "0fb060abfb05c284ff38c60ca35f20b69a17c354",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12906-024-04564-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "83bb1d62089fabb1fa23c6ae074ee25c9a3287d6",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18769255 | pes2o/s2orc | v3-fos-license | Liquid-liquid-solid transition in viscoelastic liquids
Liquid-liquid-solid transitions (LLST) are known to occur in confined liquids, exist in supercooled liquids and emerge in liquids driven from equilibrium. Molecular dynamics (MD) simulations claim many successes in forecasting the phenomena. The transitions are also studied in the framework of thermodynamics based methods and minimalistic models. In here, the proposed approach is derived in the framework of continuum and includes spatial and temporal dynamic heterogeneities; the approach is meant to capture the material behavior at small scales. We conjecture that the liquid-like and solid-like behaviors are dissimilar enough for the two to be governed by different constitutive relations. In this way, we gain additional degree of freedom, which is found essential when predicting the transitional phenomena. As a result, we derive the LLST criteria for liquids in equilibrium, during steady flow and at transient conditions. Lastly, we forecast short-lived LLSTs in human blood during cardiac cycle.
Envision a thin film of a molecular liquid exhibiting one or a combination of transitions: the liquid-liquid transition (LLT), where a semi-organized molecular structure is formed (liquid II); the liquid-solid transition (LST), in this case the liquid is transformed into a solid-like material and, lastly, the liquid-liquid-solid transition (LLST) defined as a combination of the two. When controlled, these transitions can be utilized in micro-electro-mechanical systems (MEMS), joint lubrication in biology and in micro-fluidics. Suspensions exhibit similar behaviors and the transitions play a role in paints, inks, cosmetics, pharmaceuticals and food [1]. There are also dense suspensions such as corn starch which exhibit jamming transitions and, in some cases, could allow you to run on their surface without sinking.
The question is: What happens during these transitions? Liquids brought to the transitional regime display collective (temporally and spatially synchronized) motion of molecules and particles. Consequently, relaxation times are much longer than those in bulk [2][3][4], viscosity increases by many orders of magnitude and, when sheared, confined liquids may experience smooth, stick-slip or chaotic responses [5][6][7][8][9]. The collective motion triggers dynamic heterogeneities with a supermolecular length ranging from a few up to ten molecular distances [10][11][12][13]. Heterogeneities of a similar kind are observed in various suspensions [14] and, among them, in blood subjected to a transient flow [15]. We also know that strong stimuli drive viscoelastic liquids to the transitional state. An impact-activated solidification has been observed in dense suspensions [16] and an aligned motion of molecules is detected in liquids subjected to shock [17].
Molecular dynamics simulations provide excellent insight into the transitional phenomena [18][19][20][21]. The simulations are followed by thermodynamics-based models [22,23] and minimalistic models [24][25][26]; the latter two are suitable for the replication of the observed behaviors. Also, the transitions are studied in the framework of non-Newtonian fluid dynamics [27,28]. As we noted earlier, the transitions emerge in over-constrained liquids where molecules lose their ability to move freely and, consequently, are forced to act in a collective manner.
In here, the transitional regime is the regime of our interest. Our assertion is that a continuum-level approach can be useful in forecasting the transitions so long as the approach is brought close enough to the molecular scale. In our case, this is done by accounting for the relevant spatial and temporal fluctuations known as the dynamic heterogeneities [29]. Also, we assume that the substance in the liquid-like and solid-like states is different enough for the behaviors to be governed by independent constitutive equations. By comparing the two we derive criteria for the liquid-solid and liquid-liquid transitions in equilibrium and at steady state. The analysis prepares us for a more challenging task, namely the prediction of the LLSTs during transient flow processes. We illustrate the latter with the example of blood subjected to an idealized cardiac cycle.
Results
Dynamic heterogeneity. Let's consider a viscoelastic liquid, where the monitored particle moves from its initial position $\{X_k\}$ to a less than optimal position $\{x_k\}$, Fig. 1. The ''optimal'' position refers to the thermodynamically most favorable (mean) position $\{z_k\}$. From this point of view, a collective motion of a few atoms in monatomic liquids (transit) can be considered a deviation from the expected trajectory [30]. In liquids, the current position of a molecule often diverges from the optimal position, but the trajectory must be physically admissible, i.e. it must be consistent with the constraints imposed by the conservation laws. In the framework of continuum, we expect to find the material particle (particle and its surroundings) in an acceptable position determined by the equations of motion $\partial\sigma_{ij}/\partial x_j = \rho\,\dot{u}_i$, where the components of the current stress are $\sigma_{ij}$, the particle acceleration is $\dot{u}_i = \partial u_i/\partial t$ and, as usual, mass density is $\rho$. Any deviation from the thermodynamically optimal trajectory [$\{X_k \to x_k\}$ versus $\{X_k \to z_k\}$] triggers perturbations in stress. Consider stress tractions plotted on a surface normal to the direction of flow $\{n_k\}$. The tractions in the optimal and actual positions may not be the same, $\sigma^z_{ik} n_k \neq \sigma_{ik} n_k$, and the difference is responsible for stress fluctuations. Stress in the reference (optimal) position is $\sigma^z_{ik}$ and in the actual position $\{x_k\}$ is $\sigma_{ik}$. Using the divergence theorem, the stress perturbations centered about $\sigma^z_{ik}$ become $\delta\sigma^f_{ij}$ (equation 1). The material length $l_c$ is understood as the dominant length and it captures the relevant spatial stretch of the stress gradient. Next, the stress gradient is replaced by the inertia term taken from the equations of motion, while the volume $V_0$ is reduced to a material point ($V_0 \to 0$). As a result, the perturbations are simplified to $\delta\sigma^f_{ij} = l_c\,\sigma_{ik,k}\,n_j = \rho\,l_c\,\ddot{c}_{ij}$. We assume that the tensor $\delta\sigma^f_{ij}$ is symmetric and, then, we have $\dot{c}_{ij} = (u_i n_j + u_j n_i)/2$. However, the symmetry restriction does not need to be enforced. The stress fluctuations are incorporated into the constitutive description of the liquid and the solid. In the transitional regime, the liquid is prone to shear and may experience changes in mass density. This behavior is described by the constitutive relation (equation 2), where the strain rate is $\dot{e}_{ij} = (\partial u_i/\partial x_j + \partial u_j/\partial x_i)/2$. In this relation, the elastic matrix is $C_{ijkl}$ and $\eta$ is viscosity. Viscosity is determined by averages over a spectrum of relaxation times [31]. Usually, the viscous term in (2) is based on the stress deviator alone. In this case, shear stress and pressure are viscous quantities [32,33]. Also, the elastic matrix includes contributions of bulk and shear moduli. The last term in (2) captures the contribution of the stress perturbations. The perturbations $\delta\sigma^f_{ij}$ are added to the mean stress denoted as $\sigma_{ij}$. In the constitutive relation, the parameter $R^M_e = u_s l_c/u_k$ resembles the Reynolds number and, for this reason, we call it the material Reynolds number. We introduce the number for reasons discussed later. In here, kinematic viscosity is $u_k = \eta/\rho$ and $u_s$ is sound velocity. When the tensors $\delta\sigma^f_{ij}$ and $\dot{c}_{ij}$ are not symmetric, we expect the nonsymmetry would trigger perturbations in flow. The rate of mechanical work performed by the material is $\sigma_{ij} u_{i,j} = \dot{G}_L + 2\psi_L + R^M_e\,\sigma_{ij} n_j \dot{u}_i/u_s$. The state function $G_L$ and the dissipation potential $\psi_L$ are $G_L = \sigma_{ij} C^{-1}_{ijkl}\sigma_{kl}/2$ and $\psi_L = \sigma_{ij}\sigma_{ij}/(2\eta)$, respectively. We omit the contribution of heat flux.
The flux $R^M_e\,\ddot{c}_{ij}/u_s$ in (2) may become a powerless quantity when the term $\sigma_{ij}\dot{c}_{ij}$ is equal to zero. As suggested in [34], the powerless flux captures the contribution of hidden micro-scale dynamic events.
The liquid is said to be converted to a viscoelastic solid-like material. In the simplest circumstance, the solid follows the Kelvin-Voigt behavior and includes the contribution of the micro-inertia described in (1); the resulting constitutive relation is equation (3). The time span $(t - t_0)$ is taken to include the relevant history of the perturbations. This means that the material retains a short memory of the past history but this memory fades away beyond $(t - t_0)$ [31,35]. In (3), the state function is $F_S = e_{ij} C_{ijkl} e_{kl}/2$ and the dissipation potential becomes $\psi_S = \eta\,\dot{e}_{ij}\dot{e}_{ij}/2$. With the use of normality rules [36], the dissipation potential captures the viscous stress $\eta\,\dot{e}_{ij}$ in (3). The contribution of the dynamic heterogeneity, $(u_{i,j} n_j)\,\rho\,[u_i(t) - u_i(t_0)]\,u_s/R^M_e$, is linked to the relevant change in particle momentum. Under certain conditions the responses produced by the liquid (Eqn. 2) and the solid (Eqn. 3) become indistinguishable. This is what we call the liquid-liquid transitional state (liquid II).
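For orientation, the snippet below integrates the textbook one-dimensional Kelvin-Voigt response, $\sigma = C\,e + \eta\,\dot{e}$, for a prescribed strain ramp. It deliberately omits the micro-inertia/heterogeneity term that the paper adds in equation (3), and the material constants are placeholders, so this is only a sketch of the solid-like baseline behavior, not the authors' full model.

```python
# Sketch: 1-D Kelvin-Voigt stress response sigma = C*e + eta*de/dt
# under a prescribed strain ramp. Constants are illustrative placeholders;
# the paper's additional micro-inertia term in Eq. (3) is NOT included.
import numpy as np

C = 1.0e5      # elastic constant (Pa), placeholder
eta = 1.0e3    # viscosity (Pa*s), placeholder
rate = 1.0e-2  # applied strain rate (1/s)

t = np.linspace(0.0, 10.0, 1001)   # time grid (s)
e = rate * t                       # strain ramp e(t)
de_dt = np.gradient(e, t)          # numerical strain rate
sigma = C * e + eta * de_dt        # Kelvin-Voigt stress

# At t=0 the viscous term dominates; at long times the elastic term does.
print(f"sigma(t=0)    = {sigma[0]:.1f} Pa  (viscous part eta*rate = {eta*rate:.1f} Pa)")
print(f"sigma(t=10 s) = {sigma[-1]:.1f} Pa (elastic part C*e = {C*e[-1]:.1f} Pa)")
```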
LST-near-equilibrium scenario. We place the liquid (2) into a small container. Walls of the container restrict the motion of molecules and, in this manner, contribute to the increase of viscosity. In equilibrium, the fluctuations represent the primary response of the substance and, therefore, we omit the inertia terms in (2) and solve for the spatial perturbation profile, where $\delta u^z$ is the magnitude of the perturbations. In the next step, we determine the temporal contribution $u_t(t)$. It turns out that the expression for $u_t(t)$ is scale independent (regardless whether it is the nano- or mesoscale), but $u_t(t)$ is different in the liquid and the solid (equation 4). In here, the characteristic relaxation time is $t_0 = u_k/u_s^2$, while $u_s = \sqrt{C/\rho}$ is sound velocity and, as before, $u_k = \eta/\rho$ is kinematic viscosity. The elastic constant is reduced to a single parameter $C$. A substance in the liquid and the solid state has very different properties and these properties become comparable only within the LST regime. The two expressions in (4) become identical when the material Reynolds number $R^M_e$ is equal to one. We conclude that the liquid-solid transition emerges when $R^M_e = u_s l_c/u_k = 1$ (equation 5). The magnitude of kinematic viscosity increases as the size of the confinement becomes smaller [7,8,37,38]. As indicated, the transitional length in water, hexadecane, cyclohexane and other substances is in the range of six to ten molecular distances. When knowing sound velocity and viscosity (both measured at small scales), the predicted characteristic length $l_c = u_k/u_s$ is within the range observed in the experiments and predicted in molecular dynamics simulations.
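As a rough numerical check of the criterion $l_c = u_k/u_s$, the snippet below evaluates the predicted transitional length for water using approximate handbook values for sound speed and kinematic viscosity; both values are assumptions of this sketch, and the "confined" viscosity is a purely illustrative multiple of the bulk value, since the text only states that viscosity rises in confinement.

```python
# Rough check of l_c = u_k / u_s for water. Property values are approximate
# handbook numbers; the "confined" viscosity factor is an illustrative assumption.
u_s = 1.48e3        # sound velocity in water (m/s), approximate
u_k_bulk = 1.0e-6   # bulk kinematic viscosity of water (m^2/s), approximate
d_mol = 0.3e-9      # rough molecular diameter of water (m)

for label, u_k in [("bulk", u_k_bulk), ("confined (x3, assumed)", 3 * u_k_bulk)]:
    l_c = u_k / u_s                 # predicted transitional length (m)
    print(f"{label:24s} l_c = {l_c*1e9:.2f} nm "
          f"≈ {l_c/d_mol:.1f} molecular diameters")
```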
LLT-steady flow. We have shown that spatial nano-fluctuations in equilibrium are Gaussian and become harmonic at meso-scale. It is suggested that steady shear makes the fluctuations non-Gaussian [39] and, then, the fluctuations (1) trigger the liquid-liquid transition. We begin by enforcing conservation of mass, $\partial\rho/\partial t + \partial(\rho\upsilon)/\partial x = 0$, and momentum, $\partial\sigma/\partial x = \rho\dot{\upsilon} + \rho\upsilon\,\partial\upsilon/\partial x$, where the Cauchy stress is $\sigma$, velocity is $\upsilon = \partial u/\partial t$ and displacement is denoted as $u$. All the variables are expressed in terms of the moving coordinate system $z = x - Dt$, where the steady velocity is $D$. Consequently, we have $\upsilon = \upsilon(z)$, $\sigma = \sigma(z)$ and strain is $e = \partial u/\partial z$. Stress is derived directly from the conservation laws and is $\sigma(z) = -\rho_0 D\,\upsilon(z)$. In here, velocity is $\upsilon(z) = -D\,\partial u/\partial z$ and $\rho_0$ is the initial mass density. As in the near-equilibrium scenario, we predict the liquid-solid transition by comparing Eqns. 2 and 3; in here, both the relations include the contributions of the dynamic heterogeneities. The transition occurs when the condition in equation (6) is satisfied. Note that at $D = u_s$ the two material numbers are equal ($R^S_e = R^M_e$) and, consequently, the two criteria (5) and (6) become identical. The transitional liquid (liquid II) should exhibit the properties described by (2) and (3). From the solution presented in Methods A, flow patterns exhibited by the liquid II are limited to the forms given in equation (7), where $u_1$ and $u_2$ are constants. From (7) we see that the liquid II cannot be formed at $R^S_e = 1$. In all other situations ($R^S_e \neq 1$), the liquid II must follow the script defined in (7). There are four scenarios: (1) At $R^S_e < 1$, the substance is in its solid state (pastes, wet sands, and other dense suspensions) exhibiting high resistance to flow (high viscosity). As a result, the material Reynolds number is small ($R^M_e < 1$). The solid-liquid conversion is accomplished by applying a standing wave designed according to the protocol (7). The best known phenomenon of this kind is soil liquefaction [40].
(2) One may envision another situation, where a viscoelastic liquid ($R^S_e > 1$) is subjected to the standing wave. In this manner, the liquid is forced to act as if it were a liquid II substance. Such experiments have been conducted [14,41] and show the formation of microstructural patterns. We are interested in predicting the aggregation and disaggregation of red cells in blood not only under the steady state but also during transient flow [42].
(3) Exponential flow of liquid II occurs at $R^S_e > 1$. Strongly driven liquids (under shock or impact) fit the scenario well. Often, it is assumed that a shock wave ($D > u_s$) has a sharp shock transition. In reality, the transition consists of several molecular layers of aligned molecules [15] which (we predict) are organized within a thin membrane. The membrane travels through the material with shock velocity. Impact loading is also known to trigger the liquid-liquid conversions [16]. (4) We predict that the liquid II behavior may emerge in liquids pushed from equilibrium and, then, allowed to relax according to the rules in (7). Supercooled liquids may fit the scenario, where a controllable decrease of temperature leads to an increase of viscosity, thus affecting the substance's relaxation time [21]. We are not aware of any other experimental work done in this area.
LLST-short-lived transitions in blood during cardiac cycle. In simple terms, blood is a liquid tissue consisting of plasma and blood cells. On average, 1 microliter of blood contains about $5\times10^6$ red cells. Thus, in larger vessels (diameter 2 mm or larger), the number of cells is large enough for blood to become a homogeneous viscoelastic liquid. Blood viscosity and elasticity strongly depend on the actual blood composition, flow rate, shape and size of the blood vessels. A steady flow is fastest at the center of the vessel and slowest near the wall. This non-uniformity is linked to the buildup of wall shear stress. A pulsatile cycle produces considerably more complex flow patterns [43,44]. It is observed that vessel segments with low wall shear stress and oscillatory changes in flow direction appear to be at high risk for the development of various diseases, among them atherosclerosis and thrombosis. Atherosclerosis affects the inner lining of an artery and is characterized by plaque deposits that block the flow of blood. Thrombosis is the formation of clots capable of obstructing the flow of blood. Factors that affect blood viscosity [45][46][47][48] are: hematocrit, red cell aggregation and deformation, plasma viscosity, concentration and size of low-density lipoproteins (LDL) and age. Our objective is to forecast liquid-liquid-solid transitions in blood during the cardiac cycle [49]. In our idealized scenario, the vessel is a rigid tube with inner radius $l_e$. The tube is filled with blood and the system stays at rest. The derivations are based on the Lagrange description, where strains are small and times are relatively short ($t \leq t_0$). The tube is large enough for the blood to remain in its liquid state ($R^M_e \geq 2$). Next, the tube is rapidly moved from rest along its axis and kept in motion at constant velocity $u_0$, Fig. 2. Blood slippages along the walls are not allowed. Whole blood in the state (L) is the viscoelastic liquid (2), where the temporal and spatial fluctuations are considered important. The blood in the transitional state (T) exhibits the properties of both the liquid (L) and the solid (S). The solid-like behavior is described in terms of the ''local'' Kelvin-Voigt model, where the stress fluctuations are omitted. The liquid (L), liquid II (T) and solid (S) are glued together, Fig. 2. The boundary of the liquid $r_L$ with respect to the boundary of the solid $r_S$ in the presence of the liquid II ($r_S - r_L$) must be optimal in terms of the rate of work performed by the system ($\sigma_W \cdot u_0$), where the wall shear stress is $\sigma_W = \sigma(l_e,t)$ and $\sigma(r,t)$ is shear stress. Expressions for the particle velocity and stress in each state are presented in Methods B. Blood viscosity and elasticity are determined for whole blood (hematocrit 38%), where $u_s = 4.3\times10^{-3}$ mm/ms and $u_k = 3.25\times10^{-3}$ mm$^2$/ms. In each solution, the radius $l_e$ of the vessel is taken as the significant stretch $l_c$ of the stress gradient. As stated earlier, the vessel is rapidly moved from rest and kept in motion at constant velocity $u_0 = 0.12$ mm/ms. In terms of the cardiac cycle this is the worst-case scenario. The analysis is constructed for relatively large vessels, where the diameter varies between three and six millimeters. These diameters correspond to the material Reynolds number $R^M_e$ in the range of two to four. Rapid departure from rest converts the liquid (L) into the liquid II (T) and the solid-like material (S), Fig. 2.
In all the studied cases, the clock is set to zero when all particles across the tube start sensing the motion, while the particle velocity at the center of the tube is still equal to zero. This setup properly replicates the velocity distribution in the tube at the moment of the blood flow reversal [49]. In the first example ($R^M_e = 2$), at $t = 0$ the entire vessel contains blood in the transitional (T) and solid (S) states, Fig. 2. Gradually, blood is converted back to its liquid form (L) and at $t \approx t_0$ the conversion is complete, where $t_0 = 176$ ms. There is a moment when the transitional liquid (T) disappears and a sharp liquid-solid interface emerges ($r_S = r_L$). At this point, the L-S interface migrates toward the tube walls. Stress tractions along the L-S interface are satisfied but velocity becomes discontinuous, causing slippages. Such slippages have been observed near the vessel walls in a plasma layer of thickness about 45.8 μm [15,49]. In larger vessels ($R^M_e = 2.5, 3, 4$), the layers T and S are smaller and the conversion process is faster (Fig. 3). Indirect support for the LLSTs in blood at transient conditions is offered in ref. 15. The presence of transitional and solid-like blood near the vessel walls is a concerning factor. We should note that an increase of blood viscosity and/or a measurable decrease of elasticity may further aggravate the problem. Often, blood viscosity is considered the unifying indicator of cardiovascular diseases. It seems that the material Reynolds number would be a better predictor of cardiovascular disease risk.
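The material-number bookkeeping in this example is easy to reproduce. The short script below uses the whole-blood values quoted above (sound velocity 4.3e-3 mm/ms, kinematic viscosity 3.25e-3 mm²/ms) to recover the material Reynolds number for 3-6 mm vessels and the characteristic time $t_0 = u_k/u_s^2$; the formulas are taken from the definitions given earlier in the text, and the rest is straightforward arithmetic.

```python
# Reproduce the blood example's numbers from the definitions quoted in the text:
#   R_M_e = u_s * l_e / u_k   (material Reynolds number, with vessel radius l_e)
#   t_0   = u_k / u_s**2      (characteristic relaxation time)
u_s = 4.3e-3    # sound velocity of whole blood, mm/ms (hematocrit 38%)
u_k = 3.25e-3   # kinematic viscosity of whole blood, mm^2/ms

t_0 = u_k / u_s**2
print(f"t_0 ≈ {t_0:.0f} ms")            # ≈ 176 ms, as in the text

for diameter_mm in (3.0, 4.0, 5.0, 6.0):
    l_e = diameter_mm / 2.0             # vessel radius, mm
    R_M_e = u_s * l_e / u_k
    print(f"diameter {diameter_mm:.0f} mm -> R_M_e ≈ {R_M_e:.1f}")
```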
Discussion
There are two aspects of the work worth noticing. First, our continuum-level approach is adapted for small-scale analysis. We accomplish this by incorporating the spatial and temporal stress fluctuations into the material's description. Second, we view the substance either as a liquid-like or a solid-like material. Thus, we diverge from the approaches where the liquid-like, solid-like and transitional behaviors are constructed within a single mathematical framework. In this manner, we gain an additional degree of freedom, which we find necessary when describing the transitional processes. Consequently, we predict LLSTs during active flow processes (steady or transient), where in some circumstances the substance is driven hard. There are several mechanisms by which the transitions occur. Our criterion for the liquid-solid transformation works for liquids in small confinements. We predict microstructural reorganizations in liquids stimulated by standing waves, and we show that standing waves are responsible for triggering liquefaction in dense suspensions. A synchronized motion of molecules (or particles in suspension) is forecasted in liquids subjected to impact and/or shock loading. Lastly, we indicate that the liquid-liquid transitions should occur in liquids pushed away from equilibrium and then allowed to relax according to a controllable scenario.
In the liquid, there are two constants, namely $u^0_L$ and $u^1_L$. Moreover, the velocity gradient in the center of the tube is always equal to zero, $\dot{e}_L(0,t) = 0$, where $\dot{e}_L = \partial u_L/\partial r$. The transitional flow is determined in terms of three constants $u^0_T$, $u^1_T$ and $u^2_T$. Lastly, the solution for the solid is given in (B.3), where $u^0_S$ and $\dot{e}^0_S$ are constants. Boundary conditions for this problem are defined as follows: $u_S(l_e,t) = u_0$; $u_L(l_e,0) = u_0$; $u_L(0,0) = 0$; $u_S(r_S,t) = u_T(r_S,t)$; $\sigma_S(r_S,t) = \sigma_T(r_S,t)$; $u_L(r_L,t) = u_T(r_L,t)$; $\sigma_L(r_L,t) = \sigma_T(r_L,t)$. (B.4) We have seven constants $\{u^0_S, \dot{e}^0_S, u^0_L, u^1_L, u^0_T, u^1_T, u^2_T\}$ and two time-dependent variables $\{r_L(t), r_S(t)\}$. The LLT and LLST boundaries are determined from the criterion of least action: $\partial[\sigma_W(r_L,r_S,t)\cdot u_0]/\partial r_L = 0$ and $\partial[\sigma_W(r_L,r_S,t)\cdot u_0]/\partial r_S = 0$. (B.5) | 2018-04-03T01:55:19.277Z | 2013-02-22T00:00:00.000 | {
"year": 2013,
"sha1": "1ca11b0964bba20c91d44d416ea5d3349104471a",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/srep01323.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1ca11b0964bba20c91d44d416ea5d3349104471a",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
10499817 | pes2o/s2orc | v3-fos-license | Activation of 5-HT2A/C Receptors Counteracts 5-HT1A Regulation of N-Methyl-D-aspartate Receptor Channels in Pyramidal Neurons of Prefrontal Cortex
Abnormal serotonin-glutamate interaction in prefrontal cortex (PFC) is implicated in the pathophysiology of many mental disorders, including schizophrenia and depression. However, the mechanisms by which this interaction occurs remain unclear. Our previous study has shown that activation of 5-HT1A receptors inhibits N-methyl-d-aspartate (NMDA) receptor (NMDAR) currents in PFC pyramidal neurons by disrupting microtubule-based transport of NMDARs. Here we found that activation of 5-HT2A/C receptors significantly attenuated the effect of 5-HT1A on NMDAR currents and microtubule depolymerization. The counteractive effect of 5-HT2A/C on 5-HT1A regulation of synaptic NMDAR response was also observed in PFC pyramidal neurons from intact animals treated with various 5-HT-related drugs. Moreover, 5-HT2A/C stimulation triggered the activation of extracellular signal-regulated kinase (ERK) in dendritic processes. Inhibition of the β-arrestin/Src/dynamin signaling blocked 5-HT2A/C activation of ERK and the counteractive effect of 5-HT2A/C on 5-HT1A regulation of NMDAR currents. Immunocytochemical studies showed that 5-HT2A/C treatment blocked the inhibitory effect of 5-HT1A on surface NR2B clusters on dendrites, which was prevented by cellular knockdown of β-arrestins. Taken together, our study suggests that serotonin, via 5-HT1A and 5-HT2A/C receptor activation, regulates NMDAR functions in PFC neurons in a counteractive manner. 5-HT2A/C, by activating ERK via the β-arrestin-dependent pathway, opposes the 5-HT1A disruption of microtubule stability and NMDAR transport. These findings provide a framework for understanding the complex interactions between serotonin and NMDARs in PFC, which could be important for cognitive and emotional control in which both systems are highly involved.
Here we show that activation of 5-HT 1A and 5-HT 2A/C receptors in PFC pyramidal neurons regulates NMDAR channels in a counteractive manner by converging on the microtubule-based transport of NMDARs that is regulated by ERK. Given the importance of NMDAR and serotonin in mental processes under normal and pathological conditions, the complex regulation of NMDAR function by 5-HT 1A and 5-HT 2A/C receptors may provide a molecular and cellular mechanism underlying the role of serotonin in regulating cognitive and emotional behaviors.
EXPERIMENTAL PROCEDURES
Whole Cell Recordings-Whole cell current recordings of cultured PFC neurons employed standard voltage-clamp techniques, as we described previously (21,22). The external solution for recording NMDAR-mediated current contained (in mM): 127 NaCl, 20 CsCl, 1 CaCl2, 10 HEPES, 5 BaCl2, 12 glucose, 0.001 tetrodotoxin, and 0.02 glycine, pH 7.3-7.4, 300-305 mosM/liter. The internal solution contained (in mM): 180 N-methyl-D-glucamine, 4 MgCl2, 40 HEPES, 0.5 1,2-bis(2-aminophenoxy)ethane-N,N,N′,N′-tetraacetic acid, 12 phosphocreatine, 3 Na2ATP, 0.5 Na2GTP, and 0.1 leupeptin, pH 7.2-7.3, 265-270 mosM/liter. Recordings were obtained with an Axon Instruments (Union City, CA) 200B patch clamp amplifier that was monitored by an IBM PC running pClamp 8 with a DigiData 1320 series interface. Electrode resistances were normally 2-4 MΩ in bath solution. Following seal rupture, series resistance (4-10 MΩ) was compensated (70-90%). Care was taken to monitor the series resistance, and recordings were stopped when a significant increase (>20%) occurred. The whole-cell NMDAR-mediated current was evoked by NMDA (100 μM) application for 2 s every 30 s with neurons held at −60 mV. Drugs were applied with a "sewer pipe" system. The array of drug capillaries was positioned a few hundred micrometers from the cell under recording. Solution changes were controlled by the SF-77B fast-step solution stimulus delivery device (Warner Instruments, Hamden, CT). Data were analyzed with AxoGraph (Axon Instruments) and KaleidaGraph (Albeck Software). ANOVA was performed to compare the differential degrees of current regulation between experimental groups subjected to different drug treatments. Data are expressed as the mean ± S.E.
Electrophysiological Recordings in Slices-To record NMDAR-mediated synaptic transmission, we performed standard whole-cell recordings in layer V PFC pyramidal neurons (21,23). Patch pipettes (5-9 MΩ) were filled with the following internal solution (in mM): 130 cesium methanesulfonate, 10 CsCl, 4 NaCl, 1 MgCl2, 10 HEPES, 5 EGTA, 2.2 QX-314, 12 phosphocreatine, 5 MgATP, 0.5 Na2GTP, 0.1 leupeptin, pH 7.2-7.3, 265-270 mosM/liter. PFC slices (300 μm) were perfused at room temperature (22-24°C) with artificial cerebrospinal fluid bubbled with 95% O2/5% CO2 and containing 6-cyano-7-nitroquinoxaline-2,3-dione (20 μM) and bicuculline (10 μM) to block α-amino-3-hydroxy-5-methyl-4-isoxazoleproprionic acid/kainate receptors and GABAA receptors, respectively. Neurons were observed with a 40× water-immersion lens and illuminated with near-infrared (IR) light, and the image was captured with an IR-sensitive charge-coupled device camera. All recordings were performed using a Multiclamp 700A amplifier. Upon application of negative pressure, the membrane was tightly sealed with high resistance (2-10 GΩ). With additional suction, the membrane was disrupted into the whole-cell configuration. The access resistances ranged from 13 to 18 MΩ with 50-70% compensation. Evoked currents were generated with a pulse from a stimulation isolation unit controlled by an S48 pulse generator (Astro-Med). A bipolar stimulating electrode (Fredrick Haer Company) was positioned ~100 μm from the neuron under recording. Prior to stimulation, neurons (clamped at −70 mV) were depolarized to +60 mV for 3 s to fully eliminate the voltage-dependent Mg2+ block of NMDAR. Age-matched saline controls were run side-by-side with drug-injected animals on each day of experiments. To minimize variations between slices, a stimulus of the same intensity was delivered by the stimulating electrode placed at the same location. Data analyses were performed with the Clampfit software (Axon Instruments).
Animal Treatment-Young male rats (25-28 days old) were administered the drugs intraperitoneally as described in the text. For stereotaxic injection, rats were anesthetized with pentobarbital sodium (50 mg/kg, intraperitoneal) and then mounted in a stereotaxic apparatus (David Kopf Instruments). Fluoxetine (fluox) (2 μl, 0.34 mg/ml, dissolved in saline) was injected unilaterally into the PFC region using a Hamilton syringe (22-gauge needle) at a rate of 0.5 μl/min. The coordinates used for the lateral PFC are 1.0-1.4 mm lateral from midline, 2.2 mm anterior to Bregma, and 3.3 mm dorsal to ventral.
Measurement of Free Tubulin-Free tubulin from PFC cultures was extracted as described previously (21). Cultured PFC neurons (2 × 10^5 cells/cm2, 14 days in vitro) in 3.5-cm dishes were washed twice at 37°C with 1 ml of microtubule-stabilizing buffer containing 0.1 M MES (pH 6.75), 1 mM MgSO4, 2 mM EGTA, 0.1 mM EDTA, and 4 M glycerol. Cultures were then incubated at 37°C for 5 min in 600 μl of soluble tubulin extraction buffer (0.1 M MES (pH 6.75), 1 mM MgSO4, 2 mM EGTA, 0.1 mM EDTA, 4 M glycerol, and 0.1% Triton X-100) with the addition of protease inhibitor mixture tablets (Roche Applied Science). The soluble extract was centrifuged at 37°C for 2 min, and the supernatant was collected. Equal amounts of total protein were analyzed by Western blotting using anti-α-tubulin (Sigma). The intensity of tubulin bands was quantitatively analyzed with Image (National Institutes of Health).
Immunocytochemical Staining-The detection of surface GFP-NR2B (25) was performed as we described before (21,22). Briefly, cultured PFC neurons were treated with various agents after transfection and then fixed in 4% paraformaldehyde for 30 min at room temperature without permeabilization. After incubation in 5% bovine serum albumin to block nonspecific staining, cells were incubated with the anti-GFP antibody (1:100, Chemicon, Temecula, CA) for 1 h at room temperature. After three washes in phosphate-buffered saline, cells were incubated with a rhodamine-conjugated secondary antibody (1:200, Sigma) for 1 h at room temperature. After washing in phosphate-buffered saline, coverslips were mounted on slides with Vectashield mounting media (Vector Laboratories, Burlingame, CA). Fluorescence images were detected using a 60× objective with a cooled charge-coupled device camera mounted on a Nikon microscope.
The surface GFP-NR2B clusters were analyzed with ImageJ software. All specimens were imaged under identical conditions and analyzed with identical parameters. A 50-μm segment of dendrite was selected at an equal distance away from the soma in four to six individual neurons. To define dendritic clusters, a single threshold was selected manually. Signal was counted as a cluster when its intensity was 2- to 3-fold greater than the overall fluorescence on the dendritic shaft. Three to four independent experiments were performed. Quantitative analyses were performed blindly, without knowledge of the experimental conditions.
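The thresholding rule described above (counting a signal as a cluster when its intensity exceeds roughly 2- to 3-fold the dendritic-shaft fluorescence) can be mimicked in a simplified one-dimensional form, as in the sketch below; the intensity profile is synthetic and the 2.5-fold cutoff is just one value inside the stated 2-3x range, so this is only an illustration of the counting logic, not the actual ImageJ workflow.

```python
# Simplified 1-D stand-in for the cluster-counting rule: a "cluster" is a
# contiguous run of pixels brighter than ~2.5x the shaft fluorescence.
# The intensity profile below is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
profile = rng.normal(100.0, 10.0, 500)        # shaft fluorescence along 50 um
for center in (60, 180, 330, 470):            # four synthetic puncta
    profile[center - 3:center + 3] += 220.0

shaft_level = np.median(profile)              # robust estimate of shaft signal
mask = profile > 2.5 * shaft_level            # threshold: 2.5x shaft level

# Count contiguous above-threshold runs (rising edges of the mask).
rises = np.diff(mask.astype(int))
n_clusters = int((rises == 1).sum() + mask[0])
print(f"clusters detected: {n_clusters}")
print(f"fraction of segment above threshold: {mask.mean():.3f}")
```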
Biochemical Measurement of Surface Receptors-After treatment, PFC slices were incubated with artificial cerebrospinal fluid containing 1 mg/ml Sulfo-NHS-LC-Biotin (Pierce) for 20 min on ice. The slices were then rinsed three times in Tris-buffered saline to quench the biotin reaction, followed by homogenization in 300 μl of modified radioimmune precipitation assay buffer (1% Triton X-100, 0.1% SDS, 0.5% deoxycholic acid, 50 mM NaPO4, 150 mM NaCl, 2 mM EDTA, 50 mM NaF, 10 mM sodium pyrophosphate, 1 mM sodium orthovanadate, 1 mM phenylmethylsulfonyl fluoride, and 1 mg/ml leupeptin). The homogenates were centrifuged at 14,000 × g for 15 min at 4°C. 15 μg of homogenate was removed to measure total proteins. For surface protein detection, 150 μg of homogenate was incubated with 100 μl of 50% NeutrAvidin-Agarose (Pierce) for 2 h at 4°C, and bound proteins were resuspended in 25 μl of SDS sample buffer and boiled. Quantitative Western blots were performed on both total and biotinylated (surface) proteins using anti-NR1, anti-NR2B (Upstate Biotechnology, Lake Placid, NY), and anti-GABAAR β2/3 (Chemicon).
Activation of 5-HT 1A and 5-HT 2A/C Receptors Regulates NMDAR-mediated Ionic and Synaptic Currents in a Counteractive Manner-We have previously found that activation of 5-HT 1A receptors reduces NMDAR currents in PFC pyramidal neurons (21). To examine the potential interactions between 5-HT 1A and 5-HT 2A/C receptors on NMDAR functions, we tested the impact of 5-HT 2 activation on 5-HT 1A regulation of NMDAR currents by preincubating PFC cultures with the 5-HT 2A/C agonist α-Me-5HT (20 μM, 10-30 min). Application of NMDA (100 μM) elicited an inward current that was partially desensitized and was completely eliminated by the NMDA receptor antagonist D-aminophosphonovalerate (50 μM). As shown in Fig. 1 (A and B), application of the 5-HT 1A agonist 8-OH-DPAT (20 μM) reversibly reduced NMDAR currents (21.3 ± 0.7%, n = 16). However, in α-Me-5HT-treated neurons, the reduction of NMDAR current by 8-OH-DPAT was substantially attenuated (6.4 ± 0.7%, n = 8). α-Me-5HT alone did not significantly affect NMDAR currents (1 μM: 4.0 ± 0.8%, n = 5; 20 μM: 5.7 ± 1.4%, n = 7). Another 5-HT 2A/C agonist, DOI, also had no effect on NMDAR currents at low doses (0.05 μM: 4.2 ± 1.2%, n = 5; 0.1 μM: 4.0 ± 2.5%, n = 5), which differs from the inhibitory effect of DOI shown before (26). The discrepancy may be due to different experimental procedures in recording NMDA-induced currents. In a previous study (26), microdrops of NMDA (1 mM, every 15 min) were applied to PFC slices, which did not allow the accurate detection of peak NMDA currents because of the slow diffusion of the ligand. Moreover, it was not a pure postsynaptic preparation like dissociated neurons, which could have indirect effects due to changes in the circuit. Our results suggest that 5-HT 2A/C activation alone does not directly affect NMDAR currents but opposes 5-HT 1A regulation of NMDAR currents in PFC pyramidal neurons.
We next examined what would happen when both 5-HT 1A and 5-HT 2A/C receptors are co-activated by serotonin. If both signals oppose each other, then blocking one receptor activation would unmask the effect of the other. Thus, we tested the effect of serotonin on NMDAR-EPSC in PFC slices treated with or without 5-HT 2 antagonists. As shown in Fig. 2, this was indeed the case: 5-HT 2 receptor activation in response to serotonin masks part of the inhibitory action of 5-HT 1A receptors on NMDAR channels.
To assess whether the serotonergic regulation of NMDARs, which we found in vitro, also occurs in vivo, we tested whether endogenous serotonin, via the activation of 5-HT 1A and 5-HT 2A/C receptors, could regulate NMDAR functions in a similar manner in intact animals. First, we examined NMDAR-EPSC in PFC slices from animals intraperitoneally injected with 5-HT 1A or 5-HT 2A/C agonists. As shown in Fig. 3A, the amplitude of NMDAR-EPSC was significantly smaller in animals intraperitoneally injected with the 5-HT 1A agonist 8-OH-DPAT than in saline-injected controls. Moreover, the amplitude of α-amino-3-hydroxy-5-methyl-4-isoxazoleproprionic acid receptor-EPSC was unchanged by 8-OH-DPAT injection (saline: 113 ± 11.8 pA, n = 11; DPAT: 112 ± 9.9 pA, n = 10), suggesting that 5-HT 1A activation in PFC specifically targets postsynaptic NMDARs rather than presynaptic glutamate release.
Next, we examined NMDAR-EPSC in PFC slices from animals intraperitoneally injected with the serotonin re-uptake inhibitor fluoxetine (20 mg/kg) to elevate endogenous 5-HT levels at synapses. As shown in Fig. 3B, the amplitude of NMDAR-EPSC in fluoxetine-injected animals was significantly smaller compared with saline controls. In contrast to PFC neurons, fluoxetine injection failed to affect NMDAR-EPSC in striatal medium spiny neurons (saline: 375 ± 15.9 pA, n = 8; fluox: 351 ± 13.9 pA, n = 10), suggesting the specificity of serotonergic regulation of NMDARs in PFC.
To further test the effect of 5-HT on NMDARs in PFC networks in vivo without affecting other circuits, we also performed stereotaxic injection of fluoxetine into the PFC to elevate local 5-HT levels. We found that the amplitude of NMDAR-EPSC in PFC pyramidal neurons located in the proximity of the injection sites was significantly smaller compared with saline controls (saline: 401 ± 10.1 pA, n = 11; fluox: 162 ± 13.2 pA, n = 11; p < 0.001, ANOVA).
The Opposing Regulation of NMDA Receptors by 5-HT 1A and 5-HT 2 Receptors Depends on Microtubule Stability-Next, we investigated the potential mechanism by which 5-HT 2 receptors counteract the effect of 5-HT 1A on NMDAR currents in PFC neurons. Previously, we have found that 5-HT 1A disrupts NMDA receptor trafficking by destabilizing microtubule integrity (21). Thus, we examined whether microtubule dynamics is the convergent target of 5-HT 1A - and 5-HT 2A -mediated signaling. To test this, we compared the level of free (depolymerized) tubulin in PFC cultures subjected to 8-OH-DPAT treatment in the absence or presence of α-Me-5HT. As shown in Fig. 4 (A and B), application of 8-OH-DPAT (40 μM, 20 min) caused a potent increase in free tubulin (1.7 ± 0.3-fold increase, n = 3, p < 0.001, ANOVA); however, this effect was significantly blocked by α-Me-5HT (20 μM; 0.6 ± 0.2-fold increase, n = 3), indicating that 5-HT 2 activation could increase microtubule stability and prevent 5-HT 1A -induced microtubule depolymerization.
We then tested whether the counteractive effect of 5-HT 2 on 5-HT 1A regulation of NMDAR currents is due to the opposing regulation of microtubule dynamics by 5-HT 1A and 5-HT 2 receptors. As shown in Fig. 4 (C and D), bath application of the microtubule depolymerizer colchicine (30 μM) gradually reduced NMDAR-EPSC in PFC slices (38.0 ± 2.9%, n = 5; also see Ref. 22), mimicking and occluding the 5-HT 1A effect (21). However, this inhibitory effect of colchicine was largely attenuated in the presence of α-Me-5HT (20 μM; 11.2 ± 13.4%, n = 6). These results suggest that activation of 5-HT 1A and 5-HT 2 receptors could regulate NMDAR currents in a counteractive manner by converging on microtubule dynamics.
Activation of 5-HT 2 Receptors Induces ERK Activation in Neuronal Processes-How could 5-HT 2 activation increase microtubule stability? Evidence has shown that activation of ERK stabilizes microtubule integrity (27,28). Moreover, our previous findings have suggested that 5-HT 1A reduces ERK activity, which in turn reduces MAP2 phosphorylation and microtubule stability (21). Thus, we speculate that 5-HT 2 receptors might activate ERK, leading to increased phosphorylation of MAP2 and its association with microtubules. To test this, we measured the activation of ERK1/2 in response to 5-HT 2A/C agonists with an antibody that recognizes activated ERK1/2, which are doubly phosphorylated at Thr-202/Tyr-204 in the activation loop of the kinases (29). As a positive control, we also treated neurons with glutamate, which was reported to activate ERK (30). As shown in Fig. 5A, application of α-Me-5HT (20 μM, 3 min) or glutamate (100 μM, 3 min) increased ERK phosphorylation, as compared with vehicle-treated neurons. Unlike glutamate treatment, the α-Me-5HT-activated ERK did not co-localize with TOPRO3 (a nucleus marker) staining, suggesting that 5-HT 2 -activated ERK is mainly targeted to the cytoplasm rather than the nucleus. Among all experimental groups, total ERK levels remained unchanged (Fig. 5B). These data suggest that 5-HT 2 receptors induce ERK activation, which may oppose the down-regulation of ERK by 5-HT 1A receptors.
The β-Arrestin-mediated Pathway Is Involved in the Counteractive Effects of 5-HT 2 Receptors on 5-HT 1A Regulation of NMDAR Currents-Next, we sought to identify the signaling mechanism underlying 5-HT 2 activation of ERK. Recently, it has been shown that some G protein-coupled receptors form a signaling complex with the multifunctional adaptor and transducer molecules β-arrestins 1/2, which recruit and activate components of the mitogen-activated protein kinase cascades (31,32). To examine the possibility that 5-HT 2 receptors activate ERK through the β-arrestin pathway, we transfected PFC cultures with siRNA against β-arrestin1 or -2 to knock down their expression. As shown in Fig. 6A, the protein level of β-arrestin1 or -2 was selectively reduced by β-arrestin1 siRNA or β-arrestin2 siRNA, respectively, but not by a scrambled siRNA. The α-Me-5HT-induced activation of ERK1/2 was markedly blocked by knockdown of β-arrestin1 or -2 (Fig. 6B). The total ERK level was not affected among all experimental conditions. It suggests that 5-HT 2 receptors activate ERK through a mechanism depending on β-arrestins.
Because dynamin-dependent endocytosis of G protein-coupled receptors is required for G protein-coupled receptor/arrestin-induced ERK signaling (34), we further examined the role of dynamin in the counteractive effect of 5-HT 2 on 5-HT 1A regulation of NMDAR currents. PFC cultures were treated with a dynamin inhibitory peptide, which competes for binding to amphiphysin and hence inhibits clathrin/dynamin-dependent endocytosis (35). As shown in Fig. 7 (A and B), treatment of PFC cultures with the membrane-permeable dynamin inhibitory peptide (50 μM) markedly blocked the α-Me-5HT-induced ERK activation, whereas the membrane-impermeable dynamin inhibitory peptide or a cell-permeable scrambled control peptide was ineffective. Furthermore, in neurons injected with the dynamin inhibitory peptide (50 μM), α-Me-5HT failed to counteract the 5-HT 1A reduction of NMDAR-EPSC (35.8 ± 6.3%, n = 5; Fig. 7, C and D). Together, these data suggest that dynamin-based 5-HT 2 receptor endocytosis is involved in 5-HT 2 activation of ERK and in the 5-HT 2 opposition to 5-HT 1A regulation of NMDAR currents.
Previous studies have suggested that Src-mediated tyrosine phosphorylation of dynamin is involved in G protein-coupled receptor-induced ERK signaling (31,36). To test whether Src kinase activity is required for 5-HT 2 activation of ERK and the counteractive effect of 5-HT 2 on 5-HT 1A regulation of NMDAR currents, we measured the effect of α-Me-5HT in PFC neurons treated with the Src kinase inhibitor PP2. As shown in Fig. 8 (A and B), application of PP2 (20 μM), but not the inactive analog PP3 (20 μM), significantly blocked the α-Me-5HT-induced ERK phosphorylation. The total ERK level was not affected in cells subjected to different treatments. Moreover, in the presence of PP2, but not PP3, α-Me-5HT lost the ability to oppose the 8-OH-DPAT reduction of NMDAR-EPSC (Fig. 8C: PP2: 32.7 ± 3.7%, n = 7; Fig. 8D: PP3: 8.8 ± 3.3%, n = 4). Taken together, these results suggest that Src kinase activation is involved in 5-HT 2 signaling.
Activation of 5-HT 2 Receptors Opposes 5-HT 1A Reduction of Surface NR2B Subunits in a β-Arrestin-dependent Manner-If 5-HT 1A and 5-HT 2A/C regulate microtubule dynamics in a counteractive manner, it is possible that they may alter NMDAR trafficking on microtubules in an opposite way. To test this, we performed immunocytochemical experiments in cultured PFC neurons transfected with GFP-tagged NR2B subunits (the GFP tag is positioned at the extracellular N terminus of NR2B). Surface NR2B receptors were detected with the anti-GFP primary antibody, followed by a rhodamine-conjugated secondary antibody under non-permeabilized conditions. Consistent with our previous findings (21), application of 8-OH-DPAT (40 μM, 5 min) caused a marked reduction in the number and size of surface NR2B clusters (Fig. 9, A and B), as compared with control neurons (cluster density: 43.6 ± 1.7 clusters/50 μm in controls versus 22.7 ± 0.8 clusters/50 μm in DPAT-treated cells; cluster size: 0.33 ± 0.02 μm² in controls versus 0.18 ± 0.02 μm² in DPAT-treated cells; Fig. 9G). Treatment with α-Me-5HT (40 μM, 30 min) significantly prevented the 8-OH-DPAT-induced reduction of surface NR2B subunits (cluster density: 40.0 ± 2.7 clusters/50 μm; cluster size: 0.3 ± 0.03 μm²; Fig. 9, C and G). The α-Me-5HT treatment itself did not affect the distribution of surface NR2B clusters (data not shown). The fluorescence intensity of surface NR2B clusters (average gray value per pixel) was not significantly changed in neurons subjected to the various treatments (Fig. 9G). The total amount of recombinant NR2B receptor (GFP channel) was unaltered.
Finally, we tested whether the change in NMDAR-EPSC amplitudes in PFC neurons by acute fluoxetine treatment of intact animals can be accounted for by an altered number of NMDA receptors on the cell membrane. Surface biotinylation experiments (23) were performed to measure levels of surface NR1 and NR2B in PFC slices from animals intraperitoneally injected with fluoxetine (20 mg/kg). Surface proteins were labeled with Sulfo-NHS-LC-biotin, and then biotinylated surface proteins were separated from non-labeled intracellular proteins by reaction with NeutrAvidin beads. Surface and total proteins were subjected to electrophoresis and probed with an antibody against the NR1 or NR2B subunit. As shown in Fig. 10 (A and B), the surface levels of NR1 and NR2B in PFC slices were significantly lower in animals exposed to fluoxetine (NR1: 70 ± 3% of control; NR2B: 60 ± 6% of control; n = 4). Consistent with the electrophysiological results, the fluoxetine-induced reduction of surface NMDARs was more prominent in animals co-injected with the 5-HT 2A/C antagonist ketanserin (NR1: 29 ± 3% of control; NR2B: 30 ± 1% of control; n = 4) and was largely blocked in animals co-injected with the 5-HT 1A antagonist WAY-100635 (NR1: 82 ± 5% of control; NR2B: 84 ± 1% of control; n = 4). The surface GABA A R β2 subunit level, as well as the total NR1 or NR2B level, in PFC slices was unchanged by any of these drug administrations. Taken together, these results suggest that endogenous serotonin regulates PFC NMDAR surface expression in vivo through 5-HT 1A and 5-HT 2A/C receptors in a counteractive manner.
NR1 and NR2 subunits have to be co-assembled before leaving the endoplasmic reticulum and being transported along microtubules in dendrites to synapses. Our previous study (21) shows that 5-HT 1A receptors primarily target NR2B subunit-containing NMDA receptors, consistent with NR2B being the cargo of the microtubule motor KIF17. With the application of fluoxetine plus ketanserin, the surface levels of NR1 and NR2B were reduced to a similar degree (surface NR2A was also reduced, data not shown). It suggests that this reduction likely reflects endogenous NR1/NR2B heteromers and NR1/NR2A/NR2B triheteromers.
DISCUSSION
It is well known that many antidepressants and antipsychotics exert their actions by inhibiting serotonin reuptake and thus enhance serotonergic transmission (37,38). Because serotonin has multiple receptor subtypes, systemic elevation of serotonin can induce diverse physiological effects in neurons (2). The molecular mechanisms for serotonin to regulate cellular targets through different subtypes of receptors remain to be identified. We have previously found that 5-HT 2 and 5-HT 4 receptors are linked to regulate GABA A receptors (9, 39), whereas 5-HT 1A receptors are linked to regulate NMDA receptors in PFC neurons (21). The specific coupling of different 5-HT receptors to distinct ion channels allows serotonin to regulate multiple targets in a precise but flexible manner. Here, we provide evidence showing that the 5-HT 1A regulation of NMDARs can be modified by 5-HT 2A/C receptor activation, which provides a mechanism to fine-tune the effect of serotonin on NMDAR-mediated synaptic transmission and plasticity.
Under normal physiological conditions, which of the 5-HT receptors in PFC pyramidal neurons are activated by serotonin is determined by the serotonergic projection from the dorsal raphe to cellular compartments rich in different subtypes of receptors (40). Agonist binding studies indicate that 5-HT 1 and 5-HT 2 receptors have different affinities (nanomolar versus low micromolar) for 5-HT, suggesting that 5-HT 1 receptor activation may play a dominant role in the response to serotonin. Microdialysis studies have shown that the synaptic concentration of serotonin can reach up to 6 mM (41), suggesting that sometimes all 5-HT receptors could be fully activated at synapses. Both 5-HT 1A and 5-HT 2A receptors are expressed at dendritic shafts and spines of PFC pyramidal neurons (42,43), where NMDA receptors are abundant, prompting us to speculate that both receptors may interact with NMDA receptors in synergistic or opposite ways. Our studies in PFC pyramidal neurons from both acute slices and intact animals treated with various 5-HT-related drugs indicate that blocking 5-HT 2A/C receptors unmasks the ability of 5-HT 1A receptors to reduce NMDAR currents, suggesting that 5-HT 1A and 5-HT 2A/C receptors converge on NMDAR channels in a counteractive manner. Similar to serotonin, it has been shown that, in response to dopamine, D 1 and D 2 receptors that are linked to distinct signaling cascades regulate cortical GABAergic inhibition in an opposing manner (44).
Our previous study has shown that activation of 5-HT 1A receptors suppresses NMDAR currents by reducing microtubule stability and the ensuing NMDAR trafficking along dendritic microtubules (21). In this study, we found that activation of 5-HT 2A/C receptor opposes the ability of 5-HT 1A to induce microtubule depolymerization, suggesting that microtubule dynamics and the microtubule-based NMDAR transport are regulated by 5-HT 1A and 5-HT 2A/C receptors in a counteractive fashion.
The stability of microtubules is regulated by different microtubule-associated proteins (MAPs) in distinct neuronal compartments. The phosphorylation state of MAP2, a dendrite-specific MAP, determines the ability of MAP2 to associate with and stabilize dendritic microtubules (45). One key signaling molecule that regulates MAP2 phosphorylation is ERK (28). Our previous study suggests that 5-HT 1A activation suppresses ERK activity, which leads to decreased MAP2 phosphorylation, MAP2-microtubule interaction, and microtubule stability (21). It is possible that 5-HT 1A and 5-HT 2A/C oppositely regulate ERK activity, thereby controlling microtubule stability in a counteractive manner. Consistently, our biochemical and immunocytochemical data show that 5-HT 2A/C activation potently enhances ERK activity in the dendrites of PFC cultures.
How does the 5-HT 2A/C receptor activate ERK? Phospholipase C and inositol 1,4,5-trisphosphate, two downstream molecules of classic 5-HT 2 signaling, are not involved in this regulation (data not shown). Recent studies have suggested that the scaffolding protein β-arrestin, which binds to the third intracellular loop of certain G protein-coupled receptors, induces ERK activation (31,32). 5-HT 2A receptors are found to bind purified β-arrestin (46). However, whether 5-HT 2A activates mitogen-activated protein kinase via the β-arrestin pathway is essentially unknown. Our data show that knockdown of β-arrestin1/2 not only blocks the 5-HT 2A/C -induced ERK activation but also eliminates the counteractive effect of 5-HT 2A/C on the 5-HT 1A reduction of NMDAR currents. In addition, we found that the 5-HT 2A/C action is dependent on Src activation and clathrin/dynamin-mediated endocytosis of the receptor. Taken together, 5-HT 2A , via the β-arrestin/Src/dynamin cascade, induces ERK activation to oppose the effect of 5-HT 1A on NMDAR functions.
Several mechanisms have been proposed for the regulation of NMDAR functions, including altering the phosphorylation state and biophysical properties of the channel (47,48) and changing NMDAR trafficking and channel numbers at the membrane (49,50). Our previous finding suggests that 5-HT 1A decreases surface NR2B clusters in a microtubule-dependent manner (21), consistent with the role of cytoskeleton-based transport in NMDAR insertion into the plasma membrane (51,52). Our immunocytochemical data show that the 5-HT 1A reduction of surface NR2B clusters is attenuated by pretreatment with a 5-HT 2A/C agonist, confirming that 5-HT 1A and 5-HT 2A/C oppositely regulate NMDAR trafficking. Moreover, 5-HT 2A/C opposes the 5-HT 1A reduction of surface NR2B clusters in a β-arrestin-dependent manner, consistent with the β-arrestin dependence of the 5-HT 2A/C action on 5-HT 1A regulation of NMDAR currents. Surface biotinylation experiments using PFC from intact animals subjected to acute fluoxetine treatment have also confirmed the results found in cultured neurons.
Based on the experimental data, we speculate that, in response to serotonin, both 5-HT 1A and 5-HT 2A/C receptors localized in PFC pyramidal neurons are activated, which converge to regulate NMDAR trafficking and function in an opposite manner, by coupling to distinct signaling cascades and differentially affecting microtubule stability. This study may provide significant insights into the complex regulation of NMDAR-mediated synaptic transmission and plasticity by different serotonin receptors in the PFC network. | 2016-10-26T03:31:20.546Z | 2008-06-20T00:00:00.000 | {
"year": 2008,
"sha1": "6c1fd6cf2175bea432881dcef44f590967a439d6",
"oa_license": "CCBYNC",
"oa_url": "http://www.jbc.org/content/283/25/17194.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "29e36d355cbad8c003b7f905dd034953a10b67e9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
230644101 | pes2o/s2orc | v3-fos-license | Modeling and Monitoring Walnut (Juglans regia) Area & Production Based on Parametric and Non-Parametric Regression Models
Walnut (Juglans regia L.) occupies an important position in the horticulture industry of Jammu and Kashmir. The state has a near-monopoly on producing excellent quality walnuts, contributing more than 90 per cent of Indian walnut production. The temperate climatic conditions favor its cultivation and offer Jammu and Kashmir an exceptional edge to surpass the other states in terms of walnuts. Being organic in nature (which is its USP), as no fertilizers or sprays are used on walnut plants and their yield, and being high in nutrients with immense health benefits, Kashmiri walnut has seen growing demand and acceptability in the domestic and international market. The present study is an attempt to find past trends of walnut in Jammu and Kashmir using parametric, non-parametric and semi-parametric regression methods. The performance of each method is compared using the criteria of a high value of R and a low value of residuals. It is found that non-parametric/semi-parametric regression provides a better fit for the trend in walnut production than parametric regression, and the semi-parametric spline is selected as the best-fit model for trend analysis. It is inferred that the area under walnut cultivation in J&K increased from 1998 to 2017 and that productivity also shows an increasing trend, except for some years where the trend declines.
Introduction
Walnut (Juglans regia L.) is an economically important tree species, highly valued for its timber and edible nuts (Pollegioni et al., 2017). Walnut trees can live for hundreds of years and grow to a height of 25 to 30 meters. Native to Central Asia, the species is best suited to a Mediterranean-type climate with cold winters and mild summers. A walnut tree is harvestable after 4 to 6 years and reaches its full productivity by 11 or 12 years of age (Simpson, 2016). Rich in Vitamin E and Vitamin A and a good source of Omega-3, walnuts are considered beneficial for the human brain.
Jammu and Kashmir, with its extensive potential for the production of temperate fruits supported by its vibrant geo-climatic conditions, plays a marvelous role in the development of the walnut industry. The state is also well known for the production of numerous types of fruits like apple, cherry, pear and apricot, and in the dry fruit category it also produces walnuts (Taufique et al., 2017); hence it takes the lead in walnut production in India. Walnuts became a viable horticulture industry in India in the 1980's, particularly in the valley of Kashmir (Pandey and Shukla, 2007). Jammu and Kashmir occupies almost a 90 per cent share of the walnut industry in India (Mir et al., 2016). Recently Jammu and Kashmir has been declared an 'Agri-Export Zone' for walnuts (Shah, 2017). Walnut produced in Jammu and Kashmir is purely organic, as it is grown under naturally induced conditions without heavy doses of fertilizers and sprays. Its aroma, flavor and taste make the walnut of Jammu and Kashmir one of the best in India and abroad.
The growth rates of crops are mostly estimated by linear regression models. However, these models may not fit the data well. Under such conditions it becomes essential to apply nonparametric and semi-parametric regression, which rest on fewer assumptions. In the last few years, nonparametric and semi-parametric regression techniques for functional estimation have become increasingly popular as tools for data analysis. These techniques impose only a few assumptions about the shape of the function and are therefore more flexible than the usual parametric regression approaches. Smoothing techniques are commonly used to estimate the function non-parametrically (Härdle, 1990). Nonparametric regression models avoid restrictive assumptions on the functional form of the regression function. Semi-parametric regression models combine components of parametric and nonparametric regression models, keeping the easy interpretability of the former and retaining some of the flexibility of the latter. Chandran (2004) applied nonparametric regression to study the growth rates of total foodgrain production of India during the period 1987 to 2001. Teczan (2010) used nonparametric regression to find the growth rate trends of various crops. Sahu and Pal (2004) used nonparametric (Lowess) and semi-parametric (spline) regression for modeling pest incidences. Dhekale et al. (2017) employed nonparametric regression to study the trends of tea in India. Yasmeen et al. (2018, 2019) employed nonparametric and parametric regression models to study the trends in area, production and productivity of apple and cherry in Kashmir. The current study aims to develop appropriate parametric and nonparametric regression models to fit the trends in area, production and productivity of walnut in Kashmir.
Materials and Methods
For the present study of trends and growth rates, long-term data for the last 20 years pertaining to the area, production and productivity of walnut were collected from the Directorate of Horticulture.
The descriptive measures of central tendency and dispersions along with the simple and compound growth rates are used to explain the features of the data (Mishra et al., 2012).
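As an illustration of how such growth rates can be computed, the Python sketch below fits a linear trend for the simple growth rate and a log-linear trend for the compound growth rate; the series shown is made up, and the exact conventions of Mishra et al. (2012) are not reproduced here, so the numbers are purely illustrative.

```python
import numpy as np

def growth_rates(y):
    """Simple and compound growth rates (% per annum) for an annual series y.

    Simple growth rate: slope of the linear trend relative to the series mean.
    Compound growth rate: from the log-linear trend ln(y) = a + b*t, CGR = (exp(b) - 1) * 100.
    """
    t = np.arange(len(y), dtype=float)
    b_lin = np.polyfit(t, y, 1)[0]          # slope of the linear trend
    sgr = 100.0 * b_lin / np.mean(y)        # simple growth rate
    b_log = np.polyfit(t, np.log(y), 1)[0]  # slope of the log-linear trend
    cgr = 100.0 * (np.exp(b_log) - 1.0)     # compound growth rate
    return sgr, cgr

# Hypothetical production series in million kilograms (illustrative values only)
production = np.array([0.04, 0.05, 0.06, 0.08, 0.09, 0.11, 0.12, 0.14, 0.15, 0.17])
print(growth_rates(production))
```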
Parametric Regression Models
To trace the path of the production process, different parametric trend models are fitted. Among the fitted models, the best model is selected on the basis of its goodness of fit (R²) and the significance of the coefficients. The dependent variable Y is area, production or productivity, and the independent variable X is the time point (year).
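The sketch below illustrates fitting such polynomial trend models and computing R² for model selection; the synthetic series and its coefficients are assumptions, not the study's data.

```python
import numpy as np

def fit_poly_trend(t, y, degree):
    """Fit Y = b0 + b1*t + ... + b_d*t^d by least squares and return coefficients and R^2."""
    coeffs = np.polyfit(t, y, degree)
    y_hat = np.polyval(coeffs, t)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

t = np.arange(20, dtype=float)   # 20 years of hypothetical data
y = 0.04 + 0.006 * t + 0.0001 * t**2 + np.random.default_rng(0).normal(0, 0.005, 20)
for d in (2, 3):                 # quadratic and cubic trend models
    coeffs, r2 = fit_poly_trend(t, y, d)
    print(d, np.round(coeffs, 5), round(r2, 3))
```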
Non-parametric and semi-parametric regression models
The model considered here is of the form Y_i = m(X_i) + e_i, where m is the trend function, which is assumed to be smooth, and the e_i are random errors with mean zero and finite variance. Since no parametric form of the function is assumed, this approach is flexible and robust to deviations from an assumed model form. To obtain an estimate of the mean response value at a point X, most smoothers average the Y-values of observations having predictor values close to the target value X. The averaging is done in neighborhoods around the target value. The main decision to be made in any of the smoothing techniques is the size of the neighborhood, which is typically expressed in terms of an adjustable smoothing parameter or bandwidth. Intuitively, large neighborhoods provide an estimate with low variance but potentially high bias, and conversely for small neighborhoods.
Lowess regression, introduced by Cleveland (1979), obtains the smoothed value at the i-th point on the basis of the data points around it within a band of certain width.
The point x_i is the midpoint of the band. The data points within the band are assigned weights in such a way that x_i has the highest weight. The weights of the other data points decline with their distance from x_i according to a weight function. The weighted least squares method is used to find the fitted value corresponding to x_i, which is taken as the smoothed value. The procedure is repeated for all the data points. The spline method of estimation makes use of the penalized least squares method (Simonoff, 2012), which balances fitting the data closely against smoothness. The objective is to estimate m by means of a function that fits the data well and is as smooth as possible. A measure of the smoothness of m is the integral of the square of its second derivative, so the criterion to be minimized is Σ_i [Y_i − m(X_i)]² + λ ∫ [m''(x)]² dx. The first term is the sum of squares of the residuals; it provides a measure of how well the function m fits the data. The integral is a measure of the roughness/smoothness of the function: functions which are highly curved result in a large value of the integral, while straight lines result in the integral being zero. The roughness penalty λ controls the emphasis which one wishes to place on smoothness. By increasing the value of λ, one places more emphasis on smoothness; as λ becomes large the function approaches a straight line. On the other hand, a small value of λ emphasizes the fit of m to the data points: as λ approaches zero, m approaches a function that interpolates the data points.
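A minimal sketch of both smoothers is given below, assuming standard library implementations (statsmodels' lowess and SciPy's smoothing spline) rather than the exact estimators used in the study; the series, the bandwidth value and the penalty value are illustrative.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
t = np.arange(20, dtype=float)                    # years (hypothetical series)
y = 0.05 + 0.005 * t + rng.normal(0, 0.004, t.size)

# Lowess: 'frac' is the bandwidth (the fraction of points used in each local fit),
# playing the role of the smoothing parameters explored in the text.
y_lowess = lowess(y, t, frac=0.375, return_sorted=False)

# Smoothing spline: 's' controls the roughness penalty; a larger s forces a smoother
# (eventually straight-line) fit, while a very small s nearly interpolates the data.
spline = UnivariateSpline(t, y, s=0.0005)
y_spline = spline(t)

print(np.round(y_lowess[:3], 4), np.round(y_spline[:3], 4))
```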
Results and Discussion
The maximum growth rate is observed in production of walnut over the years, whereas the minimum growth rate is exhibited by area of the walnut.
The positive compound growth of production (0.078 per annum) reveals that there is no decrease in the production of walnut over the years, with a maximum of 0.17 million kilograms and a minimum of 0.04 million kilograms. Similarly, the simple growth rate observed in production (2.59 per annum) indicates an increase in the production of walnut in Kashmir over the years (Tables 1 and 2). This is due to the fact that a large area of land is being brought under agriculture; we have noticed a compound growth rate of area under walnut cultivation (0.03 per annum), indicating that a large portion of this land is being utilized for walnut.
Trend analysis of area, production and productivity
Parametric techniques
The parametric models used here are the cubic (third-degree polynomial) model and the quadratic (second-degree polynomial) model. The value of b3 for area is negative, which indicates that the area under walnut cultivation decreased in the last part of the period, while the positive values of b1 and b2 clearly indicate that there was an increase in the cultivation area. Further, the negative value of b1 for production is an indication of a decrease in production during the initial period of the study, and the positive values of b2 and b3 indicate an increase in production.
Non-parametric and semi-parametric regression
Trend analyses of area, production and productivity using nonparametric (Loess) and semi-parametric (spline) regression are presented in Tables 3, 4 and 5. In Table 3 the value of R² is 0.91 for Loess and 0.94 for spline regression. The AICc, RMSE, MAPE, MAE, MaxAPE and MaxAE values come out to be smaller for spline regression for the area under walnut cultivation. The area under walnut cultivation has increased over the years of study, as shown in Figure 1.
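The goodness-of-fit measures used for this model comparison can be computed as in the sketch below; the AICc form assumes Gaussian errors and may differ in constants from the software used in the study, and the example series is hypothetical.

```python
import numpy as np

def fit_metrics(y, y_hat, k):
    """RMSE, MAE, MAPE, MaxAE, MaxAPE and a small-sample AICc for a fitted trend.

    k is the number of estimated parameters; the AICc expression assumes Gaussian errors.
    """
    n = len(y)
    resid = y - y_hat
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    mape = 100.0 * np.mean(np.abs(resid / y))
    max_ae = np.max(np.abs(resid))
    max_ape = 100.0 * np.max(np.abs(resid / y))
    aic = n * np.log(np.sum(resid ** 2) / n) + 2 * k
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)
    return dict(RMSE=rmse, MAE=mae, MAPE=mape, MaxAE=max_ae, MaxAPE=max_ape, AICc=aicc)

y = np.array([0.05, 0.06, 0.08, 0.09, 0.11])          # hypothetical observed values
y_hat = np.array([0.052, 0.063, 0.077, 0.093, 0.108])  # hypothetical fitted values
print(fit_metrics(y, y_hat, k=3))
```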
On comparing the values of AICc, RMSE, MAPE, MAE, MaxAPE and MaxAE for production and productivity, the spline regression has the smallest values. The increasing trend in production and productivity over the years of study is shown in Figures 3 and 5. It can be observed that up to 2013-14 there is a sharp increase in production and productivity. However, a decline in production and productivity is observed during the year 2014-15, which is due to the floods that occurred during that year (Islam and Shrivastava, 2017).
The values of area are initially fitted at several smoothing parameters in order to obtain the best fit of the data points: we plot the fits in the neighborhood of the smoothing parameters and look for the curve which covers the data points well. The one which covers the maximum number of points is taken as the best fit. In Figure 2 the smooth curve fits are obtained for area in the neighborhood of the smoothing parameters, i.e., at 0.25, 0.375, 0.60 and 0.80. It is observed that the best fit is obtained at smooth = 0.375. In Figure 4 smooth fits for production are plotted in the neighborhood of the smoothing parameter at 0.15, 0.20, 0.35 and 0.62, and it is observed that the best fit is obtained for smooth = 0.62. Figure 6 provides the fits for productivity in the neighborhood of the smoothing parameters, i.e., at smooths equal to 0.25, 0.35, 0.45 and 0.75; the best fit is observed at smooth = 0.75. Moreover, the values of RMSE, MAE, MAPE, MaxAE and MaxAPE for area, production and productivity of Kashmir are lower for non-parametric regression than for parametric regression (Tables 3, 4, 5). This is a clear indication of the superiority of these techniques over the parametric models. These models perform very well in visualizing the past trends where the parametric models fail to.
Among the nonparametric and semi-parametric regressions, the spline regression has shown the lowest values of AICc, RMSE, MAPE, MAE, MaxAPE and MaxAE for area, production and productivity of walnut in Kashmir; hence spline regression is the best fitted model for walnut production in Kashmir (Fig. 5). Several authors, e.g., Aydin (2007) and Pal (2011), observed similar results, with the spline giving better results than Loess smoothing.
In the above study, three types of modeling are discussed: parametric, semi-parametric and nonparametric modeling. Nonparametric and semi-parametric regression models are flexible compared to parametric models. Semi-parametric regression is a hybrid of the parametric and nonparametric approaches, allowing the best of both worlds: a model that is understandable while offering a fair representation of the data in real life. However, semi-/nonparametric regression requires larger sample sizes than regression based on parametric models because the data must supply the model structure as well as the model estimates (Mahmoud, 2019). From the above study it is observed that there is a dramatic increase in the area under walnut cultivation as well as in production and productivity. In order to maintain this trend, more and more land should be brought under walnut cultivation. Parametric regression, usually utilized in studying trends, does not perform better than nonparametric and semi-parametric regression. Out of the nonparametric and semi-parametric regression methods, the semi-parametric regression (spline) is the best fit for the trend analysis of walnut production in Kashmir. | 2020-12-17T09:08:58.778Z | 2020-10-20T00:00:00.000 | {
"year": 2020,
"sha1": "f1c20832bb6c50d9fdfe9d747b56760c030711d1",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/9-10-2020/Nageena%20Nazir,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "55bad568d20737446e5f4cefa096f90546129fd0",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
244894562 | pes2o/s2orc | v3-fos-license | Prolonged Asthma Exacerbation as an Initial Presentation in Hereditary Hemorrhagic Telangiectasia
Prolonged Asthma Exacerbation as an Initial Presentation in Hereditary Hemorrhagic Telangiectasia Jacob Umscheid, M.D.1,2, Joshua Albright, D.O.1,2, John Chazhoor, MS-41, Rhythm Vasudeva, M.D., M.S.1,3 1University of Kansas School of Medicine-Wichita, Wichita, KS 2Department of Pediatrics 3Internal Medicine/Pediatrics Residency Program Received Aug. 4, 2021; Accepted for publication Nov. 21, 2021; Published online Dec. 2, 2021 https://doi.org/10.17161/kjm.vol14.15752
INTRODUCTION
Hereditary hemorrhagic telangiectasia (HHT), also known as Rendu-Osler-Weber syndrome, is an autosomal dominant disease of multiple pathological arteriovenous malformations (AVM) throughout the body. 1 The malformations of HHT are fragile in nature and directly connect arterial blood flow to the venous vasculature, bypassing normally present capillary beds. Disease presentation is dependent on the location of these malformations in the body. Although typically seen as telangiectasias or small AVMs on the skin and mucosa, large AVMs and complications secondary to AVMs can present with symptoms involving the brain, lungs, and liver. As mentioned, the fragility of these vascular malformations contributes to deleterious consequences such as hemorrhage, vascular shunting, and passage of venous emboli to the brain. 2 Although HHT was initially thought to be a very rare disease, genetic testing has shown its prevalence to be greater than previously believed. 1 Prevalence of HHT is estimated to be 1/5,000 to 1/10,000 with equal distribution between gender and race. [3][4][5] Initial symptoms, such as petechiae and epistaxis, may be mild in presentation, and the actual prevalence may be greater than what can be measured. Severe features of HHT, such as pulmonary AVMs, can be the presenting features of HHT and have been observed in 15 to 35% of cases. 6 In this article, we present the case of a young female with a prolonged acute asthma exacerbation who was discovered to have large pulmonary AVMs, showed improvement after endovascular coiling, and was eventually diagnosed with HHT.
CASE REPORT
A 17-year-old female with a past medical history of mild intermittent asthma presented to the pediatric emergency department with a chief complaint of cough, congestion, and wheezing that was unresponsive to albuterol. Her wheezing had started one day prior. Her past medical history included multiple episodes of community acquired pneumonia not requiring hospital admission and recurrent epistaxis. Family history was not available as her parents were not available at the time of presentation.
Initial vital signs included temperature of 37.4°C, pulse of 128 bpm, respiratory rate of 20 breaths per minute, blood pressure of 139/81 mmHg, and oxygen saturation of 85%. Physical examination was significant for respiratory distress with suprasternal retractions and inspiratory and expiratory wheezing. Chest x-ray showed faint patchy airspace opacities in the right midlung ( Figure 1). Viral respiratory PCR indicated infection from rhinovirus and enterovirus. Hypoxia improved with 4L via nasal canula and the patient was transferred to the pediatric intensive care unit for management of acute asthma exacerbation triggered by a viral infection. She was initiated on scheduled albuterol for bronchodilation and corticosteroids for anti-inflammation and showed rapid improvement. On day two of hospitalization, she was maintaining an oxygen saturation above 90% on 2L and was transferred to the pediatric floor for continued hypoxia management.
On day three of hospitalization, the patient developed worsening oxygenation and ventilation and a repeat chest X-ray revealed a persistent right middle lobe opacity, prompting initiation of amoxicillin-clavulanate for presumed community acquired pneumonia. She also developed epistaxis that resolved spontaneously and was switched to a high flow face mask due to presumed nasal mucosa irritation from nasal canula. The patient showed minimal improvement and persistent hypoxia despite prolonged treatment course. On day 10 of hospitalization, the patient's mother revealed a personal history of HHT and concern for a pulmonary arteriovenous malformation was investigated with computerized tomography angiogram (CTA). CTA revealed multifocal filling branches extending to the periphery of the right middle lobe with at least three major feeding AVMs present, expanding from directly adjacent to the bifurcation of the right middle lobe and right lower lobe bronchial arteries extending to the periphery ( Figure 2). Echocardiogram demonstrated agitated saline contrast in the left atrium, indicating a right to left shunt consistent with a pulmonary AVM. Interventional radiology was consulted for correction of the pulmonary AVMs with three separate locations requiring coil and plug placement (Figure 3). Her hypoxia resolved immediately after this intervention.
An investigation for other AVMs was initiated with a CTA of the head and liver ultrasound, both with negative results. No telangiectasias were observed on the skin of the patient or on the oral mucosa. The genetics service was consulted and a presumptive diagnosis of HHT was made. Genetic testing was offered to confirm the diagnosis but was denied by family.
DISCUSSION
This case demonstrated a presentation of a pulmonary AVM secondary to HHT resulting in prolonged recovery from an acute asthma exacerbation. Although epistaxis secondary to telangiectasias and involvement of the nasal mucosa is the most common presentation of HHT, this diagnosis can go unrecognized and be dismissed as benign. 5 When epistaxis is combined with a prolonged course of respiratory illness, investigation into HHT is warranted. Diagnosis of HHT in adolescents can be difficult, as severity and recurrence increase with age, and initial episodes of epistaxis can be relatively mild and result in underdiagnosis of HHT. 7 The presentation of epistaxis often precedes other manifestations of HHT by 20 to 30 years, making it imperative to maintain a low threshold for suspecting HHT in patients presenting with recurrent epistaxis. 8 In addition to telangiectasias, much larger AVMs in the pulmonary vasculature can contribute to significant morbidity in individuals with HHT. 6 These consist of a direct connection between a branch of the pulmonary artery and a branch of the pulmonary vein, with a potential aneurysm at the point of convergence. Patients with HHT tend to present with multiple AVMs, which are found most commonly bilaterally in the lower lobes of the lungs. 9 Depending on the method of investigation, 60 to 90% of individuals with pulmonary AVMs have an underlying diagnosis of HHT. 6,10 Detection of these underlying AVMs increases when utilizing tools such as high-resolution computed tomography (CT) and transthoracic contrast echocardiography with agitated saline. 1 Clinical manifestations of pulmonary AVMs, such as dyspnea, fatigue, or cyanosis, stem from the right-to-left shunt created and likely increase in severity with the number of AVMs.
Other concerning features of HHT include neurological and hepatic manifestations. Paradoxical emboli resulting in cerebral vascular accidents or abscesses can occur secondary to pulmonary shunting. 11,12 Cerebral vascular malformations also can result in dural fistulas, cavernomas, and aneurysms. 11 Because of this risk, guidelines recommended angio-magnetic resonance imaging screening to investigate cerebral vascular malformations in patients with definite HHT. 5 The prevalence of cerebral AVMs in HHT was approximated to be 10% based on computed tomography, however, this is considered as an underestimate as there are more sensitive methods of investigation available. 13 Akin to both pulmonary and central nervous system vasculature, hepatic vasculature in individuals with HHT also can be found to have AVMs. According to studies utilizing both CT and ultrasounds, frequency of hepatic vascular abnormalities were 74% when using CT and 41% when investigated using ultrasound. 13,14 Only 8% of those studied were symptomatic prior to investigation. 14 Complications of hepatic AVMs include the potential of inducing heart failure secondary to high cardiac output caused by left-to-right shunting within the hepatic vasculature, but also biliary disease and portosystemic encephalopathy. 13,15 Until 2000, there were no standardized clinical diagnostic criteria for the diagnosis of HHT. A consensus statement on four criteria with an interpretation on the number of positive results and subsequent chances of positive diagnosis, named the Curaçao criteria, was made in 2000. 5,16 These criteria are described in Table 1. Three of the four criteria can be seen with history and physical exam alone. Since its initial release, studies have looked to verify the proposed Curaçao criteria. One study of 263 first-degree relatives who were carriers of disease-causing mutations used genetic testing as a gold standard and found that the Curaçao criteria had a sensitivity of 90.3% of the 186 with HHT causing mutations, 100% positive predictive value of firstdegree relatives, and negative predictive value of unlikely diagnosis with 97.7%. 12 In 2020, McDonald et al. 17 reviewed the genetic testing results of 152 individuals who were diagnosed clinically using the Curaçao criteria and concluded approximate 97% presence of causative mutations of either ENG, ACVRL1, or SMAD4. The HHT diagnosis is definite if three or more criteria are present, possible or suspected if two are met, and unlikely if fewer than two criteria are present.
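The interpretation of the criteria count described above can be summarized by the following small helper, an illustrative sketch only and not a clinical tool.

```python
def curacao_interpretation(criteria_met: int) -> str:
    """Interpretation of the Curaçao criteria count as stated in the text (illustrative only)."""
    if criteria_met >= 3:
        return "definite HHT"
    if criteria_met == 2:
        return "possible or suspected HHT"
    return "HHT unlikely"

print(curacao_interpretation(3))
```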
CONCLUSIONS
The diagnosis of HHT remains a clinical diagnosis through thorough medical history, family history, and physical examination. When the diagnosis is made, a multisystem investigation is required to investigate potential AVMs. It is also important to investigate family members for HHT due to its autosomal dominant inheritance pattern. Unexplained presentation, such as a prolonged asthma exacerbation in this case presentation, with significant family history should make one suspicious of the diagnosis. Since AVMs can be present in multiple organ systems, a multidisciplinary approach involving specialists in hematology, pulmonology, gastroenterology, interventional radiology, cardiology, neurology, and genetics may be required to provide optimal care for affected patients. | 2021-12-05T16:14:19.819Z | 2021-12-02T00:00:00.000 | {
"year": 2021,
"sha1": "99bffd7e10c6afee58636d549d3fa4d6fa939fe8",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.ku.edu/kjm/article/download/15752/14629",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "6da64ccc6fc5ed41d1f9b2f11b6e04e6a88c2b78",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7058580 | pes2o/s2orc | v3-fos-license | Non-Uniformly Coupled LDPC Codes: Better Thresholds, Smaller Rate-loss, and Less Complexity
We consider spatially coupled low-density parity-check codes with finite smoothing parameters. A finite smoothing parameter is important for designing practical codes that are decoded using low-complexity windowed decoders. By optimizing the amount of coupling between spatial positions, we show that we can construct codes with excellent thresholds and small rate loss, even with the lowest possible smoothing parameter and large variable node degrees, which are required for low error floors. We also establish that the decoding convergence speed is faster with non-uniformly coupled codes, which we verify by density evolution of windowed decoding with a finite number of iterations. We also show that by only slightly increasing the smoothing parameter, practical codes with potentially low error floors and thresholds close to capacity can be constructed. Finally, we give some indications on protograph designs.
I. INTRODUCTION
Low-density parity-check (LDPC) codes are widely used due to their outstanding performance under low-complexity belief propagation (BP) decoding. However, an error probability exceeding that of maximum-a-posteriori (MAP) decoding has to be tolerated with (sub-optimal) low-complexity BP decoding. A few years ago, it has been empirically observed that the BP performance of some protograph-based, spatially coupled (SC) LDPC ensembles (also termed convolutional LDPC codes) can improve towards the MAP performance of the underlying LDPC ensemble [1]. Around the same time, this threshold saturation phenomenon has been proven rigorously in [2], [3] for a newly introduced, randomly coupled SC-LDPC ensemble. In particular, the BP threshold of that SC-LDPC ensemble tends towards its MAP threshold on any binary memoryless symmetric channel (BMS).
SC-LDPC ensembles are characterized by two parameters: the replication factor L, which denotes the number of copies of LDPC codes to be placed along a spatial dimension, and the smoothing parameter w. This latter parameter indicates that each edge of the graph is allowed to connect to w neighboring spatial positions (for details, see [2] and Sec. II). The proof of threshold saturation was given in the context of uniform spatial coupling and requires both L → ∞ and w → ∞. This poses a serious disadvantage for realizing practical codes, as relatively large structures are required to build efficient codes.
In practice, the main challenges for implementing SC-LDPC codes are the rate-loss due to termination and the decoding complexity. The rate-loss, which scales with w, can be made arbitrarily small by increasing L, however, a large L can worsen the finite-length performance of SC-LDPC codes [4]. Known approaches to mitigate the rate-loss (e.g., [5], [6]) often introduce extra structure at the boundaries, which is usually undesired. Therefore, we would like to keep the rate-loss as small as possible for a fixed, but small L. Additionally, the decoding complexity can be managed by employing windowed decoding (WD) [7], however, the window length and complexity scale with the smoothing parameter w. For both reasons, w should be as small as possible, ideally w ∈ {2, 3}, to keep the rate-loss and complexity small, e.g., in high-speed optical communications [8]. (The work of L. Schmalen was supported by the German Government in the frame of the CELTIC+/BMBF project SENDATE-TANDEM.)
In this paper, we construct code ensembles that have excellent thresholds for small w, that have smaller rate-loss than SC-LDPC ensembles and can be decoded with less complexity by maximizing the speed of the decoding wave. We achieve these properties by generalizing the uniformly coupled SC-LDPC codes of [2] to allow for non-uniform coupling. It was already recognized in [9], [10] that non-uniform protographs can lead to improved thresholds in some circumstances by sacrificing convergence from one side of the chain, which is not problematic when using WD. A very particular, exponential coupling was used in [11] to guarantee anytime reliability.
We extend non-uniform coupling to randomly coupled SC-LDPC ensembles and protograph-based ensembles. We analyze their performance under message passing with and without windowed decoding. We show that we can achieve excellent close-to-capacity thresholds by optimizing the coupling, for small w and large d v , which is required for codes with low error floors. Furthermore, we introduce a new multitype-based non-uniform coupling that further improves the thresholds without increasing w. We find that the rate-loss is decreased by non-uniform coupling as well. We finally show that the decoding speed, which is an indicator of the complexity, can be increased by non-uniform coupling.
II. SPATIALLY COUPLED LDPC CODES
We briefly describe two construction types of nonuniformly coupled LDPC codes: the random ensemble and the protograph-based ensemble. The former is easier to analyze and exhibits the general advantages of non-uniform coupling while the latter is more of practical interest.
We now briefly review how to sample a code from a random, non-uniformly coupled (d_v, d_c, ν, L, M) SC-LDPC ensemble with regular degree distributions. We first lay out a set of positions indexed from z = 1 to L on a spatial dimension. At each spatial position (SP) z, there are M variable nodes (VNs) and M d_v/d_c check nodes (CNs), where M d_v/d_c ∈ ℕ and d_v and d_c denote the variable and check node degrees, respectively. The non-uniformly coupled structure is based on the smoothing distribution ν = [ν_0, . . . , ν_{w−1}], where ν_i > 0, Σ_i ν_i = 1, and w > 1 denotes the smoothing (coupling) width. The special case ν_i = 1/w leads to the usual, well-known spatial coupling with the uniform smoothing distribution [3].
For termination, we additionally consider w−1 sets of M d_v/d_c CNs in SPs L+1, . . . , L+w−1. Every CN is assigned d_c "sockets" and imposes an even parity constraint on its d_c neighboring VNs. Each VN in SP z is connected to d_v CNs in SPs z, . . . , z+w−1 as follows: for each of the d_v edges of this VN, an SP z′ ∈ {z, . . . , z+w−1} is randomly selected according to the distribution ν, and then the edge is uniformly connected to any free socket of the M d_v sockets arising from the CNs in that SP z′. This graph represents a code with n = LM code bits, distributed over L SPs. Because of the additional CNs in SPs L+1, . . . , L+w−1, but also because of potentially unconnected CNs in SPs 1, . . . , w−1, the design rate is slightly decreased by a rate-loss ∆ that increases linearly with w.
In the limit of large M, the asymptotic performance of this ensemble on a binary erasure channel (BEC) can be analyzed using density evolution (DE). The DE update rule, referred to as (1) in the following, expresses x_z^{(t+1)} in terms of the smoothing distribution ν, where ε denotes the channel erasure probability and x_z^{(t)} the average erasure probability of the outgoing messages from VNs in SP z at iteration t. The messages are initialized as x_z^{(0)} = ε. For the uniform smoothing distribution ν_i = 1/w, (1) becomes the known DE equation for SC-LDPC codes with uniform coupling [2, Eq. (7)].
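Since the display equation for the DE update is not reproduced in this extracted text, the sketch below assumes that (1) takes the standard form generalizing [2, Eq. (7)], with the weights ν replacing the uniform weights 1/w; the ensemble parameters and the chosen ε are illustrative.

```python
import numpy as np

def de_profile(eps, dv, dc, nu, L, iters=10000, tol=1e-12):
    """Density evolution for a random, non-uniformly coupled (dv, dc, nu, L) ensemble on
    the BEC, assuming the standard SC-LDPC recursion with smoothing weights nu.

    x[z] is the average erasure probability of VN-to-CN messages at spatial position z;
    positions outside the chain are treated as known (erasure probability 0), modeling
    the termination."""
    w = len(nu)
    nu = np.asarray(nu, dtype=float)
    x = np.zeros(L + 2 * (w - 1))          # padded profile; the chain occupies the middle L slots
    x[w - 1:w - 1 + L] = eps
    for _ in range(iters):
        x_new = x.copy()
        for z in range(w - 1, w - 1 + L):
            outer = 0.0
            for j in range(w):             # message from the CN group at SP z + j
                inner = sum(nu[i] * x[z + j - i] for i in range(w))
                outer += nu[j] * (1.0 - (1.0 - inner) ** (dc - 1))
            x_new[z] = eps * outer ** (dv - 1)
        if np.max(np.abs(x_new - x)) < tol:
            x = x_new
            break
        x = x_new
    return x[w - 1:w - 1 + L]

# Unit-memory (w = 2) coupling with nu = [alpha, 1 - alpha]; eps is an illustrative value
profile = de_profile(eps=0.45, dv=5, dc=10, nu=[0.3, 0.7], L=100)
print(profile.max())  # close to 0 when eps is below the BP threshold of this ensemble
```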
B. Protograph-based SC-LDPC Ensembles
SC-LDPC ensembles with a certain predefined structure can be constructed by means of protographs [12]. The Tanner graph of the protograph-based SC-LDPC code is some M-cover of the protograph, i.e., M copies of the protograph are bound together by random permutation of the edges between the same type of sockets. Protograph-based SC-LDPC codes are of practical interest because of their simple hardware implementation and their excellent finite-length performance [13]. An exemplary protograph of an SC-LDPC code with non-uniform coupling is shown in Fig. 1-a). As the coupled protograph is a chain of repeating segments, we represent coupled protographs by their distinct elementary segment shown in Fig. 1-b).
C. Windowed Decoder Complexity
The decoding complexity is an important parameter for practical SC-LDPC codes. Consider the profile of densities (1). It has been shown in [2], [14] that the profile behaves like a "wave": it shifts along the spatial dimension with "a constant speed" as the BP decoder iterates. The wave propagation speed is analytically analyzed and bounded in [15], [16].
The wave-like behaviour enables efficient sliding windowed decoding [7]: the decoder updates the BP messages of edges lying in a window of W D SPs I times, and then shifts the window one SP forward and repeats. Thus, the decoding complexity scales with O(W D ILM d v ) as there are 2M Ld v BP messages and each BP message is updated W D I times.
The required window size W_D is an increasing function of the smoothing factor w [7], which implies that we should keep w small. The number of iterations must satisfy I > 1/v, where v is the speed of the wave. In the continuum limit of the spatial dimension, v is defined as the amount of displacement of the profile along the spatial dimension after one iteration. For the discrete case of (1), the speed can be estimated as v ≈ D/T_D (Eq. (2)), where T_D is the minimum number of iterations required for the displacement of the profile by more than D SPs. The approximation of v becomes more precise by choosing larger D. We chose D = 10 in this paper.
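A sketch of this speed estimate is given below; it reuses the DE recursion assumed in the previous sketch, and both the threshold used to declare a position "decoded" and the way displacement is measured are illustrative choices.

```python
import numpy as np

def de_step(x, eps, dv, dc, nu, L):
    """One iteration of the assumed DE recursion (same form as in the previous sketch)."""
    w = len(nu)
    x_new = x.copy()
    for z in range(w - 1, w - 1 + L):
        outer = 0.0
        for j in range(w):
            inner = sum(nu[i] * x[z + j - i] for i in range(w))
            outer += nu[j] * (1.0 - (1.0 - inner) ** (dc - 1))
        x_new[z] = eps * outer ** (dv - 1)
    return x_new

def wave_speed(eps, dv, dc, nu, L=100, D=10, x_star=1e-5, max_iters=20000):
    """Estimate v = D / T_D (cf. Eq. (2)): T_D is the smallest number of iterations after
    which the number of essentially decoded SPs (erasure probability below x_star) has
    grown by at least D positions."""
    w = len(nu)
    nu = np.asarray(nu, dtype=float)
    x = np.zeros(L + 2 * (w - 1))
    x[w - 1:w - 1 + L] = eps
    decoded = lambda prof: int(np.sum(prof[w - 1:w - 1 + L] < x_star))
    start = decoded(x)
    for t in range(1, max_iters + 1):
        x = de_step(x, eps, dv, dc, nu, L)
        if decoded(x) - start >= D:
            return D / t
    return 0.0  # the profile did not move: eps is too large (or too few iterations were allowed)

# Wave speed for a (5, 10) ensemble with unit-memory coupling nu = [alpha, 1 - alpha]
print(wave_speed(eps=0.40, dv=5, dc=10, nu=[0.3, 0.7]))
```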
We quickly recapitulate the asymptotic analysis for the windowed decoder here. Instead of the windowed decoder proposed in [7, Def. 4], we employ a slightly modified, more practical version, which updates the complete window after one decoding step. For every windowed decoding step, indexed by c ∈ [1, L], we generate a copy (y_1, . . . , y_{L+w−1}) of the current profile, on which we apply the update rule (1) for SPs z ∈ {c, c+1, . . . , c+W_D−1} only, for a total of I iterations. After these I iterations, the profile values of the SPs inside the window are updated accordingly. We use a finite number of iterations in the windowed decoder to accurately predict the performance of a practical decoder.
III. NON-UNIFORM COUPLING: RANDOM ENSEMBLES
In this section, we optimize non-uniformly coupled SC-LDPC ensembles with random coupling for the BEC. First, we consider w = 2, the smallest possible smoothing parameter. This case is of high practical interest as w should be kept as small as possible in order to keep the decoding latency and window length W_D manageable when employing windowed decoding. We show numerically that non-uniform coupling improves the BP threshold and also the decoding complexity, as the total number of iterations decreases. Afterwards, we show the advantages of non-uniform coupling with w > 2.
A. Non-Uniform Unit-Memory Coupling (w = 2)
Consider a random (d_v, d_c, ν, L, M) SC-LDPC ensemble with smoothing vector ν = [α, 1 − α]. It is enough to assume 0 ≤ α ≤ 1/2 because of symmetry. In the limit of large M, the asymptotic performance of the ensemble over the BEC can be evaluated using DE. We consider the BP threshold ε_BP(α) obtained when the profile is updated according to (1). Figure 2 illustrates ε_BP(α) in terms of α for different values of d_v. Each curve has two minima and a maximum. The two minima are at α = 0 and α = 1/2, where ε_BP(α = 0) = ε_BP,uncoupl. corresponds to the BP threshold of the uncoupled ensemble and ε_BP(α = 1/2) corresponds to the BP threshold of the SC-LDPC ensemble with uniform coupling. The respective maxima of the curves are indicated by a marker and obtained for α*. We can see that uniform coupling (α = 1/2) does not lead to the best thresholds. In particular, if we increase d_v, which is required for constructing codes with very low error floors, uniform coupling with w = 2 is not efficient anymore, and the thresholds are significantly away from the BEC capacity. With an optimized α, we can achieve thresholds that are close to capacity (and the MAP threshold of the uncoupled LDPC ensemble ε_MAP,uncoupl.) and significantly outperform the uncoupled and the uniformly coupled cases. Table I gives the thresholds of the optimized codes together with the unoptimized, uniformly coupled and uncoupled cases. Although coupling always improves the threshold, with w = 2 uniform coupling is not a good solution and significantly better thresholds are obtained by non-uniform coupling, especially for larger d_v. Moreover, it is easy to show that the rate-loss ∆ is maximized for uniform coupling (α = 1/2). Hence non-uniform coupling will always reduce the rate-loss. We can see that as d_v increases, α* decreases as well. An interesting open question is whether α* saturates to some constant or if it will converge to zero. Non-uniform coupling can also decrease the decoding complexity of windowed decoding. Figure 3 illustrates the effect of non-uniform coupling on the wave propagation. While uniform coupling (α = 1/2) leads to a wave propagation from both ends towards the middle, non-uniform coupling sacrifices one of those waves in favor of the other one, which will (usually) travel at a faster velocity. We compute the speed v according to (2) for different values of α ∈ [0, 1/2] and different values of ε ∈ [ε_BP(α = 0), ε_BP(α*)] and show the contour lines of equal decoding speed v in Fig. 4 for d_v = 5 and d_v = 10. Points along a contour line indicate that the decoding wave moves with the same speed. When building practical decoders, usually a hardware constraint is imposed which limits the amount of operations that can be done. Hence also the decoding speed is limited. We can see that for a fixed speed v, non-uniformly coupled codes can be operated at a much higher erasure probability than with uniform coupling. Note that the maxima of the speed contours coincide practically with the α maximizing the threshold. Figure 4 suggests that windowed decoding also benefits from non-uniform coupling. For this reason, we use density evolution including windowed decoding, as detailed in Sec. II-C. Figure 5 exemplarily shows the thresholds for windowed decoding for the (5, 10, [α, 1 − α], L = 100) and the (10, 20, [α, 1 − α], L = 100) SC-LDPC ensembles for four window configurations: W_D ∈ {10, 20} and I ∈ {3, 9}. We see a good agreement between the speed contour lines of Fig. 4 and the windowed decoding thresholds. Again we can see that for non-uniformly coupled codes and identical window configurations, we can significantly increase the decoding threshold.
B. Non-Uniform Coupling with w > 2
We have seen in the previous section that non-uniform coupling can increase the BP threshold if we constrain w = 2. However, for d_v > 5, we have to tolerate a gap to capacity. In this case, we can relax the constraint on w. In fact, for w > 2, non-uniform coupling can be more beneficial as there are more degrees of freedom for optimizing the smoothing vector ν. We numerically show in the following that this results in improved thresholds. For regular ensembles with asymptotic rate r = 1/2 (d_c = 2d_v), we observe that the BP threshold ε_BP(ν) depends on the choice of ν and can get very close to the capacity. We used a grid search with a fine resolution to numerically optimize the BP threshold for the ensembles with d_v ∈ {4, . . . , 10}. The results are given in Tab. II, where the optimized smoothing distribution is denoted by ν = [ν_1, ν_2, 1 − ν_1 − ν_2]. We observe that the BP thresholds almost saturate to the capacity (or ε_MAP,uncoupl., respectively), while the BP threshold of uniformly coupled ensembles (ε_BP(ν = [1/3, 1/3, 1/3])) has a gap to capacity which increases for larger d_v. Note that especially for small d_v, many different choices of ν lead to good thresholds ε_BP. In that case, we select the optimum ν which leads to a good threshold and also yields a small rate loss ∆. Note that in contrast to the w = 2 case, where the rate-loss was maximal for uniform coupling, it is not hard to show that the rate-loss ∆ for w = 3 is maximized with ν = [1/2, 0, 1/2]. It is an interesting open question whether it is possible to construct capacity-achieving codes with a finite w.
C. Non-Uniform Coupling with Different Types
Non-uniform coupling is a general concept. So far, we presented the simplest way of non-uniform coupling in which the edges of all VNs in an SP are randomly connected according to a distribution ν. Generally, the edges of each VN can be connected according to a set of distributions. Let us illustrate the benefits of such coupling by an example. Consider again a coupled LDPC ensemble with w = 2 and d_c = 2d_v. Inspired by the protograph structure shown in Fig. 1, we partition the VNs in each SP into two sets of equal size, called "upper set" and "lower set". As described in Sec. II-A, the edges of VNs in the upper set are randomly connected to CNs according to the "upper" smoothing distribution ν = [α, 1 − α]. Similarly, the edges of VNs in the lower set are distributed according to the "lower" smoothing distribution ν̄ = [ᾱ, 1 − ᾱ]. Therefore, each CN receives two types of BP messages from VNs. Let x and x̄ denote the average erasure probabilities of the messages from the two VN types. Using DE analysis and a rough exhaustive search, we optimized α and ᾱ to find the largest BP threshold for different values of d_v. The thresholds are summarized in Tab. III. We observe that the thresholds almost saturate to capacity for d_v = 6 and d_v = 7 with only w = 2.
IV. NON-UNIFORM COUPLING: PROTOGRAPH ENSEMBLES
As most practical codes are based on protographs, we extend the findings of this paper to protograph-based codes with the elementary building segment of Fig. 1-b). In comparison to the random ensembles, there is less room for optimization as there are finite choices for b 1 and b 2 , each requiring a separate DE analysis, which is also slightly more complicated as the BP messages come from different edge types (multi-edge types DE). We computed DE thresholds for all possible protographs based on a simple elementary segment with 2 VNs and 2 CNs for L = 100 (r = 0.495). In Tab. IV, we summarize the best protographs and the respective thresholds that we find for different choices of d v . Some of the best elementary segments are shown in Fig. 6. Up to d v = 6, protographs with b 1 = b 2 = 1 are optimal, however, when d v > 6, interestingly, the choice b 1 = 1 and b 2 = 5 becomes optimal. | 2017-01-26T09:41:14.000Z | 2017-01-26T00:00:00.000 | {
"year": 2017,
"sha1": "5e8f34bc6237ab2e4ee19d1117815afcb57eb7f0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1701.07629",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c8e709b579e1b9f140fe4fbc51d684abd11279da",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
119051053 | pes2o/s2orc | v3-fos-license | The Large Number Limit of Multifield Inflation
We compute the tensor and scalar spectral index $n_t$, $n_s$, the tensor-to-scalar ratio $r$, the consistency relation $n_t/r$ in the general monomial multifield slow-roll inflation models with potentials $V \sim\sum_i\lambda_i \left|\phi_i\right|^{p_i}$. The general models give a novel relation that $n_t$, $n_s$ and $n_t/r$ are all proportional to the logarithm of the number of fields $N_f$ when $N_f$ is getting extremely large with the order of magnitude around $\mathcal{O}(10^{40})$. An upper bound $N_f\lesssim N_*e^{ZN_*}$ is given by requiring the slow variation parameter small enough where $N_*$ is the e-folding number and $Z$ is a function of distributions of $\lambda_i$ and $p_i$. Besides, $n_t/r$ differs from the single-field result $-1/8$ with substantial probability except for a few very special cases. Finally, we derive theoretical bounds $r>2/N_*$ ($r\gtrsim0.03$) and for $n_t$ which can be tested by observation in the near future.
The consistency relation [21] in single-field slow-roll inflation n t /r = −1/8 is a relation between tensor spectral index n t and the tensor-to-scalar ratio r. It is hoped that the detection of such a compelling signature can further validate inflation theory especially for the singlefield ones. Unfortunately the excess of B-mode power detected by BICEP2 [22] can be explained by the polarized thermal dust, not the primordial gravitational wave [23][24][25].
Recent experimental progress however, especially the Planck 2015 results [6] shows r 0.002 < 0.11 at 95% C.L. by fitting the Planck TT,TE,EE+lowP+lensing combination (P15). The BICEP2 & Keck Array B-mode data (BK14) implies r 0.05 < 0.09 (95% C.L.). Combining with Planck 2015 TT+lowP+lensing and some other external data, the upper bound on r becomes r 0.05 < 0.07 (95% C.L.) [11,26] in the base ΛCDM+r model. The tight constraint on r lead to the chaotic single-field inflation model with a potential V (φ) ∝ φ 2 being disfavored at more than 2σ C.L. [25]. Moreover, single-field inflation models with a monomial potential and the natural inflation model are all marginally disfavored at 95% C.L. and all single-field inflation models with a convex potential are not favored [26].
On the other hand, many high energy theories contain large numbers of scalar degrees of freedom at extremely high energy scales [27][28][29][30]; therefore single-field inflation models are simple but not natural in the very early Universe approaching the Planck energy density. Consequently, studies of the gravitational wave consistency relation and other inflationary observables for large-number multifield inflation are necessary and may provide a better representation of our real Universe. Price et al. derived good results for the N_f-monomial models with potential V ∼ Σ_i λ_i|φ_i|^p by marginalizing over the probability distributions of the model parameters and taking the many-field limit [31]. However, a diverse exponent p will be more appropriate and more consistent with many high energy theories [32][33][34][35][36][37][38][39]. In this paper we will derive robust results for n_t/r and other inflationary parameters in N_f-monomial models with diverse exponents p_i. In Sec. II, we employ the δN-formalism, the central limit theorem (CLT) and the Laplace method sequentially to calculate the expectations and corresponding variances of all the inflation parameters and prove their robustness. Numerical verifications and intuitive graphic representations are shown for some well-motivated prior probabilities of λ_i, p_i and initial conditions. We conclude in Sec. III.
II. THE GENERAL LARGE-N f MONOMIAL MULTIFIELD MODELS
We consider the multifield inflation with potential where φ i is the inflaton field, N f is the number of fields and λ i , p i are real, positive constants. For simplicity, we set the reduced Planck mass M pl = 1/ √ 8πG ≡ 1. According to the slow-roll inflation of first-order approximation we have n t = −2 and arXiv:1708.06136v2 [astro-ph.CO] 28 Aug 2017 In δN-formalism, applying initial flat slice of spacetime at time t * gives the number of e-folds N * from t * when the pivot scale k * leaves the horizon to the end of inflation at t c [40,41] where φ i, * and φ i,c denote values at the horizon crossing time and the end of inflation respectively. Substitute V i and V i = λ i p i |φ i | pi−1 then We can also express the gauge-invariant curvature perturbation ζ by the field perturbations at horizon crossing ζ ≈ i N * ,i δφ i, * , where N * ,i ≡ ∂N * /∂φ i . The power spectrum of scalar field perturbations around a smooth background at time t * is P ij δφ = (H * /2π) 2 δ ij . Consequently, the power spectrum of curvature perturbation is Recalling the tensor power spectrum P h = 2H 2 * /π 2 finally comes to the expression of tensor-to-scalar ratio in δN-formalism For N f -monomial models, it is reasonable to neglect the field values φ i,c at the end of inflation, i.e., we apply the horizon crossing approximation (HCA). From the definitions of scalar spectral index n s and we can derive n s − 1 = d ln i N * ,i N * ,i /dN − 2 , where we have taken the first order approximation of . Substituting the Friedman equations, the Klein-Gorden equations and the relation dN = Hdt comes to In our general N f -monomial model the V and N * are Eq.
(1) and Eq. (4) respectively and using HCA then gives the scalar spectral index n s in the first order of where φ i means the field values φ i, * at horizon crossing. Other inflation parameters in explicit expressions of φ i , p i , λ i are as follows: We set up the probability distribution for the parameters Eq. (7)-(11) by marginalizing them over P (λ), P (φ * ), and P (p) and then calculate their expectations and corresponding variances by applying the central limit theorem (CLT) in many-field limit in the order of magnitude about N f > O(100). To further simplify the expressions, we boost N f to be really large in the order O(10 40 ) and use Laplace method to produce the final analytical results precisely. Different choice of initial conditions has an insignificant effect on the density spectra [42]. And applying the HCA in Eq. (4) implies that P(φ * ) is a uniform prior on the surface of an N f ellipsoid whose elliptic radii are determined by P(p). So we can sample the ellipsoid by defining where N (0, 1) is a multivariate normal distribution. Subsequently, one of the summations in Eq.
In many-field limit N f → ∞ the CLT ensures that the summation is normally distributed with mean in which we assume that λ i , p i and x i are independent and angle brackets . indicates the expectation value. The lower term of denominator in Eq. (13) j x 2 j is χ-distribution and approaches normal distribution N ( N f , 1/ √ 2) in many-field limit. Besides, for any normally distributed variable x ∼ N (µ, σ) [43] where F 1,1 is the confluent hypergeometric function of the first kind and ν > −1. If ν < −1, |x| ν may diverge. As for µ = 0, σ = 1, then F 1,1 = 1. Also we know the ratio distribution α/β for normally distributed random variables (RVs) α ∼ N (µ α , σ α ) and β ∼ N (µ β , σ β ) as P (β > 0) → 1 will approach a normal distribution with mean µ α /µ β and standard deviation [44] is also approximately normal in many-field limit and we can prove the relation Then in many-field limit the mean of the summation is finite when p > 1/2. The means of other summations are similar to Eq. (17) and all the standard deviations can be calculated from the mean values and the corresponding two-moments so they are tedious algebraic functions of λ , λ 2 , λ 4 and many other terms. Finally by applying Eq. (16) and other conclusions in many-field limit the value of r is normally distributed with a mean and a standard deviation proportional to where γ is the correlation between the numerator and denominator in Eq. (10). The value of n t is normally distributed with a mean where p m is the minimum possible value and f (p) is the probability density function (PDF) of p. Note that a finite prediction for mean requires p > 1/2 and a finite standard deviation requires p > 3/4. To get the approximation Eq. (21) we have employed Laplace method (see Appendix A). Both the standard deviations of n t and are proportional to The value of consistency relation n t /r is a multiplication of two normally distributed asymptotic-sharp random variate with a mean where the requirements for p are the same as above and please see Eq. (B1) in Appendix B for the validity of multiplication splitting. Concretely, for typical P (λ) and P (p), Eq. (23) will be a very good approximation when N f is larger than O(100) but the approximation Eq. (24) is as good as Eq. (23) generally only if N f is larger than O(e 100 ) ∼ O(10 40 ). The standard deviation is proportional to and also see Appendix B for the detailed proof. The value of n s is a combination of two normally distributed variate with a mean and the standard deviation of the left term of the result is also proportional to From Eq. (21), requiring the slow variation parameter 0.1 then sets the upper limit of N f where Z is a value depends on the specific probability distributions of λ i and p i In addition, combining Eq. (18), Eq. (21), Eq. (24) and Eq. (27) immediately reaches the lower limiting value of consistency relation n t /r and a relation which is independent of specific probability distribution of λ i , p i and φ i, * . Adding the restriction p m > 1/2 gives two bounds of r r > 2(1 − n s + n t ), (34) and the value range of n t as 1 2 which can be tested by observation in the near future because Eq. (33) indicates r 0.03, which is exactly on the coverage of the next generation projects under construction.
Obviously, with all p i equal we can regain all the conclusions described in [31] and many other classic results from Eq. (23), Eq. (18), Eq. (20), and Eq. (26). But the extent of deviation from the single-field model result of n t /r = −1/8 gets much larger than the fixed-p ones. Figure 1 compares the predicted value from CLT for n t /r in Eq. (23) to corresponding numerical results from Eq. (11) with uniform-distribution λ i ∈ U[10 −14 , 10 −13 ] and uniform-distribution p i ∈ U [1,2] and p i ∈ U [1,3] respectively, showing excellent convergence in many-field limit. Furthermore, the wider the distribution of p i is, the larger N f is needed for getting the comparable convergence. Also we can strictly prove that the corresponding relative error is proportional to 1/N f . Figure 2 delineates the PDF for n t /r with λ i ∈ U[10 −14 , 10 −13 ] and p i ∈ U [1,3] when N f is 100 and 200 respectively. As shown, the larger the N f becomes, the sharper the PDF of n t /r will be and the more likely that the mean of n t /r can well represent the real value, as proved in Eq. 25. To understand the Laplace approximation result in Eq. (24) more intuitionally, we compare the central limit re-sults for n t /r in Eq. (23) to the predicted analytical values from Eq. (24) when N f is extremely large in Figure 3. It is clearly observed that when N f is small, the Laplace approximation is at a great deviation while the extremely large N f leads to a good agreement with Eq. (23) and the logarithmic correlation relation is evident. Also a wider distribution of p i needs a larger N f for a good approximation. But in a narrow distribution of p i as the setup in the figure, N f needs not to be as large as O(10 40 ) to make the Laplace approximation valid, only N f O(10 5 ). The N f ∼ O(10 40 ) condition is suitable for general cases. Notice that the C. L. results, in such a large N f , can represent the numerical ones perfectly well according to the aforementioned analysis.
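As a rough numerical check of this concentration behaviour (the explicit expressions (7)-(11) are not reproduced in this extracted text), the sketch below samples the initial fields on the N_* ellipsoid via normalized Gaussians, evaluates r = 8/Σ_i N_{*,i}² with N_{*,i} = φ_i/p_i under the HCA, and compares the draws with the many-field prediction r = 4/(N_* ⟨1/p⟩); the value N_* = 55 and the Monte Carlo sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_star = 55.0                       # e-folds at horizon crossing (assumed value)

def sample_r(N_f, p_low=1.0, p_high=3.0):
    """One Monte Carlo draw of r for the N_f-monomial model under the HCA.
    Assumes N_* = sum_i phi_i^2 / (2 p_i), N_{*,i} = phi_i / p_i and r = 8 / sum_i N_{*,i}^2,
    with the initial fields sampled on the N_* ellipsoid via normalized Gaussians."""
    p = rng.uniform(p_low, p_high, N_f)
    z = rng.normal(size=N_f)
    phi = np.sqrt(2.0 * p * N_star) * z / np.linalg.norm(z)   # satisfies sum phi^2/(2p) = N_*
    dN = phi / p
    return 8.0 / np.sum(dN ** 2)

mean_inv_p = np.log(3.0) / 2.0      # <1/p> for p ~ U[1, 3]
for N_f in (100, 1000, 10000):
    draws = np.array([sample_r(N_f) for _ in range(200)])
    # the standard deviation shrinks as N_f grows, illustrating the sharpening of the PDF
    print(N_f, draws.mean(), draws.std(), 4.0 / (N_star * mean_inv_p))
```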
III. CONCLUSIONS
We have computed the probability distributions for the tensor spectral index n t , tensor-to-scalar ratio r, scalar spectral index n s , and the consistency relation n t /r in the general large number monomial multifield inflation model, as a function of the probability distribution of couplings λ i , power indexes p i , initial field values and the number of fields N f . In many-field limit, all the distributions become sharp with the variances s 2 ∝ 1/N f , so the expected values we get are very robust.
We give a novel prediction that the inflationary parameters ε, n_t, n_s and n_t/r are all proportional to ln N_f when N_f is extremely large. The dependency between ε and ln N_f immediately gives the upper bound N_f ≲ N_* e^{Z N_*} if we require ε small enough, such as O(10^{-1}), where Z is a value decided by the specific probability distributions of λ_i and p_i. But the tensor-to-scalar ratio r = 4/(N_* ⟨1/p⟩) depends only on the probability distribution of p_i.
Besides, we find some distribution-independent relations between the inflationary observables and thereby we give some theoretical bounds for r and n_t, especially r > 2/N_* (r ≳ 0.03), which can be tested by observation in the near future. All predictions above together can distinguish diverse-p N_f-monomial models, fixed-p N_f-monomial models and their single-field analogues. This work marks another significant step in the multifield scenario where the predictions are sharp and generic in the large-N_f limit [31,[45][46][47][48][49][50][51][52][53][54][55][56]. Additionally, exploring a broader class of large number multifield models such as the multifield extension to small-field inflation will be intriguing follow-up work, in order to advance our understanding of the very early universe and the physics at extremely high energy.
Acknowledgments
The author would like to thank Qing-Guo Huang for careful review, comments, and feedback on this paper. The author is also grateful to Shi Pi and Cheng Cheng for helpful discussions. The contribution of the HPC Cluster of ITP-CAS is highly appreciated. This work is supported by project 11647601 of the National Natural Science Foundation of China.
The terms in Eq. (7) and Eq. (11) are evaluated in the large-N_f limit, which proves the asymptotic inverse square root relation. | 2017-08-28T06:44:50.000Z | 2017-08-21T00:00:00.000 | {
"year": 2017,
"sha1": "bf7f1ea7d3e2b4beea5557e86b19abcaa36e3921",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1708.06136",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "bf7f1ea7d3e2b4beea5557e86b19abcaa36e3921",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119113873 | pes2o/s2orc | v3-fos-license | 2D Kinematics and Physical Properties of z~3 Star-Forming Galaxies
We present results from a study of the kinematic structure of star-forming galaxies at redshift z~3 selected in the VVDS, using integral-field spectroscopy of rest-frame optical nebular emission lines, in combination with rest-frame UV spectroscopy, ground-based optical/near-IR and Spitzer photometry. We also constrain the underlying stellar populations to address the evolutionary status of these galaxies. We infer the kinematic properties of four galaxies: VVDS-020298666, VVDS-020297772, VVDS-020463884 and VVDS-020335183, with redshifts z = 3.2917, 3.2878, 3.2776, and 3.7062, respectively. While VVDS-020463884 presents an irregular velocity field with a peak in the local velocity dispersion of the galaxy shifted from the centre of the galaxy, VVDS-020298666 has a well-resolved gradient in velocity over a distance of ~4.5 kpc with a peak-to-peak amplitude of v = 91 km/s. We discovered that the nearby galaxy, VVDS-020297772 (which shows traces of AGN activity), is in fact a companion at a similar redshift with a projected separation of 12 kpc. In contrast, the velocity field of VVDS-020335183 seems more consistent with a merger on a rotating disk. However, all of the objects have a high local velocity dispersion (sigma ~ 60-70 km/s), which gives v/sigma < 1. It is unlikely that these galaxies are dynamically cold rotating disks of ionized gas.
INTRODUCTION
To understand the assembly of stars in galaxies, it is crucial to characterize the star formation rate and the total stellar mass already formed as a function of the dynamical mass (the mass of the halo including dark matter). Determining this relation at intermediate and high redshifts allows us to explore whether we are seeing the initial build-up of stellar mass in a fully-formed dark halo, or if the stellar mass to total mass ratio is fairly constant compared to lower-redshift samples (as might be the case if much of the stellar mass formed very early, or dark haloes were assembled through dry mergers triggering little further star formation). The redshift range z ∼ 2 − 4 is therefore of great interest, as massive galaxies are thought to undergo their most active period of star formation there (e.g., Hopkins & Beacom 2006).
In the past, deep multi-wavelength surveys (e.g., GOODS - Giavalisco et al. 2004; Hubble Deep Field - Williams et al. 1996; DEEP2 - Faber et al. 2007) have highlighted the large diversity in morphology and stellar populations among high redshift galaxies. Many studies using UV-IR photometry and long-slit spectroscopy, mainly focusing on the Lyman-break galaxy (LBG) populations (e.g. Erb et al. 2003, 2004, 2006a) and complemented by various studies using lensed objects (e.g., Lemoine-Busserolle et al. 2003; Bunker et al. 2000), have helped to understand the global characteristics of z ∼ 2 − 4 galaxies. However, with the limited information available from long-slit spectroscopy it is difficult to answer key questions on the dynamical nature of high redshift galaxies. It is plausible that some may be perturbed by bursts of star formation and merging to lie outside the low-redshift morphological classification scheme (the familiar Hubble sequence). To properly explore whether this is the case, or instead if most star-forming galaxies have stable disk kinematics, requires studying the resolved kinematic structure through "3D" spectroscopy with Integral Field Units (IFUs).
Notes to Table 1: (1) The companion object was discovered in our IFU data cube to share the same redshift; its original photometric redshift from VVDS/CFHT-LS was z_phot = 1.747 with a broad minimum in χ², and it was not observed spectroscopically with VIMOS as part of the VVDS survey. (2) Total observing time usable for each target (900 s for each individual exposure). (3) Median seeing estimated from the PSF stars. (4) Average airmass of all the exposures.
The recent advent of high-resolution IFUs on 8m-class telescopes has enabled the study of high-redshift galaxies on scales of a few kiloparsecs in seeing-limited conditions (e.g. Bunker et al. 2004; Smith et al. 2004) and even down to sub-kiloparsec scales using adaptive optics and the magnification due to gravitational lensing (e.g., Stark et al. 2008). Very recent studies using Integral Field Spectroscopy have started to give an insight into the kinematic properties and physical phenomena at play in high redshift galaxies, such as merging and galactic winds associated with strong star formation (Genzel et al. 2006; Law et al. 2007a; Wright et al. 2007, 2009; Förster Schreiber et al. 2006; Bouché et al. 2007; Genzel et al. 2008; Nesvadba et al. 2008; van Starkenburg et al. 2008). There is some controversy from this early work about the properties of the dynamical structure of these distant galaxies (particularly the proportion of rotationally supported gaseous disks versus objects kinematically dominated by high velocity dispersions). However, there is a consensus from the work so far that at z > 2 galaxies typically exhibit higher velocity dispersions (relative to ordered rotation) compared with lower-redshift samples.
In this paper, we present results on the kinematic properties of a sample of four galaxies at 3 < z < 4 selected in the VVDS Deep data (I < 24; Le Fèvre et al. 2005), using IFU K-band spectroscopy with VLT/SINFONI. These spectra are sensitive to the rest-frame optical, in particular the nebular emission lines from gas ionized by hot stars in regions of recent star formation. Hence we can determine the current star formation rate and the spatial distribution of star formation, as well as measuring the kinematics of the gas. We are able to study the stellar populations through CFHT photometry from both the VVDS and CFHTLS surveys, and also imaging with the Spitzer Space Telescope using IRAC and MIPS, complemented by UV rest-frame spectroscopy from VLT/VIMOS. The results presented here are part of a study to construct samples representative of the global intermediate and high-z population by selecting galaxies only on the basis of their magnitude (I_AB < 24.75). Two companion papers, Lemoine-Busserolle et al. (2009b, in revision) and Queyrel et al. (2009, in revision), are devoted to the study of intermediate redshift galaxies (1 < z < 1.5) observed with VLT/SINFONI during the same observing runs.
In Section 2 we describe our sample, observational strategy and the data reduction techniques. In Section 3 we address the nature of the stellar population of the galaxies, using broad-band photometry to constrain properties such as stellar mass, age and star formation rate. In Section 4 we investigate the kinematic structure and dynamical properties inferred from the integral field spectroscopy. Finally in Section 5 we summarize our results and explore the nature of these LBG galaxies at z = 3 − 4. We compare our findings with previous studies of kinematic properties of z ∼ 2 − 3 galaxies.
Sample Selection
Our IFU observations targeted a sample of 3 galaxies selected in the 02h "deep" (17.5 ≤ I_AB ≤ 24.0) field of the VIMOS VLT Deep Survey (VVDS; Le Fèvre et al. 2005). The z > 3 objects presented here were selected on the basis of redshifts derived from the VLT/VIMOS rest-UV spectra, such that the expected wavelengths of the rest-optical Hβ/[O III] emission lines would be clear of bright OH sky lines in the near-IR.
SINFONI Observational Strategy
The near-IR spectroscopic observations of the sample were acquired with the 3D spectrograph SINFONI (SINgle Faint Object Near-IR Investigation; Eisenhauer et al. 2003) at the ESO-VLT during two 4-night runs, on September 5-8, 2005 (ESO run 75.A-0318) and on November 12-15, 2006 (ESO run 78.A-0177). We used the largest-field mode of SINFONI, to enable good sky subtraction without the need for offset sky observations. This mode has a spaxel scale of 0.125″ × 0.25″ (with coarser sampling in y), leading to a field of view of 8″ × 8″. Our observations used the K-grism, spanning 1.94 − 2.46 µm with 2.45 Å pixels, and are detailed in Table 1. Unresolved sky lines had a measured width of ≈ 7 Å (FWHM), indicating a spectral resolution R_S = λ/Δλ_FWHM ≈ 3300 (or 90 km/s velocity resolution). Conditions were photometric, and the median seeing for each object is given in Table 1. Each target was acquired through a blind offset from a nearby bright star. Individual integrations were 900 s, and we read out the array in multiple non-destructive read mode to reduce the readout noise to ∼ 7 e− and become background-limited. Between readouts the telescope was nodded by ≈ 4″, positioning the galaxy in opposite corners of the 8″ × 8″ SINFONI field of view. Two sets of observations, located around the upper corner (A) and the lower corner (B), were obtained per object. This observational procedure allows background subtraction using frames contiguous in time, but with the galaxy in different locations. Moreover, the target was never located exactly at the same position on the detector: a minimal sub-dithering of 0.3″ was required in order to minimize instrumental artifacts when the individual observations are aligned and combined together. The target was reacquired from the offset star every hour. The blind offset stars were also used to monitor the PSF. The total on-source integration times are listed in Table 1. SINFONI was used in its seeing-limited mode for two galaxies (VVDS-020298666 in September 2005 and VVDS-020335183 in November 2006, hereafter abbreviated to VVDS-8666 and VVDS-5183), and in closed-loop NGS-AO (Natural Guide Star Adaptive Optics) mode for the observations of VVDS-020463884 (hereafter called VVDS-3884) on 7 September 2005, which had a suitably bright K = 10.7 star 27.4″ away. The Strehl ratio achieved at 2.2 µm was 15%.

Figure 1. Two-dimensional spectrum of the nebular emission lines of VVDS-8666 and VVDS-7772. The x-axis is wavelength (increasing to the right) and spans the range λ = 20800 − 21620 Å. The y-axis is spatial distance, with a length of 6.0″ displayed.
SINFONI Data Reduction and Flux Calibration
Data reduction has been performed with the ESO-SINFONI pipeline (version 1.7.1, Modigliani et al. 2007), IRAF and custom IDL scripts. The individual 900 s exposures were run through the pipeline, which flat-fields the frames using an internal lamp, corrects for hot pixels using dark frames, wavelength calibrates each individual observation from the spectra of reference arc lamps, and then reconstructs and spatially registers the three-dimensional datacubes (with the sky background still present) for each exposure after applying a distortion correction. Our observing strategy was implemented to maximize on-source integration and therefore no offset blank-sky frame was obtained. As the field of view used with SINFONI is larger than the spatial extent of the galaxies, we were able to obtain a clean sky background subtraction by stepping the galaxies between two well-separated points within the field of view, using the approach and IDL script presented in Davies (2007). Sky background subtraction was done directly on the cube, using the individual cubes at position B as a sky for the individual cubes at position A and vice versa. After sky subtraction, individual cubes were aligned in the spatial direction by relying on the telescope offsets from the nearby bright star (used as a reference for the blind acquisition) and then combined together, rejecting remaining cosmic ray strikes, to produce the final reduced cube for each object. A flux calibration is required in order to derive absolute parameters (e.g. star formation rate) from the uncalibrated flux measured in emission lines. Each science observation was accompanied by the observation of a telluric standard star of B spectral type and magnitude K = 6 − 7 at similar airmass. Integration times were 2 − 6 s for these standard star observations, and the cubes were reduced in a similar way to our science exposures. We extracted 1D spectra of the stars (summing all the flux within an aperture of diameter 5 resolution elements), which were used both to perform the flux calibration of our galaxy cubes and to correct them for telluric absorption lines. We assume that the standard stars are well fitted by a pure blackbody curve (a good approximation for these B stars). Starting from the blackbody temperature of the standard star and its magnitude in the K-band, we calibrated the sensitivity function using IRAF. The galaxy cubes were then divided by the calibration curve appropriate for that observation to produce a flux-calibrated spectrum, corrected for atmospheric absorption. At the centre of the K-band, the conversion was approximately 1 count/s = 2 × 10⁻¹⁷ erg cm⁻² s⁻¹.
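A minimal sketch of the blackbody-based flux calibration described above is given below. The crude top-hat K filter and the zero-point value are assumptions introduced for illustration; the actual calibration was performed with IRAF.

```python
import numpy as np

H = 6.626e-27   # erg s
C = 2.998e10    # cm s^-1
KB = 1.381e-16  # erg K^-1

def blackbody_flam(wave_aa, temp_k):
    """Planck B_lambda with arbitrary normalisation; wavelength in Angstrom."""
    lam = wave_aa * 1e-8  # cm
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp_k))

def sensitivity_curve(wave_aa, star_counts_per_s, temp_k, k_mag,
                      f0_k=4.3e-11, k_center=21900.0, k_width=3000.0):
    """Conversion from counts/s to erg s^-1 cm^-2 A^-1 at each wavelength.

    f0_k is an assumed (approximate, 2MASS-like) Vega K-band zero point in
    erg s^-1 cm^-2 A^-1; the K filter is modelled as a simple top-hat.
    """
    bb = blackbody_flam(wave_aa, temp_k)
    in_k = np.abs(wave_aa - k_center) < k_width / 2.0
    target = f0_k * 10.0 ** (-0.4 * k_mag)   # catalogue flux of the standard star
    bb *= target / bb[in_k].mean()           # normalise the model spectrum
    return bb / star_counts_per_s            # flux per (count/s)

# Calibrated cube: cube_counts_per_s * sensitivity_curve(...)[:, None, None]
# (with the wavelength axis first), as an illustration of the division step.
```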
We measured the noise in each data cube by taking a slice in wavelength close to the position of the line emission. We then determined the RMS counts in blank areas uncontaminated by emission from the target galaxy, in the region where all the dithered pointings overlapped. In computing the flux uncertainty in an emission line from the extracted spectrum, we multiplied the measured pixel-to-pixel noise in the data cube by π r²_spat d_λ, where r_spat is the radius in pixels of the extraction aperture and d_λ is the wavelength extent (again in pixels) over which the line flux is measured. For the galaxies VVDS-5183, VVDS-8666 and VVDS-3884 we used spatial apertures of r_spat = 8 pixels (r = 1.0″), and for the compact companion VVDS-020297772 (hereafter abbreviated to VVDS-7772) we used a smaller aperture of r_spat = 4 pixels (r = 0.5″). In all cases, we used an extraction of width 8 pixels in wavelength (20 Å or 300 km/s), which was ≳ the FWHM of the lines. The typical 1σ noise was 0.2 − 0.3 × 10⁻¹⁷ erg cm⁻² s⁻¹ for our extraction aperture.
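A short sketch of this uncertainty estimate, following the (conservative) prescription in the text of scaling the pixel-to-pixel RMS by the number of pixels in the extraction aperture. Array and variable names are illustrative.

```python
import numpy as np

def line_flux_error(cube, line_channel, r_spat_pix, d_lambda_pix, blank_mask):
    """cube: (nz, ny, nx) flux-calibrated datacube; blank_mask: 2D boolean map of
    blank sky inside the region where all dithered pointings overlap."""
    plane = cube[line_channel]
    rms = np.std(plane[blank_mask])                      # pixel-to-pixel noise
    n_aperture = np.pi * r_spat_pix**2 * d_lambda_pix    # pixels in the aperture
    return rms * n_aperture

# Example: r_spat = 8 and d_lambda = 8 pixels give an aperture factor of
# pi * 8**2 * 8 ~ 1600 pixels, as used for the three extended galaxies.
```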
The blind offset stars were reduced in the same way as the telluric standards. We produced collapsed 2D images using the SINFONI pipeline to allow the extent of the PSF to be estimated from the offset stars.
SPECTROPHOTOMETRIC PROPERTIES
The companion is included in the VVDS photometric catalog as object VVDS-020297772 (hereafter VVDS-7772), with I_AB = 25.0, and is undetected in the U-band. Its photometric redshift is only z_phot = 1.7478, very different from our spectroscopic redshift of z_spec = 3.2878, which may be due to an unusual SED (perhaps contaminated by the presence of an AGN, see Section 3.3). The spatial separation of the galaxies is 1.6″ ± 0.2″, measured from the line emission in the 3D IFU cube, which corresponds to a projected distance of 12 kpc at z = 3.29 (the nominal separation in the broad-band images, in worse seeing, is 1.9″). If we assume that these two galaxies are gravitationally bound, and the velocity offset is due to orbital motion and not outflows etc., the enclosed mass is given by v²r/G, and we can set a lower bound on the orbital radius, r, by using the projected separation (12 kpc) and also the resolved component of the orbital velocity along the line of sight (307 km s⁻¹). This yields M_system > 2.6 × 10¹¹ M⊙.
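The enclosed-mass bound quoted above follows from simple arithmetic; a quick numerical check (constants in SI units) reproduces the quoted value.

```python
# Order-of-magnitude check of the bound M > v^2 r / G with v = 307 km/s, r = 12 kpc.
G = 6.674e-11          # m^3 kg^-1 s^-2
KPC = 3.086e19         # m
MSUN = 1.989e30        # kg

v = 307e3              # m/s (line-of-sight velocity offset)
r = 12 * KPC           # projected separation

m_enclosed = v**2 * r / G / MSUN
print(f"M_system > {m_enclosed:.1e} Msun")   # ~2.6e+11 Msun, as quoted
```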
Rest-Frame UV Spectroscopy
Optical spectra have been obtained with the VIMOS instrument installed at the ESO/VLT as part of the VVDS survey (Fig. 2). The spectral coverage is 5600 Å < λ < 9250 Å, which corresponds approximately to rest-frame UV wavelengths 1300 Å < λ < 2150 Å for the z ≈ 3.3 galaxies, and 1190 Å < λ < 1970 Å for VVDS-5183 at z ≈ 3.7 (which encompasses Lyman-α 1216 Å in this high-redshift case, but not in the other galaxies). The resolution is R_s ≈ 230. Of the three galaxies, VVDS-8666 has a relatively featureless spectrum with only O I 1302 Å / C II 1304 Å absorption prominent. VVDS-3884 has strong absorption features from the high-ionization interstellar medium (ISM) lines C IV 1548,1550 Å and Si IV 1394,1403 Å (with rest-frame equivalent widths of W_0 = 4.5 Å and 3.0 Å, respectively), as well as weaker C II 1334 Å and Si II 1526 Å absorption. The high-ionization lines are blueshifted with respect to the [O III] redshift by ≈ 100 km s⁻¹, with the low-ionization lines blueshifted by ≈ 300 km s⁻¹. There is a hint of He II 1640 Å emission in the 1D spectrum. In contrast, VVDS-5183 has more prominent low-ionization ISM absorption lines than the high-ionization C IV & Si IV. All the ISM lines are blueshifted by ∼ 500 km s⁻¹ relative to the [O III] redshift. This galaxy exhibits weak Lyman-α emission of 6 × 10⁻¹⁸ erg cm⁻² s⁻¹ that is redshifted by 200 km s⁻¹ relative to [O III], with a rest-frame equivalent width of W_0 ≈ −7 Å.
Rest-Frame Optical Spectroscopy and Metallicity Estimates from Nebular Lines
Integrated rest-frame optical 1D spectra were extracted from the SINFONI data cube for each object. From these we measured line fluxes and velocity widths of the integrated line emission (see Fig. 3). We can use the [O III]/Hβ ratio to investigate any possible contribution of an AGN to the nebular spectrum. We follow the classification proposed by Lamareille et al. (2009). This classification is based on the distribution of galaxies of known type in the 2dFGRS survey, with respect to their [O III]/Hβ ratio. The classification as a star-forming galaxy is secure for log([O III]/Hβ) lower than 0.4, and the classification as an AGN is secure for log([O III]/Hβ) greater than 0.7. It is impossible to classify with certainty galaxies lying in the region between these two limits, but we know that this region is dominated by star-forming galaxies (≈ 60% for log([O III]/Hβ) lower than 0.6).
With a log([O III]/Hβ) of 0.46 and 0.51, respectively, VVDS-3884 and VVDS-5183 are candidate star-forming galaxies. The nebular spectra of these two galaxies are probably produced in hot H II star-forming regions. However, there is also a chance (≈ 40%) that they are instead dominated by an AGN (i.e. Seyfert 2 galaxies). VVDS-8666 is without doubt a star-forming galaxy. Finally, the Hβ line from the companion (VVDS-7772) is undetected, unlike in VVDS-8666, and we derive an upper limit for the Hβ emission by collapsing the data cube around the wavelength inferred from the companion's [O III] redshift, and performing aperture photometry and background subtraction on the 2D line map. We quote a 2σ upper limit from the local background noise in the 3D data cube. We can thus compute a lower limit for log([O III]/Hβ), which is 0.93. With such a large [O III]/Hβ ratio, VVDS-7772 is probably dominated by an AGN.
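The classification scheme applied above reduces to two thresholds on log([O III]/Hβ); a minimal sketch, with the thresholds taken from the text, is:

```python
def classify(log_o3hb):
    """Classify a galaxy from its log([O III]/Hbeta) ratio (thresholds from the text)."""
    if log_o3hb < 0.4:
        return "star-forming (secure)"
    elif log_o3hb > 0.7:
        return "AGN (secure)"
    else:
        return "intermediate (region dominated by star-forming galaxies)"

# Values quoted above; the 0.93 entry for VVDS-7772 is a lower limit (Hbeta undetected).
for name, value in [("VVDS-3884", 0.46), ("VVDS-5183", 0.51), ("VVDS-7772", 0.93)]:
    print(name, classify(value))
```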
Gas-phase oxygen abundances may also be estimated from the [O III]/Hβ ratio (see Table 2) using the relation provided by Liang et al. (2006), which was calibrated on Tremonti et al. (2004) metallicities in the SDSS DR4 sample. We find 12+log(O/H) of 8.57 ± 0.02, 8.66 ± 0.02, and 8.56 ± 0.02 for VVDS-3884, VVDS-8666, and VVDS-5183, respectively. Figure 4 shows the position of these galaxies in the stellar mass-metallicity plane. In this figure, we also show the z ∼ 0 (Tremonti et al. 2004), z ∼ 2 (Erb et al. 2006a), and z ∼ 3 (Maiolino et al. 2008) relations, the last two being rescaled to Tremonti et al. (2004) metallicities using equations from Kewley & Ellison (2008). Our galaxies fall between the z ∼ 2 and z ∼ 3 relations. We note, however, that the [O III]/Hβ (Liang et al. 2006) calibration relies on the underlying relation between gas-phase oxygen abundance and ionization degree in the parent sample. Thus, this calibration might overestimate metallicities for galaxies at high redshift, given that they are likely to show higher ionization states and higher [O III]/Hβ ratios for lower metallicities than those observed in the local universe. Compared to the local mass-metallicity relation, the galaxies of our sample show a −0.47 dex mean systematic shift in metallicity.

Table 4 notes: we use a post-starburst SED with no dust to fit it, rather than continuous star formation. The columns are as follows: (a) stellar mass (10¹⁰ M⊙); (b) stellar population age (Myr) from SED fitting; (c) luminosity (×10²⁸ erg s⁻¹ Hz⁻¹) from the rest-frame UV continuum flux; (d) star formation rate (M⊙ yr⁻¹) estimated from the rest-frame UV continuum flux; (e) star formation rate (M⊙ yr⁻¹) estimated from SED fitting; (f) dust reddening derived from the SED fitting.
Photometry, SED modelling and Stellar population properties
We use broad-band photometry from the optical to the mid-infrared to determine the spectral energy distribution, and through fitting spectral evolutionary synthesis models we constrain the masses and ages of the stellar populations, and derive an independent estimate of the star formation rate to compare with that from the rest-frame optical nebular lines. We use the photometry from the VVDS catalog of the VVDS Deep Field 0226-04 (available from http://cencosw.oamp.fr/VVDSphot/VVDS/vvds.html; McCracken et al. 2003; Le Fèvre et al. 2004) in the BVRI filters, obtained with the CFH12K camera on the CFHT. For some galaxies this is supplemented by J and KS imaging with SOFI on the ESO NTT (Iovino et al. 2005; Temporin et al. 2008). We supplement these magnitudes with the recent CFHT Legacy Survey observations of this area obtained with MegaCam, using photometry in griz from the TERAPIX CFHTLS-T0003 release (Ilbert et al. 2006; http://cencosw.oamp.fr/CFHTLS/ presents the CFHT-LS magnitudes and photometric redshifts). We note that all magnitudes used here are on the AB system. For the VVDS optical/near-IR photometry we use MAG AUTO (that is, "total" magnitudes derived by SExtractor). There is also Spitzer Space Telescope imaging of this field (the XMM-LSS), obtained under the Spitzer Wide-area Infrared Extragalactic Survey (SWIRE; Surace et al. 2005). We measured the IRAC photometry from version 4 of the SWIRE team's reduced data products (available from http://data.spitzer.caltech.edu/popular/swire/20061222 enhanced/XMM LSS/irac/). In the reduced dataset, the IRAC pixels have been rebinned by a factor of 2 during the "drizzling" process from their original 1.2″ pix⁻¹, so the pixel scale is now 0.6″. We convert the units of the mosaics from MJy/sr (the calibration from the Spitzer PBCD pipeline) to µJy, which requires multiplying by 8.74. As with the optical/near-IR imaging, we work in AB magnitudes, where 1 µJy corresponds to 23.93 AB mag. Images in all four Spitzer channels were available (with central wavelengths 3.6 µm, 4.5 µm, 5.8 µm & 8.0 µm), although the less sensitive channels 3 and 4 had no detections in the vicinity of our SINFONI observations, with 3σ limits of m_AB < 21.3 and 21.1 at 5.8 µm & 8.0 µm respectively. For channels 1, 2, 3 & 4 we used circular apertures of radius 2, 2, 2.5 & 3 pixels (2.4″, 2.4″, 3.0″ & 3.6″ diameter) respectively, and an aperture correction of 0.7 mag (see Eyles et al. 2005). Once magnitudes in each of the different wavebands had been obtained, the photometric data were used to construct SEDs for each of our selected sources. We made use of the Bruzual & Charlot (2003) isochrone synthesis code (hereafter B&C), utilising the Padova-1994 evolutionary tracks (preferred by B&C). The models span a range of 221 age steps approximately logarithmically spaced, from 10⁵ yr to 2 × 10¹⁰ yr, although here we discount solutions older than ∼ 2 × 10⁹ yr (the age of the Universe at z ≈ 3). The B&C models have 6900 wavelength steps, with high resolution (FWHM 3 Å) and 1 Å pixels over the wavelength range 3300 Å to 9500 Å, and are unevenly spaced outside this range. Throughout this paper, we adopt the Chabrier (2003) initial mass function (IMF). From the range of possible star formation histories (SFH) available, we focus on a constant star formation rate (SFR), as these Lyman-break galaxies are selected on their rest-UV continuum (i.e. have ongoing star formation). This star formation rate history is also used in the work of Erb et al. (2006b).
For the constant SFR model, the B&C template normalization is an SFR of 1 M⊙ yr⁻¹. We also consider the possibility that the optical-infrared colours of objects within our sample could be due to intrinsic dust reddening, rather than an age-sensitive spectral break. We adopted the empirical reddening model of Calzetti (1997), suitable for starburst galaxies. Table 4 summarises the properties of the stellar population (absolute magnitudes, stellar mass, population age, extinction, etc.) obtained for the galaxies of our sample.

Table 5 columns: (a) area of the [O III] λ5007 emission in kpc²; (b) gas surface density (M⊙ yr⁻¹ pc⁻²); (c) gas mass (10¹⁰ M⊙); (d) gas fraction µ = M_gas/(M_gas + M_*); (e) whether the nebular spectrum is dominated by an AGN ('no' implies it is dominated by star formation); (f) gas-phase oxygen abundance; (g) luminosity (×10⁴⁰ erg s⁻¹); (h) star formation rate (M⊙ yr⁻¹) estimated from the Hβ luminosity; (i) reddening suffered by the gas, derived from SED fitting; (j) dereddened star formation rate.
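A minimal sketch of the kind of grid-based SED fit described above: constant-SFR templates reddened with a Calzetti-like law and scaled to the observed photometry by χ² minimisation. The template grid, filter handling and attenuation-curve implementation below are generic assumptions for illustration, not the authors' actual code.

```python
import numpy as np

def calzetti_k(lam_um):
    # Approximate Calzetti (2000) attenuation curve k(lambda), valid ~0.12-2.2 um.
    lam = np.asarray(lam_um, dtype=float)
    return np.where(
        lam < 0.63,
        2.659 * (-2.156 + 1.509 / lam - 0.198 / lam**2 + 0.011 / lam**3) + 4.05,
        2.659 * (-1.857 + 1.040 / lam) + 4.05,
    )

def fit_sed(obs_flux, obs_err, model_fluxes, ebv_grid, lam_eff_um):
    """obs_flux/obs_err: (n_bands,); model_fluxes: (n_ages, n_bands) for SFR = 1 Msun/yr.
    Returns (best age index, best E(B-V), best scale = SFR, best chi^2)."""
    best = (None, None, None, np.inf)
    for i, model in enumerate(model_fluxes):
        for ebv in ebv_grid:
            red = model * 10 ** (-0.4 * ebv * calzetti_k(lam_eff_um))
            # Best-fitting normalisation (i.e. the SFR) in closed form:
            scale = np.sum(obs_flux * red / obs_err**2) / np.sum(red**2 / obs_err**2)
            chi2 = np.sum(((obs_flux - scale * red) / obs_err) ** 2)
            if chi2 < best[3]:
                best = (i, ebv, scale, chi2)
    return best
```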
Star Formation rates
We computed star formation rates for the four galaxies in our sample from the Hβ line, from the SED fitting, and from the rest-frame UV continuum emission. Ultraviolet-derived star formation rates were calculated from the broadband optical photometry, using the I-band for VVDS-5183 at z = 3.7 and the R-band for the other galaxies at z ≈ 3.3 to determine the UV continuum level around 1500 Å. In the absence of dust, the UV continuum density per unit frequency (f_ν) is approximately flat between Lyman-α and the Balmer break for a constant star formation rate. The mean luminosity in frequency units, L_1500, is calculated from the observed flux density and the luminosity distance (Eq. 1). We can then deduce SFR_UV following Kennicutt (1998a), scaled to a Chabrier (2003) IMF:

SFR (M⊙ yr⁻¹) = 0.83 × 10⁻²⁸ L_1500 (erg s⁻¹ Hz⁻¹)   (2)

and the star formation rates derived from the rest-frame UV are shown in Table 4.
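A hedged sketch of this UV SFR estimate. Since Eq. (1) is not reproduced in this excerpt, the standard relation L_ν = 4π d_L² f_ν/(1+z) is assumed for the luminosity, together with a flat ΛCDM cosmology for the luminosity distance; Eq. (2) is then applied directly.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology

def sfr_uv(mag_ab, z):
    """SFR (Msun/yr) from an AB magnitude sampling the rest-frame ~1500 A continuum."""
    f_nu = 10 ** (-0.4 * (mag_ab + 48.60))              # erg s^-1 cm^-2 Hz^-1
    d_l = cosmo.luminosity_distance(z).to(u.cm).value   # cm
    l_nu = 4 * np.pi * d_l**2 * f_nu / (1 + z)           # erg s^-1 Hz^-1 (assumed Eq. 1)
    return 0.83e-28 * l_nu                                # Eq. (2), Chabrier IMF

# e.g. sfr_uv(24.5, 3.29) for an R ~ 24.5 galaxy at z ~ 3.3 (illustrative input).
```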
We also derived star formation rates from the Hβ emission-line luminosity. Given that no other Balmer line is observed, we are not able to compute the dust attenuation from the standard Balmer-decrement method. Thus, we decided to use instead the dust attenuation determined from the SED fitting, following Erb et al. (2009). Argence & Lamareille (2009) have shown that when the dust attenuation is not computed from the Balmer-decrement method (e.g. with the observed Hα/Hβ ratio), the standard scaling law which relates the emission-line luminosities and the star formation rates, i.e. SFR = a × L(line) (Kennicutt 1998a), is no longer valid. It has to be replaced instead by a power-law relation, i.e. SFR = a × L(line)^b with b ≠ 1. We used the Calzetti et al. (2000) extinction law. We thus combine equations 6 and 14 of Argence & Lamareille (2009) to estimate the nebular star formation rates of our galaxies. We multiplied the resulting star formation rates by a factor of 0.88 to convert from a Kroupa (2001) IMF to a Chabrier (2003) IMF. The resulting dust-corrected star formation rates from Hβ are shown in Table 5. We found high values for the corrected nebular star formation rate, SFR_neb^corrected, which should be taken with caution. It is indeed difficult to estimate a reliable dust extinction from SED fitting at such high redshifts, where the relations between dust, age, and metallicity might be very different from what is observed and relatively well understood at low redshift. Thus, these values are indicative, and we use the non-dereddened values of the star formation rate, SFR_neb^0, in the rest of the paper.
Gas Masses and Gas Fractions
We computed the gas mass using the empirical global Schmidt law, the correlation between star formation surface density and gas surface density (see Kennicutt 1998b). Assuming the galaxies in our sample follow such a law, we used SFR_neb^0 and the spatial extent of the [O III] λ5007 emission, an area A_neb (measured from the 2D intensity map of the [O III] λ5007 emission and deconvolved with the PSF, see Section 4.2), to compute the star formation surface density (as is done in Erb et al. 2006b). From the star formation surface density, we deduced the gas surface density using equation (4) in Kennicutt (1998b). We then deduced the gas masses via M_gas = Σ_gas A_neb. Using the stellar masses determined from the SED fitting, we can derive the gas fraction as µ = M_gas/(M_gas + M_*). Table 5 summarizes the results obtained for the four galaxies of our sample. All of these objects, except VVDS-7772, have high values of the gas fraction (µ > 0.5), similar to that found in the z ≈ 2 − 2.5 sample of Law et al. (2009).
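A sketch of this inversion of the Schmidt law. The coefficient and exponent used below are the commonly quoted values of the Kennicutt (1998b) global relation, taken from the standard literature rather than from this excerpt.

```python
# Assumed global Schmidt law: Sigma_SFR = 2.5e-4 (Sigma_gas / Msun pc^-2)^1.4
# in units of Msun yr^-1 kpc^-2 (commonly quoted Kennicutt 1998b form).

def gas_mass(sfr, area_kpc2):
    """sfr: uncorrected nebular SFR (Msun/yr); area_kpc2: [O III] emitting area (kpc^2).
    Returns (Sigma_gas in Msun/pc^2, M_gas in Msun)."""
    sigma_sfr = sfr / area_kpc2                       # Msun yr^-1 kpc^-2
    sigma_gas = (sigma_sfr / 2.5e-4) ** (1.0 / 1.4)   # Msun pc^-2
    m_gas = sigma_gas * area_kpc2 * 1e6               # convert kpc^2 -> pc^2
    return sigma_gas, m_gas

def gas_fraction(m_gas, m_star):
    return m_gas / (m_gas + m_star)
```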
Kinematic measurements
We produced maps of the dynamics for the galaxies in our sample by using E3D, the Euro3D visualization tool, and code for the fitting and analysis of kinematics (e.g., Sánchez et al. 2005). Our main goal was to determine the kinematics of the ionized gas using the strongest emission line, [O III] λ5007. For each spatial pixel (spaxel) we fit the [O III] λ5007 emission-line region with a single Gaussian function, in order to characterize the emission line, and a pedestal to characterize any spectral continuum. We first smoothed the reduced cubes spatially with a Gaussian of FWHM = 3 pixels (0.37″). The line flux, FWHM, central wavelength and the continuum (pedestal) were then fitted. From the results of this fitting we obtained maps (see Fig. 6) of the [O III] λ5007 emission-line intensity, the relative radial velocity (V_r) and the velocity dispersion (σ). The dispersion map was corrected for the contribution of the instrumental dispersion, as determined from the FWHM of unblended and unresolved sky lines. Error maps for the velocity and dispersion measurements were also computed; these are dominated by the effect of random noise in fitting the line profile, which produces large errors at low S/N. Errors ranged from 6 km s⁻¹ to 24 km s⁻¹ for the maps.
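A minimal sketch of the per-spaxel line fitting described above, using a Gaussian-plus-pedestal model. The scipy-based implementation and array layout are assumptions for illustration; the original analysis used dedicated Euro3D/IDL tools.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # km/s

def gauss_ped(lam, amp, cen, sigma, pedestal):
    return amp * np.exp(-0.5 * ((lam - cen) / sigma) ** 2) + pedestal

def fit_cube(cube, lam, lam_guess, sigma_instr_aa):
    """cube: (nz, ny, nx) spectra around [O III] 5007; lam: wavelength axis (A).
    Returns line-flux, velocity and dispersion maps (dispersion corrected for
    the instrumental width in quadrature)."""
    nz, ny, nx = cube.shape
    flux = np.full((ny, nx), np.nan)
    vel = np.full((ny, nx), np.nan)
    disp = np.full((ny, nx), np.nan)
    for j in range(ny):
        for i in range(nx):
            spec = cube[:, j, i]
            p0 = [spec.max(), lam_guess, 3.0, np.median(spec)]
            try:
                p, _ = curve_fit(gauss_ped, lam, spec, p0=p0)
            except RuntimeError:
                continue                              # keep NaN where the fit fails
            amp, cen, sig, _ = p
            flux[j, i] = amp * np.sqrt(2 * np.pi) * abs(sig)
            vel[j, i] = C_KMS * (cen - lam_guess) / lam_guess
            sig2 = sig**2 - sigma_instr_aa**2         # remove instrumental dispersion
            disp[j, i] = C_KMS * np.sqrt(max(sig2, 0.0)) / cen
    return flux, vel, disp
```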
Morphological properties
The left-most panel, (a), of Figure 6 shows the UV rest-frame morphology around 1500 Å, with panel (b) showing the maps of the ionized gas morphology for the galaxies in our sample. For the four objects of the sample, the UV rest-frame morphology and the morphology of the ionized gas appear similar. VVDS-3884 presents a bright concentrated central region, with a diameter of 2.3 kpc. Two fainter regions, slightly elongated toward the South and the North-west, surround the bright nucleus. VVDS-8666 exhibits an elliptical morphology with a large peak in intensity at its centre in both the R-band image and the line map. The galaxy VVDS-7772 looks compact, showing a peak in the ionized gas map slightly offset to the North-East. VVDS-5183 consists of a bright nucleus and a low surface brightness elongated region extended toward the North-East. For VVDS-8666 we used the GALFIT software (Peng et al. 2002) on the CFHT R-band image to deduce the morphological parameters, such as the centre, the position angle, and the axial ratio. Assuming that the rest-UV light is dominated by stars in an inclined disk, we derived a position angle of θ = 41° East of North and an inclination of i = 51° (where i = 0° corresponds to a face-on disk). For the other galaxies in our sample, the images were too faint, compromised by a stellar diffraction spike, or lacked good nearby PSF stars for GALFIT to return satisfactory fits.
We assume that the rest-UV morphology of the young stellar population will closely trace that of the nebular emission from the ionized gas, and hence we infer the morphological parameters of the galaxies from the [O III] map. We fit a 2D Gaussian with the FWHM along the major axis, with the axial ratio and position angle as free parameters. A first guess of the position angle is estimated from the velocity map as the direction in which the gradient in the velocity profile is maximum. It is then allowed to vary by ±15° in the rotation modelling (see Section 4.3), which finally gives the best-fit position angle. If the galaxy is a disk, then the inclination is estimated as cos⁻¹(b/a), where a is the radius of the major axis (along the first guess of the position angle) and b the radius of the minor axis (at 90°). We measured the area of the nebular emission (deconvolved with the PSF) from the two-dimensional [O III] λ5007 flux map for each of the galaxies (Table 5).
Rotation modelling
Each galaxy is modelled by a pure, infinitely thin, rotating disk. The parameters of a model are the kinematic centre (x0, y0), the position angle (PA), the inclination (i), the velocity offset (Vs) of the centre relative to the integrated spectrum, and the velocity curve Vc(r), where r is the radius from the kinematic centre. We also use the true physical velocity dispersion σ0 as a model parameter, which we assume is constant and represents the thickness of the rotating disk. The radial velocity V for any point is then defined with standard projection equations. Note that the position angle gives the direction of positive radial velocities. The velocity offset accounts for redshift uncertainties. We define the velocity curve by two parameters, following Wright et al. (2007): the maximum velocity Vmax and the radius rc at which this maximum velocity is reached. In this model the velocity rises from zero at the kinematic centre to Vmax at rc, and remains at Vmax beyond rc. The model computes Vrot, the asymptotic maximum rotation velocity at the plateau of the rotation curve, corrected for inclination (Vrot = Vmax / sin i).
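A sketch of a projected thin-disk velocity field of the kind described above. The projection geometry is standard; the specific rise of the rotation curve interior to r_c (linear here) is an assumption made for illustration, since the exact functional form is not reproduced in this excerpt.

```python
import numpy as np

def model_velocity_field(shape, x0, y0, pa_deg, inc_deg, v_max, r_c, v_sys=0.0):
    """Line-of-sight velocity map of an infinitely thin rotating disk.
    pa_deg: position angle of positive velocities (East of North);
    inc_deg: inclination (0 = face-on)."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    dx, dy = x - x0, y - y0
    pa, inc = np.radians(pa_deg), np.radians(inc_deg)
    # Rotate into the disk frame (xp along the major axis) and deproject the minor axis.
    xp = dx * np.sin(pa) + dy * np.cos(pa)
    yp = (-dx * np.cos(pa) + dy * np.sin(pa)) / np.cos(inc)
    r = np.hypot(xp, yp)
    # Assumed rotation curve: linear rise to v_max at r_c, flat beyond.
    v_circ = v_max * np.clip(r / r_c, 0.0, 1.0)
    cos_phi = np.divide(xp, r, out=np.zeros_like(r), where=r > 0)
    return v_sys + v_circ * np.sin(inc) * cos_phi

# In practice this ideal map is then flux-weighted and convolved with the PSF
# before being compared with the observed velocity map.
```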
In reality, the spatial resolution is limited by the seeing and the spaxel size. The observed radial velocity is thus the weighted convolution of the true radial velocity with the point spread function (PSF). The weights come from the flux map of the line used to compute the velocity map: a spaxel where the observed line flux is negligible will not contribute to the convolution. Additionally, the observed velocity dispersion accounts not only for the true physical dispersion, but also for the variations of the velocity field within the width of the PSF. We generate a map of the velocity dispersion where the velocity gradient (determined from the modelling) has been subtracted. This yields a better measure of the intrinsic local velocity dispersion than a raw value of σ, which may be inflated if the velocity gradient across a spaxel is large. From this map, we compute the flux-weighted mean velocity dispersion, σmean (Table 6). We also measure a velocity width for the emission lines from the extracted 1D spectrum (i.e. the total integrated light); this σ1D includes contributions both from random motions and from any ordered rotation or bulk motions. The beam smearing introduced by the PSF causes two significant effects. First, the observed velocity map appears smoothed, so the velocity gradient and the maximum velocity are underestimated. Second, the observed dispersion map shows a peak of dispersion near the kinematic centre.
We can reproduce modelled velocity and dispersion maps by applying mathematically the same weighted convolution to the ideal velocity field. These modelled maps are compared to the observed maps by a χ 2 minimization. The parameters which minimize the χ 2 are computed in two successive grids with increasing resolution. First guesses for the kinematic centre, the position angle, and the inclination are set from the morphology.
VVDS-8666
The VVDS-8666 galaxy displays a well-resolved velocity gradient over ∼ 5 kpc in projected distance with a peak-to-peak amplitude of 92 km s⁻¹ (uncorrected for inclination). There is some evidence of a flattening of the rotation curve, particularly in the easternmost region of the galaxy. Given that these data are seeing-limited, the smearing effect of the seeing decreases the observed peak-to-peak amplitude while mostly preserving the overall shape of the velocity shear. This shear appears to be aligned with the morphological major axis defined by the ionized gas, and is also nearly spatially coincident with the peak in the dispersion map. This, combined with the fact that the nebular line flux distribution is centrally concentrated, leads us to conclude that the observed shear is consistent with a rotating gaseous disk rather than a merger. Figure 7 shows the [O III] λ5007 velocity map recovered from the rotation modelling (see Figure 6, panel c) and convolved with the PSF. It also shows the one-dimensional relative velocity curves along the kinematic major axis for both the SINFONI observed velocity shear and the best-fit model. The velocity is well matched by our simple rotating disk model, leading to an inclination-corrected vrot ∼ 91 ± 26 km s⁻¹. However, the ratio of the rotation velocity to the velocity dispersion is small (vrot/σmean = 1.52) compared with lower-redshift samples of galactic disks (vrot/σmean = 10 − 20, Dib et al. 2006), which probably indicates that this object is not mainly supported by rotation and that there are significant random motions. The presence of random motion is in particular confirmed by a peak of ∼ 50 km/s, located to the north of the galaxy, in the residuals of the velocity maps after subtraction of the rotating model.
VVDS-7772
The velocity map of VVDS-7772 (the companion galaxy of VVDS-8666) shows a smoothly varying gradient along the North-South axis, but without evidence for a flattening.
However, this galaxy is very compact and only marginally resolved. We note that the velocity fields of both VVDS-7772 and VVDS-8666 are aligned along their displacement vector and close to the major axis of the resolved galaxy VVDS-8666, so this combined system might plausibly be part of a single rotating disk. The best-fit simple rotation model gives an asymptotic velocity of Vrot ∼ 98 ± 45 km/s, which is reached at a radius of ∼ 5.4 kpc. The σ-map of the ionized gas of VVDS-7772 shows strong variation with position, as can be seen in panel (d) of Fig. 6, which prevents us from modelling the true physical dispersion; the weak constraint is σ0 = 0 ± 28 km s⁻¹. In particular, σ peaks around 58 km s⁻¹ at the south-western edge of the galaxy, slightly offset with respect to the flux peak, for which σ decreases to around 50 km s⁻¹, and drops further to ∼ 30 km s⁻¹ in the faint emission of the north-eastern region. However, it is uncertain whether the peak of σ ∼ 58 km s⁻¹ represents a real feature or is simply noise at the extreme edges of the galaxy. Due to the compact size of this object (r ∼ 3 kpc), it is difficult to draw conclusions on the nature of the kinematics from our seeing-limited observations.
VVDS-3884
This galaxy presents an irregular velocity field, which is inconsistent with a smoothly varying velocity gradient along the major axis. It was therefore impossible to fit this disturbed shear with a single rotation model. The σ-map shows a peak in the local velocity dispersion displaced from the centre of the galaxy, again inconsistent with disk rotation. This galaxy has the largest velocity dispersion in our sample (σ ∼ 126 km s −1 ).
VVDS-5183
This galaxy is dominated by an extended emission line component at the systemic redshift with a faint secondary extension located ∼ 2 kpc to the North-East. The two spatially distinct components are more apparent in the data cube (i. e. the velocity map) than in the [O iii] λ5007 intensity maps. Between the two components, the velocity seems to return to the systemic (central) velocity of the main component, i.e. the velocity of the whole system is not monotonically increasing with position. The faint secondary feature may represent a kinematically distinct star-forming region or a small galaxy in the process of merging with the brighter system. The primary component exhibits a smoothly varying velocity field well fit by a rotating disk aligned with the morphological major axis. However the maximum of the velocity shear is small (Vmax ∼ 30 km s −1 ), and once again this galaxy has significant random motion (high velocity dispersion) with vrot/σmean = 0.95.
Dynamical masses
From the nebular line kinematics determined from our 3D data cubes, we are able to estimate the dynamical masses of the systems, subject to several caveats. Firstly, we assume that the nebular emission from the gas traces Keplerian motions (i.e. the gas is not outflowing/inflowing, and is in orbit rather than unvirialized, as might be the case during a merger). Secondly, we cannot say with certainty that any of our galaxies are traditional rotationally-supported gaseous disks: the ground-based seeing precludes accurate measurement of morphological parameters (such as the crucial inclination angle). Thirdly, the measured velocity dispersion is significant when compared with any velocity gradient across these galaxies, implying that a putative disk may not be supported purely by rotation.
We have measurements of the spatially varying velocity field (v) and velocity dispersion (σ) from our 3D data cubes, and in the 1D extracted spectrum we have a flux-weighted measure of the overall velocity spread in the galaxy (σ1D), which arises from the combined effects of any systematic velocity shift across the galaxy and random motions. We can use this σ1D as a crude estimate of the dynamical mass using the formula M_dyn = C σ_1D² r_gas / G. The factor C depends on the geometry of the system (in particular the density profile): for a uniform sphere, C = 5, and Erb et al. (2006c) derive C = 3.4 for the more realistic scenario of a gas-rich disk with an average inclination angle. Using this value of C = 3.4, in Table 6 we present dynamical masses inferred from σ1D and the spatial extent of the [O III] λ5007 line emission in our data cubes (r_gas), measured along the major axis and deconvolved with the seeing. Of course, the availability of 3D data cubes means we can model any spatially-resolved systemic velocity shift, as might arise in a rotating disk (see Section 4.3). For objects where the velocity shear is well fitted by the simple rotation model, we compute the dynamical masses using the formula M_dyn = V_rot² r_model / G, where Vrot (the turnover of the rotation curve) has been inclination-corrected (a factor of sin i, fitted by the model from an initial estimate based on the observed ellipticity). Both this asymptotic velocity and the radius of the turnover (r_model) are inferred from our model fits to the observed velocity maps, to correct for the effect of beam smearing (due to the seeing). Only in one case (VVDS-8666) do we convincingly see this flattening of the rotation curve within the radius probed by our detected [O III] λ5007 emission (Fig. 7), from which we infer M_dyn = 2.9 × 10¹⁰ M⊙. For galaxies VVDS-7772 and the primary component of VVDS-5183, the inferred turnover from the models lies beyond the range of the data, and hence must be treated as unreliable (see Fig. 8 and Fig. 9 respectively); we can place a lower limit on the dynamical mass from the measured maximum velocity and spatial extent of the nebular emission, M_dyn > V²_shear r_gas / G, which corresponds to M_dyn > 7 × 10⁹ M⊙ in both these cases. The galaxy VVDS-3884 does not demonstrate evidence of significant ordered rotation, and we were unable to fit a simple rotating disk model in this case.

Figure 10. Combined velocity, S_0.5² = 0.5 V_rot² + σ_mean², versus integrated linewidth σ_1D. The diagonal lines are the same as in Weiner et al. (2006), i.e. a 1:1 line and the Rix et al. (1997) σ = 0.6 V_c line (V_c is the circular velocity). Line width and S_0.5 are correlated; the 0.5 pre-factor makes the combined velocity width a better estimate of the velocity dispersion, so that the correlation is tighter and the galaxies lie closer to the 1:1 line. Our galaxies show good agreement with this.
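The two dynamical-mass estimators used above amount to one-line formulas; a small sketch (with G in convenient units) is given below. The example numbers are illustrative only.

```python
G = 4.301e-3   # gravitational constant in pc Msun^-1 (km/s)^2

def mdyn_sigma(sigma_1d_kms, r_gas_kpc, C=3.4):
    """Dynamical mass from the integrated line width: M_dyn = C sigma_1D^2 r_gas / G."""
    return C * sigma_1d_kms**2 * (r_gas_kpc * 1e3) / G   # Msun

def mdyn_rot(v_rot_kms, r_model_kpc):
    """Dynamical mass from the modelled rotation: M_dyn = V_rot^2 r_model / G."""
    return v_rot_kms**2 * (r_model_kpc * 1e3) / G          # Msun

# Example (illustrative numbers, not from Table 6): mdyn_sigma(100.0, 2.0) ~ 1.6e10 Msun.
```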
DISCUSSION AND CONCLUSIONS
We present the results of a study of the kinematic properties of the ionized gas in four star-forming galaxies at redshift z ∼ 3, from the spatially-resolved spectra of the [O III] & Hβ nebular emission lines. We found that the ionized gas in these objects has high velocity dispersions (σmean ≈ 60 − 70 km s⁻¹). We have also found that all our galaxies (except VVDS-7772) contain large quantities of gas compared to their stellar mass and their dynamical mass inferred from rotation. With their high SFRs, it is likely that these objects are undergoing episodes of strong star formation. Due to our seeing-limited observations, it is difficult to classify these objects with confidence as either 'disks' or mergers. In the case of VVDS-8666 and its companion (VVDS-7772), we classify them as a "heated disk system", since for both galaxies the velocity structure of the ionized gas appears to be consistent with rotation. In fact, it is unlikely that these objects have a dynamically cold rotating disk of ionized gas, given their significant velocity dispersions. VVDS-8666 has the highest metallicity, probably indicating a later evolutionary stage. The galaxy VVDS-3884 possesses a particularly high nebular star formation rate compared to its SED star formation rate, which is consistent with this object experiencing a recent major burst of star formation. This burst might originate from a major merging event, consistent with its complex kinematics. Finally, VVDS-5183 is mostly consistent with a merger on to a rotating disk. It has a huge star formation rate, a large gas fraction and a low metallicity.
Due to the uncertainties on the inclination angle for the objects in our sample, it is not possible to investigate the traditional Tully-Fisher relation using Vrot. However, we can give an insight into a key question: can we use the line widths of integrated nebular emission to get a good estimate of the dynamical masses of galaxies showing low values of v/σ? Figure 10 plots the combined velocity scale S0.5 against the one-dimensional line width σ1D for the three galaxies for which we found consistency with the presence of rotation. The combined velocity scale S0.5² = 0.5 Vrot² + σmean² is estimated from the asymptotic velocity (Vrot) and the flux-weighted mean velocity dispersion, both inferred from the rotation modelling (see Figure 10). It represents an estimator, corrected for the smearing effect of the seeing (see Section 4.3), of the rotation velocity together with the presence of random motions. S0.5 is also well known to be strongly correlated with σ1D (Weiner et al. 2006; Kassin et al. 2007). We found a very good correlation between S0.5 and σ1D for VVDS-8666, VVDS-7772 and the primary component of VVDS-5183. This shows not only that the one-dimensional line width σ1D can be used as a kinematic measure when estimating dynamical masses, but also that Vrot alone cannot suffice to trace the majority of the dynamical mass. In fact, using only Vrot to investigate the Tully-Fisher relation will push galaxies with σmean > Vrot to erroneously low dynamical masses (see also Weiner et al. 2006). Despite the small size of our sample, we can try to compare our results with those found by the 'SINS' survey (Förster Schreiber et al. 2006; Genzel et al. 2006; Bouché et al. 2007; Shapiro et al. 2008; Genzel et al. 2008), also using SINFONI on the VLT, and with the Law et al. (2009) study using OSIRIS on Keck. Our four galaxies have a mean assembled stellar mass of log(M*/M⊙) = 10.4, while the 'SINS' galaxy population has a larger mass of log(M*/M⊙) = 10.9 − 11.0. More comparable to our sample are the 13 galaxies detected by Law et al. (2009), selected by the BX technique, with stellar masses of log(M*/M⊙) ≈ 10.1. We also found, in agreement with Law et al. (2009), that such galaxies of intermediate mass at z ∼ 3 possess extremely large quantities of gas in comparison to their stellar mass. The typical v/σ for our sample is around ∼ 0.4 − 1.5. The Law et al. (2009) sample has ∼ 0.8, while the 'SINS' galaxies have a range of ∼ 2 − 4. In contrast to Law et al. (2009), we have found evidence of AGNs in our sample. The connection between AGN and star formation could play an important role in shaping galaxies at z ∼ 3 − 4. Figure 11 plots the maximum observed shear velocity of galaxies, v_shear, as a function of stellar mass. Our z = 3 − 3.7 objects appear to have values of v_shear similar to the highest values detected by Law et al. (2009), but corresponding to the lowest values displayed by the galaxies of the 'SINS' survey. In fact, at similar stellar masses our galaxies tend to have higher values of v_shear than the Law et al. (2009) sample. We also found in general that the highest mass galaxies do not tend to have the highest velocity shear among our objects, similar to the Law et al. (2009) sample. Therefore, our sample at z = 3 − 3.7 seems to comprise intermediate mass and intermediate shear velocity objects, in contrast to the sub-sample of Law et al.
(2009), which seems to have negligible velocity shear and the more massive objects, and to the 'SINS' survey sample, which has appreciable old populations where stable rotation dominates.
The similarity in properties of our sample of galaxies at z ∼ 3 − 3.7 with that of Law et al. (2009) at z ∼ 2 − 2.5 seems to confirm that these high-redshift objects have not yet turned a significant fraction of their gas into a sizeable stellar population. Such galaxies also tend to possess high local velocity dispersions due to random motions intrinsic to the gas. If any velocity shear is present, it is not likely to be significant compared with the random motions, and probably occurs in a non-stable disk system. These galaxies also exhibit strong episodes of star formation and already possess metal-rich gas.
Drawing general conclusions about the nature and the properties of the dynamical structure of distant galaxies based on the current set of observations is obviously very challenging, given the small size of current samples and the spatial resolution limitations. However, initial insights into the kinematic properties of z ∼ 2 − 4 galaxies on scales of a few kiloparsecs have been revealed by the recent IFU studies, although larger statistical samples and unbiased selection criteria are needed to give a quantitative picture of the mechanisms at play in such objects. It already appears that the dynamical state of galaxies during this early period, where the star formation appears to be intense, cannot be easily classified based on what is observed at low redshift. Specifically, the star-forming galaxies at z ∼ 3 (even those with measurable velocity gradients) exhibit large velocity dispersions quite unlike the cold rotationally-supported disks seen at lower redshift.
ACKNOWLEDGMENTS
We would like to thank Aprajita Verma and Matthias Tecza for helpful discussions. We are very grateful to the VLT Observatory for accepting this programme. The authors also thank Markus Hartung for help obtaining the observations during the observing runs and Sebastian Sánchez for providing the code to create the kinematics maps from the 3D cube. The anonymous referee is greatly acknowledged for providing useful and constructive comments. The authors wish to recognize and acknowledge the significant contribution of the VVDS collaboration in providing the targets. In particular we thank Thierry Contini who helped greatly with the target selection. Part of this work was supported by the Marie Curie Research Training Network Euro3D; contract No. HPRN-CT-2002-00305. M. Lemoine-Busserolle is supported by the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., on behalf of the international Gemini partnership of Argentina, Australia, Brazil, Canada, Chile, the United Kingdom, and the United States of America. | 2009-09-08T04:53:55.000Z | 2009-09-08T00:00:00.000 | {
"year": 2010,
"sha1": "599a8742a79998de4bbd44a0a97808f1b8c36cb5",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/401/3/1657/3816378/mnras0401-1657.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "599a8742a79998de4bbd44a0a97808f1b8c36cb5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
248399995 | pes2o/s2orc | v3-fos-license | Is Hearing Impairment Causally Associated With Falls? Evidence From a Two-Sample Mendelian Randomization Study
Background: Observational studies have suggested that hearing impairment (HI) is associated with the risk of falls, but it remains unclear whether this association is of a causal nature. Methods: A two-sample Mendelian randomization (MR) study was conducted to investigate the causal association between HI and falls in individuals of European descent. Summary data on the association of single nucleotide polymorphisms (SNPs) with HI were obtained from the hitherto largest genome-wide association study (GWAS) (n = 323,978), and statistics on the association of SNPs with falls were extracted from another recently published GWAS (n = 461,725). The MR Steiger filtering method was applied to determine the causal direction between HI and falls. The inverse-variance weighted (IVW) method was employed as the main approach to analyze the causal association between HI and falls, whereas the weighted median, simple mode, weighted mode, and MR-Egger methods were used as complementary analyses. The MR-Egger intercept test and the MR-PRESSO test were performed to detect potential directional pleiotropy, and Cochran's Q statistic was used to assess heterogeneity. The odds ratio (OR) with 95% confidence intervals (CIs) was used to evaluate this association. Results: A total of 18 SNPs were identified as valid instrumental variables in our two-sample MR analysis. A positive causal effect of HI on the risk of falls was indicated by IVW [OR 1.108 (95% CI 1.028, 1.194), p = 0.007]. The sensitivity analyses yielded comparable results. The "leave-one-out" analysis showed that the omission of any single SNP did not affect the robustness of our results. The MR-Egger intercept test exhibited that genetic pleiotropy did not bias the results [intercept = −2.4E−04, SE = 0.001, p = 0.832]. Cochran's Q test revealed no heterogeneity. Conclusion: Our MR study revealed a causal association between genetically predicted HI and falls. These results provide further evidence supporting the need to effectively manage HI to minimize fall risks and improve quality of life.
INTRODUCTION
Falls have become a major public health problem in many countries (1). The rate of falls varies with socioeconomic status and increases with age. It is estimated that roughly 30-40% of adults over 65 years suffer a fall each year across the globe, and approximately half of them suffer recurrent falls (2)(3)(4). Falls can result in consequences such as handicap, depression, loss of independence, fear of falling, functional impairment, and even death, and they tend to pose a substantial economic burden on the victims and on society at large (1). Previous studies focused on contributors to falling, such as environmental factors (5) (e.g., poor lighting and surface irregularities) and intrinsic factors (6, 7) (e.g., balance disorders, vitamin D deficiency, cognitive and sensory impairment, diabetes, and depression). Further understanding of the risk factors is warranted to better prevent and manage falls.
Hearing impairment (HI) represents one of the most common sensory dysfunctions (8,9) and poses a great burden on healthcare resources. HI is characterized by a slow onset and progressive deterioration and tends to go unrecognized and under-treated (10). Evidence from epidemiological studies indicates that HI increases the risk of falls (6,11,12), but the association remains controversial due to reverse causality and confounding effects (6,13), which render the interpretation of these findings difficult and their implications uncertain (14). A randomized controlled trial (RCT) would be the best approach to demonstrate the association between HI and falls (15). However, RCTs are not always feasible due to the complexity of study design, financial or ethical constraints, and/or difficulties involved in the collection of a large sample (16,17). Thus, Mendelian randomization (MR) can effectively remedy the shortcomings of classical observational studies and offers an effective methodology to examine the etiology of a condition (18,19).
MR is an approach that employs single nucleotide polymorphisms (SNPs) as instrumental variables (IVs) of the exposure to assess the causal effects of the exposure on an outcome (19). In comparison to traditional epidemiological research, the MR approach draws on Mendel's laws of segregation and independent assortment (20), by which MR can avoid biased associations arising from confounding or reverse causal effects (21). The principal advantages of the one-sample MR approach over other alternatives are the flexibility to conduct rigorous MR and the capability of evaluating the independence and exclusion restriction assumptions by assessing confounders at the individual level (22). Nonetheless, limitations may affect the estimation of causality in one-sample MR datasets (e.g., traditionally low power, selection bias, weak instrument bias, and winner's curse) (23,24). Different from one-sample MR, its two-sample counterpart is able to assess causal relationships between a variety of exposures and outcomes, which might not be achievable with a single-sample technique (25). Additionally, it increases the sample size and enhances the power of MR analyses, drawing on plentiful data on exposures and outcomes that may not be feasible or affordable to measure in the same set of individuals (22). Thus, in this study, we used a two-sample MR approach to investigate whether the observational associations between HI and falls are likely causal, and the directionality of their relationship.

Abbreviations: HI, hearing impairment; MR, Mendelian randomization; SNP, single nucleotide polymorphism; GWAS, genome-wide association study; IVW, inverse-variance weighting; OR, odds ratio; CI, confidence interval; IV, instrumental variable; RCT, randomized controlled trial.
Study Design
The MR study relied on three core assumptions (21): (1) the identified IVs are strongly associated with HI; (2) the IVs are not associated with confounders; and (3) the IVs are associated with falls only via HI. The MR schematic is shown in Figure 1.
Data Sources
The summary data showing the associations between SNPs and HI were from the UK Biobank. The UK Biobank is a national health research resource involving 502,639 European participants aged 37-70 years, recruited from across the UK between 2006 and 2010. In the present study, we utilized publicly accessible datasets from published studies in which formal consent from participants and ethical approval by relevant committees had been obtained. Thus, no additional ethics approval was required. The dataset from the UK Biobank was made available to the research community through the OpenGWAS database, which catalogs genetic associations from large population-based European cohorts (26). The characteristics of the exposure and outcome GWAS datasets are detailed in Supplementary Table S1.
HI GWAS Dataset
For the HI exposure dataset, we obtained the results of a GWAS involving 323,978 individuals of European ancestry (84,839 cases and 239,139 controls) to generate the IVs (https://gwas.mrcieu.ac.uk/datasets/ukb-a-257/). Participants were assigned case/control status based on their responses to questionnaire measures regarding hearing difficulty. Self-reported hearing status was based on responses to the question: "Do you have any difficulty with your hearing?" Subjects who responded "Yes" were coded as having a hearing impairment, whereas "No" signified not having a hearing impairment.
Fall GWAS Dataset
For the outcome dataset, data on falls were taken from another independent GWAS including 461,725 individuals of European ancestry (89,076 cases and 372,649 controls) (https://gwas.mrcieu.ac.uk/datasets/ukbb-2,535/). The UK Biobank falls data were self-reported by participants via a touch-screen questionnaire. Fall cases were defined as subjects who responded "Yes" to the question "In the last year have you fallen down for any reason (i.e., various extrinsic and intrinsic factors predisposing adults to fall)?" (27).
SNP Selection
First, to ascertain the association with HI, independent genetic variants with genome-wide significance (p < 5 × 10⁻⁸) were selected as potential instruments from the corresponding datasets. Then, to avoid bias due to linkage disequilibrium (LD) in the analysis (28,29), the SNPs closely associated with HI had to satisfy the following conditions: r² < 0.001 and distance > 10,000 kb (28). Palindromic SNPs with intermediate allele frequencies were excluded from the selected instrumental SNPs (palindromic SNPs refer to SNPs with A/T or G/C alleles, and "intermediate allele frequencies" refer to 0.01 < allele frequency < 0.30). SNPs with the wrong causal direction identified by the MR Steiger filter were excluded. SNPs with a minor allele frequency (MAF) of <0.01 were also excluded to avoid potential statistical bias from the original GWAS, since they usually carry low confidence. Additionally, to rule out the influence of known confounders on the causality assessment, potential secondary phenotypes of the selected SNPs were manually checked with PhenoScanner (http://www.phenoscanner.medschl.cam.ac.uk). Finally, we calculated F-statistics for the SNPs to measure the strength of the instruments (30). IVs with an F-statistic of less than 10 are generally regarded as "weak instruments" and were excluded (30).
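As an illustration of these filtering criteria, the sketch below applies the significance, MAF, and instrument-strength thresholds to a table of GWAS summary statistics. This is a hypothetical Python/pandas sketch, not the study's pipeline: the column names ('pval', 'eaf', 'beta', 'se') are assumptions, LD clumping (r² < 0.001 within 10,000 kb) is omitted because it requires a reference panel, and the per-SNP F-statistic is approximated as (beta/se)², a common simplification.

```python
import pandas as pd

def filter_instruments(gwas: pd.DataFrame) -> pd.DataFrame:
    """Apply the SNP-selection thresholds described above (column names assumed)."""
    df = gwas[gwas["pval"] < 5e-8]                        # genome-wide significance
    maf = df["eaf"].where(df["eaf"] <= 0.5, 1 - df["eaf"])  # minor allele frequency
    df = df[maf >= 0.01]                                  # drop rare variants (MAF < 0.01)
    f_stat = (df["beta"] / df["se"]) ** 2                 # approximate per-SNP F-statistic
    return df[f_stat >= 10]                               # drop weak instruments (F < 10)
```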
MR Analysis
To perform the data analysis, individual estimates of the causal effect of the exposure on the outcome mediated by each instrumental SNP were computed as Wald ratios (31). We estimated the strength of the association between HI and falls using the inverse-variance weighted (IVW) method as the main analysis, with the MR-Egger, weighted median, simple mode, and weighted mode methods as complementary analyses (32). The causal effects were expressed as odds ratios (ORs). Then, Cochran's Q statistic (33,34) was employed to estimate heterogeneity across SNPs. The MR-Egger intercept test and the MR-PRESSO test were utilized to evaluate bias stemming from ineffective IVs and potential horizontal pleiotropy (35)(36)(37). In addition, we performed a "leave-one-out" sensitivity analysis to determine whether the result was affected by a single SNP (36). We applied the R package "TwoSampleMR" by following the guidelines from the developers (https://mrcieu.github.io/TwoSampleMR). All analyses were conducted using R software (version 4.1.1, the R Foundation for Statistical Computing, Vienna, Austria). A two-tailed p-value of less than 0.05 was considered statistically significant.
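To make the primary analysis concrete, the following minimal numpy sketch computes per-SNP Wald ratios and pools them with fixed-effect inverse-variance weights. The study itself used the TwoSampleMR R package, so the function and variable names here are ours, and the standard error of each Wald ratio uses the usual first-order approximation.

```python
import numpy as np

def ivw_odds_ratio(beta_exp, beta_out, se_out):
    """Fixed-effect IVW MR estimate returned as an odds ratio with a 95% CI.

    beta_exp: per-SNP effects on the exposure (HI); beta_out / se_out: per-SNP
    effects and standard errors on the outcome (falls), aligned by SNP.
    """
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    wald = beta_out / beta_exp               # per-SNP causal estimates (Wald ratios)
    wald_se = np.abs(se_out / beta_exp)      # first-order standard errors
    w = 1.0 / wald_se**2                     # inverse-variance weights
    beta_ivw = np.sum(w * wald) / np.sum(w)
    se_ivw = np.sqrt(1.0 / np.sum(w))
    ci = (np.exp(beta_ivw - 1.96 * se_ivw), np.exp(beta_ivw + 1.96 * se_ivw))
    return np.exp(beta_ivw), ci
```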
Genetic Variants Selection
In total, 23 SNPs were successfully extracted from the HI GWAS dataset (p < 5 × 10⁻⁸). However, 4 SNPs (rs1126809, rs13277721, rs34656207, and rs9296413) were removed because of their possible associations with confounding traits. To be exact, rs1126809 was associated with vitiligo and carcinoma; rs13277721 was associated with mood swings; rs34656207 was associated with rheumatoid arthritis, ankylosing spondylitis, diabetes, and disability or infirmity; and rs9296413 was associated with hypertension (Supplementary Table S2). Moreover, one SNP (rs12660376) was dropped because the dataset had no corresponding effector gene. After exclusion of these 5 SNPs, our two-sample analysis identified the remaining 18 SNPs as IVs. No LD was found among these SNPs, and the phenotypic variance explained was 0.25%. The F-statistics of these SNPs ranged from 30 to 97 (overall F-statistic = 45), suggesting that they satisfied the relevance assumption of MR and that "weak instrument" bias was unlikely. The 18 SNPs included in our analysis are shown in Table 1.
MR and Sensitivity Analyses
The fixed-effect IVW estimates showed that HI was significantly associated with a higher risk of falls [OR 1.108 (95% CI 1.028, 1.194), p = 0.007] (Figure 2). The "leave-one-out" analysis indicated that our results were not driven by any single SNP and that the MR analysis was reliable (Figure 4 and Supplementary Table S3). MR-Egger regression was used to assess horizontal pleiotropy between the IVs and the outcome, and our results indicated no evidence of a significant intercept [intercept = −2.4E−04, SE = 0.001, p = 0.832] (Table 3). MR-PRESSO results likewise showed no horizontal pleiotropy in our study (p = 0.835). The funnel plot showed general symmetry (Supplementary Figure S1), suggesting no heterogeneity or horizontal pleiotropy. In addition, there was still a significant association between HI and falls in an MR analysis that included the five SNPs dropped due to potential pleiotropy (Supplementary Table S4). Therefore, our results supported a causal association between HI and falls.
DISCUSSION
We, for the first time, employed a two-sample MR study to examine the causal relationships between HI and falls on the basis of the summary level data of large GWASs. Our analysis provided evidence of the causal links between genetically predicted HI and falls. Consistent estimates were observed in sensitivity analyses, suggesting the association was robust and the horizontal pleiotropy was minimal.
Our results suggested that the genetic liability to HI exerts an independent effect on falls. Although the association between HI and falls has been reported previously in observational studies (11,12,(38)(39)(40), the association observed in uncontrolled studies has been controversial (6, 13). Heitz et al. (5) found that self-reported hearing loss was significantly associated with an increased risk of falls, but the relationship weakened with adjustment for cardiovascular, vision, and emotional factors and disappeared when controlling for vestibular vertigo. Lopez et al. (11), in a longitudinal study, found that self-reported HI was significantly associated with an increased risk of suffering a fall, but not with injuries from a fall. Thus, because observational epidemiological studies have difficulty eliminating bias (e.g., reverse causality and confounding effects), etiological interpretation of these findings has limitations (14,16).
In the present study, we selected SNPs with genome-wide significance and independent inheritance as IVs from the HI GWAS dataset to detect their causal association with falls. To make our conclusions more reliable, we employed a range of well-established sensitivity methods to control for pleiotropy and heterogeneity and to ensure consistency of results. In MR, confounding is minimized because alleles are randomly combined, in line with Mendel's second law. Additionally, reverse causality was also ruled out, since genetic variants are fixed at conception and cannot be altered by disease processes. As a result, our evidence had a high level of precision and stability.
Balance in the upright stance is maintained by the integrated input of vision, vestibular and somatic sensation into the central nervous system, resulting in a context-specific motor response via static and dynamic posture modifications (41,42). Multiple theories have been proposed to explain the association between HI and falls. The first hypothesis is coexistent vestibular pathology: the peripheral vestibular organs, which collect information on physical position, movement, and balance, are located in the inner ear, close to the auditory organs (43). Thus, HI is often concomitant with vestibular dysfunction and balance difficulties (44). The second is the cognitive load hypothesis, which postulates that HI may increase cognitive load, thereby reducing the cognitive capacity remaining for balance, especially during walking (45,46). The third hypothesis posits that individuals with sensory dysfunction may have reduced auditory and spatial awareness of their immediate surroundings, rendering them more likely to suffer accidents and accidental injuries (47). The fourth theory is based on the effect of multisensory integration, i.e., the auditory system is a perceptual system that engages in the perception of the dynamic environment and in complex representations of 3D space together with vision and touch (48). Thus, an abnormality in the integration of different sensations or modalities might lead to falls when relevant factors of balance are significantly altered (49).
Additionally, HI is a common condition among older adults and can be treated with hearing amplification to improve balance ability. Lacerda et al. (50), in a prospective clinical study, employed the SF-36 questionnaire to examine the effects of bilateral hearing aids on quality of life and found that quality of life was improved and fear of falling was reduced 4 months after fitting. Parietti-Winkler et al. (48) also evaluated the effect of unilateral cochlear implantation on balance control and sensorimotor modalities and revealed that the balance performance of cochlear implantees reached a near-normal level compared to age-matched healthy controls. These findings confirmed that restoring the ability to gather auditory information might contribute to improved balance regulation in patients with amplified hearing. Thus, these findings further underscore the importance of hearing healthcare.
The present study has the following strengths. First, this is the first two-sample MR study to confirm the causal association between HI and falls using the summary-level data of large GWASs. Second, a series of sensitivity analyses were conducted to further verify the hypothesis, making our findings more reliable. However, some limitations of our MR analysis need to be mentioned. First, the participants in the HI GWAS dataset might have overlapped with those in the fall GWAS dataset. We were unable to estimate the degree of overlap among participants, which might lead to weak instrument bias (51). Thus, we calculated the lower limit of a one-sided 95% CI for the F-statistic, which was 37.872, so considerable weak instrument bias would not be expected (51). Second, the GWAS summary data concerned only individuals of European descent, and our results might not be fully representative of the whole population; therefore, care should be exercised when extrapolating our conclusion to other racial and ethnic populations. Third, the two-sample MR study only provides an estimate of the putative causal effect, and further studies are required to estimate the direct causal effect of HI upon falls.
FIGURE 4 | "Leave-one-out" analysis of the causal association of HI with falls. The black dots and bars indicate the causal estimate and 95% CI when an SNP was removed in turn. The red dot and bar indicate the overall estimate and 95% CI using the fixed-effect IVW method. HI, hearing impairment; CI, confidence interval; SNP, single nucleotide polymorphism; IVW, inverse-variance weighted.
CONCLUSION
Our findings indicated that there is a causal association between HI and falls. Since individuals with HI are potentially at risk of falls even without vestibular disease or balance impairment, our findings suggest that sufficient attention should be paid to HI at every stage of hearing screening, diagnosis, treatment, and prognosis evaluation. Our study provides further evidence supporting the need to effectively manage HI to minimize fall risk and improve quality of life.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the patients/participants or patients/participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
S-LZ, W-JK, and JW conceptualized and designed the study. JW, DL, and Z-QG prepared and analyzed the data and drafted the manuscript. All co-authors contributed to the manuscript's modifications and approved the final version. All authors contributed to the article and approved the submitted version. | 2022-04-28T01:52:23.551Z | 2022-04-25T00:00:00.000 | {
"year": 2022,
"sha1": "e4ee4a6dbc2fb78d55f45855dc5fa65eb5d000b4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "e4ee4a6dbc2fb78d55f45855dc5fa65eb5d000b4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
158627703 | pes2o/s2orc | v3-fos-license | Animal Welfare , Market Power and Tangential Interests
The history of social movements which attempt to bring previously marginalized issues to the forefront is replete with examples of conflict between cooperation, compromise and collaborating among actors whose interests and motivations only tangentially coincide on the one hand and a desire to work with people whose aims and beliefs fit (nearly) perfectly with one's own. Historical pathways are rarely straightforward and are rarely formed by isolated sets of actors.
In Kamila Lis' Coalitions in the Jungle, the author traces a brief history of the meat processing industry in the United States from 1890 to the present and the industry's impact on animal welfare. She places special emphasis on how advances and retreats in animal protection have come about often through cooperation with actors exogenous to the animal welfare movement.
For Lis, the history of the meat processing industry and levels of animal welfare is one of market concentration, de-concentration and re-concentration. The systematic abuse of animals has historically reached its zenith during periods of concentration; oligarchic market power in the meat processing industry is closely linked with negative externalities such as lax animal and human safety standards, poor labor conditions and the attendant accidents and animal abuse due to overwork and frustration. When this market power is coupled with coziness between the industry and the regulators who oversee it, animal abuses routinely go unchecked. As such, Lis considers the concentration of the meat industry as the principal target for improving animal welfare and living standards. However, concentrated power rarely accedes to de-concentration willingly, and de-concentrating power requires action across multiple pressure points. This in turn necessitates coordination among actors and groups whose interests align only tangentially.
The original concentration of the meat industry lasted from the 1880s until the 1930s. The widely acknowledged human and animal atrocities from this period are notably depicted in Upton Sinclair's The Jungle. These practices, particularly the human toll, came under fire from government anti-trust measures, such as the Packers and Stockyards Act (PSA), and from increased pressure from unions striving to improve worker conditions within these plants. Lis sees the action by unions in promoting laborers' working conditions as vital to the promotion of animal welfare: "it was unions and not animal welfare groups that actually (albeit unintentionally) improved conditions for animals in slaughterhouses during the middle of the century." The common thread that tied together the human and animal mistreatment was the meat processing speed; observers had long noted that human and animal suffering was causally linked to increased processing speed. Union activists championed the decrease of processing speeds because "it not only improved the physical safety of the workers, but also simultaneously decreased their levels of frustration while on the job", benefits which decreased the prevalence of "inadvertent blunders" and "intentional animal abuses in the slaughterhouse" (p. 75). Hence, Lis attests that the very real improvements to animal treatment that accompanied the following period of market de-concentration came about as a result of union activists fighting for better working conditions. This contention is not intended to diminish the work of animal welfare groups, but rather to show how actors with tangential interests and political voice can achieve material advances in animal welfare.
These increases in human and animal safety standards which accompanied the wave of de-concentration proved ephemeral with the introduction of Concentrated Animal Feeding Operations (CAFOs) in the 1960s, a renewed desire for meat consumption and the rise of the American fast food industry. These three factors mutually reinforced one another, and concentration within the meat industry rose again to fulfill the ever-expanding demand for meat products. The increased concentration was once again accompanied by a rise in abuse, animal deprivation and pain, and a reduction in animal welfare standards. Yet, just as in the past, when meat-packing workers' conditions worsened, the new concentration of the animal industry has forced negative externalities over a broad swath of actors who could in turn be fruitful sources of collaboration to challenge the concentration of meat industry market power.
Lis focuses on one group of actors in particular, small animal producers, whose production capacities, measures and standards have all come under pressure or become subjugated to the rules of the meat-processing industry. Contracted meat purchases, imposed growing standards impossible to meet through natural growth processes, and distorted incentives for the agricultural cultivation of corn combine to tangentially align the interests of small farmers with those of the animal welfare movement, just as the interests of unions were partially aligned with animal welfare groups in the past. In Lis' opinion, this creates a window of opportunity through which material gains for animals can be obtained via cooperation with small animal producers, by combining the legal expertise of the animal welfare community with the untapped possible political, economic and moral strength of small producers to exploit current legal ambiguities in court decisions related to the PSA and fight the tide of re-concentration in the meat industry.
At its core, Lis' article is one that posits a question most social movements have encountered in their history: the inescapable tension between purity of mission and the possibility of realizing tangible gains through compromise. The ultimate goals of animal welfare activists can inevitably find tension with the goals of small animal producers, yet profound advances can be forged via cooperation while the window of opportunity is still open, though, as Lis contends, this avenue is one that has remained mostly unexplored. Given that negative externalities from the meat processing industry are being imposed on a wide variety of actors (small farmers, environmentalists, public health advocates, among others), this question of purity of mission versus tangible advances is one likely to come up again.
"year": 2013,
"sha1": "158c4ead79441ee8245021769824a48896e49693",
"oa_license": "CCBY",
"oa_url": "http://revistes.uab.cat/da/article/download/v4-n4-parsons/144",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "158c4ead79441ee8245021769824a48896e49693",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Economics"
]
} |
262465187 | pes2o/s2orc | v3-fos-license | ERASER: Towards Adaptive Leakage Suppression for Fault-Tolerant Quantum Computing
Quantum error correction (QEC) codes can tolerate hardware errors by encoding fault-tolerant logical qubits using redundant physical qubits and detecting errors using parity checks. Leakage errors occur in quantum systems when a qubit leaves its computational basis and enters higher energy states. These errors severely limit the performance of QEC for two reasons. First, they lead to erroneous parity checks that obfuscate the accurate detection of errors. Second, the leakage spreads to other qubits and creates a pathway for more errors over time. Prior works tolerate leakage errors by using leakage reduction circuits (LRCs) that modify the parity check circuitry of QEC codes. Unfortunately, naively using LRCs always throughout a program is sub-optimal because LRCs incur additional two-qubit operations that (1) facilitate leakage transport, and (2) serve as new sources of errors. Ideally, LRCs should only be used if leakage occurs, so that errors from both leakage as well as additional LRC operations are simultaneously minimized. However, identifying leakage errors in real-time is challenging. To enable the robust and efficient usage of LRCs, we propose ERASER, which speculates the subset of qubits that may have leaked and only uses LRCs for those qubits. Our studies show that the majority of leakage errors typically impact the parity checks. We leverage this insight to identify the leaked qubits by analyzing the patterns in the failed parity checks. We propose ERASER+M, which enhances ERASER by detecting leakage more accurately using qubit measurement protocols that can classify qubits into |0⟩, |1⟩ and |L⟩ states. ERASER and ERASER+M improve the logical error rate by up to 4.3× and 23×, respectively, compared to always using LRCs.
INTRODUCTION
Noisy quantum hardware and imperfect operations prevent us from running the most promising quantum applications [10,13,23,28,32,40,44,45,55,57,58]. Quantum Error Correction (QEC) can bridge the gap between quantum applications and qubit devices. Fault-Tolerant Quantum Computers (FTQCs) use QEC codes to encode logical qubits using several physical qubits such that the error rate of the logical qubits is lower than the physical error rate if the latter is below a certain threshold. Moreover, the logical error rate decreases exponentially with increasing redundancy, measured as a QEC code's distance (d). This exponential suppression of errors enables QEC codes to achieve the target error rate necessary to run a given quantum application.
In this paper, we focus on surface codes, widely recognized as the most promising QEC codes, which use data qubits to store quantum information and parity qubits to detect errors [21,25,38,39]. An FTQC executes syndrome extraction circuits that project errors on the data qubits onto the parity qubits and measure the parity qubits to obtain a bitstring of parity checks called a syndrome. A classical decoder uses the syndrome to identify errors. It sends the correction to the control processor, which corrects the errors. In practice, syndrome extraction uses quantum operations, which are also error-prone. To tolerate these errors, a decoder analyzes at least d consecutive rounds of syndromes, also known as a QEC cycle. FTQCs enable computations by interleaving QEC cycles in between logical operations.
Recent studies from IBM and Google show that leakage errors degrade QEC performance on real hardware [1,41,64]. Leakage errors occur when qubits leave the computational basis (|0⟩ and |1⟩) and enter a higher energy state |L⟩ [1, 5-7, 24, 48, 52, 53, 63, 64]. As quantum operations are only calibrated for the computational basis, leakage errors deteriorate logical performance for two reasons. First, leaked qubits cause faulty operations during syndrome extraction, inducing random errors on their neighboring qubits and obfuscating other errors from being detected due to incorrect parity checks. Second, these faulty operations spread leakage onto other qubits via leakage transport. If not removed, leakage continues spreading, affecting more qubits over time and increasing the leakage population ratio, or the fraction of qubits leaked at any given time, as shown in Figure 1(a), making QEC codes increasingly vulnerable. For example, our studies show that leakage errors increase the logical error rate by 27× and 467× for a distance 7 surface code after one and five QEC cycles, respectively. Thus, reducing the impact of leakage errors is crucial to improving the performance of QEC.
Leakage errors arise from fundamental device-level imperfections and cannot be wholly eliminated despite improving qubit qualities.Instead, recent approaches actively remove them as they occur by resetting the leaked physical qubits.The most common technique uses leakage reduction circuits (LRCs) that modify the syndrome extraction circuit to swap the data and parity qubits [3,6,24,63], as in Figure 1(b).Syndrome extraction rounds without LRCs proceed normally, where the parity qubits are measured and reset, eliminating any leakage from them.These rounds are followed by rounds with LRCs where the SWAPs remove leakage from the data qubits.Prior works schedule LRCs every alternate round throughout the duration of a program.However, our studies show that always scheduling LRCs is sub-optimal and limits their efficacy.Always-LRCs scheduling throughout program execution done in prior work has the following drawbacks.First, leaked qubits facilitate leakage transport through the two-qubit operations intended to eliminate them.Our studies show that the leakage population ratio (LPR) continues to increase over QEC cycles despite using LRCs; ideally, we want the LPR to remain as low as possible to prevent performance degradation.Second, LRC operations increase the number of two-qubit operations in a syndrome extraction round from 4 to 9, as shown in Figure 1(b).Two-qubit operations are themselves error-prone and serve as new sources of errors even when there are no leakage errors.Our studies show that although the state-of-the-art Always-LRCs scheduling policy can improve the logical error rate, its performance is still far from an idealized policy that schedules LRCs only when leakage occurs.For example, Always-LRCs scheduling can improve the logical error rate by 4× for distance 7 surface codes, as shown in Figure 1(c).However, the idealized policy can improve the logical error rate by up to 10×.Furthermore, this gap consistently increases with the increasing code distance, heavily limiting the performance of QEC.This paper aims to bridge this gap via the optimal usage of LRCs such that leakage errors are maximally removed while limiting leakage spread and minimizing errors from LRC operations.We propose ERASER that achieves this goal.
ERASER comprises three key components: (1) the Leakage Speculation Block (LSB) analyzes the current syndrome to speculate potentially leaked qubits, (2) the Dynamic LRC Insertion (DLI) block modifies the next syndrome extraction round to include LRC operations for this subset of qubits, and (3) the QEC Schedule Generator (QSG) issues the updated syndrome extraction schedules to the qubits. Designing efficient LSB logic is non-trivial because leakage errors may remain invisible during syndrome extraction while continuing to induce errors on other qubits. Even if they impact syndrome extraction, they create random parity qubit flip patterns, as a leaked data qubit can cause any arbitrary combination of its four neighboring parity qubits to flip. Efficient DLI logic design is also challenging as the QEC schedules must be adapted in real time. Failure to introduce LRCs in real-time causes leakage to persist, whereas waiting to determine which LRCs to use causes QEC cycles to slow down and errors to accumulate on qubits.
To overcome these challenges, we leverage the insight that most leakage errors become visible to syndrome extraction within a few syndrome extraction rounds, and thus, optimizing the LSB to tackle visible leakage is sufficient.To address the challenge related to arbitrary syndrome bit-flip patterns caused by leakage errors, we speculate a leakage has occurred if at least half of the neighboring parity qubits have flipped for a data qubit.This achieves a sweet spot between two extremes: a conservative prediction based on too few neighboring syndrome bit-flips introduces more LRC operations, whereas a more aggressive prediction may cause leakage to remain undetected.Note that during Always-LRCs scheduling, each data qubit swaps with a unique parity qubit.However, as ERASER schedules LRCs dynamically, two data qubits may request to swap with the same parity qubit, thus preventing their LRCs from being scheduled concurrently.To resolve this problem, DLI schedules the LRCs in the upcoming round to maximize the number of LRCs scheduled.By scheduling LRCs only for likely-leaked data qubits, ERASER removes leakage errors while minimizing any additional errors caused by LRCs.
Finally, recent device-level research has been increasingly exploring the efficacy of multi-level readout, which classifies additional states beyond |0⟩ and |1⟩ [12,49]. While the accuracy of multi-level readout is worse than standard readout, the additional information granted by multi-level readout can enhance leakage detection accuracy. We propose ERASER+M that leverages multi-level readout to improve LRC scheduling further.
Our evaluations show that ERASER and ERASER+M improve the logical error rate by up to 4.3× and 23×, respectively, compared to Always-LRCs scheduling. Furthermore, ERASER requires <1% logic and 5ns latency on Xilinx FPGAs, demonstrating that real-time leakage suppression can be achieved at low cost. Further evaluations regarding the applicability of ERASER to alternatives for LRCs and an analysis of its performance under different noise models can be found in the Appendix.
Overall, this paper makes the following contributions: (1) Our studies show that always scheduling LRCs throughout program execution (Always-LRCs scheduling) has limited efficacy.
(2) We propose ERASER, a dynamic LRC scheduling policy that predicts the subset of qubits that may have leaked and only applies LRCs to those qubits in real time.To the best of our knowledge, this is the first paper that proposes real-time leakage suppression.
(3) We propose a Leakage Speculation Block to accurately detect leakage and Dynamic LRC Insertion to adapt the QEC schedules.
(4) We propose ERASER+M, which extends ERASER to leverage the capabilities of multi-level readout.
BACKGROUND AND MOTIVATION
2.1 The Surface Code
A distance d surface code encodes a logical qubit using an alternating lattice of d² data and d²−1 parity qubits, as shown in Figure 2(a). Periodically, syndrome extraction circuits entangle the data qubits with their neighboring parity qubits to project any errors on the data qubits into Pauli errors, I (none), X (bit flip), Y (bit and phase flip), and Z (phase flip), onto the parity qubits [21,25,38,39,54]. Each parity qubit and its neighboring data qubits execute a quantum circuit to measure a 4-qubit operator, called a stabilizer, in a syndrome extraction round. The surface code uses two types of stabilizers, X and Z, which detect Z and X errors, respectively. The surface code can correct arbitrarily many errors provided the errors do not form an error chain (a sequence of adjacent errors) of length more than ⌊(d − 1)/2⌋.
Decoding Errors on the Surface Code
Errors on the logical qubit are detected by periodically executing syndrome extraction circuits, which measure the parity checks.
These parity checks, also called syndromes, are used to identify errors on the logical qubit in real-time by pairing or matching the non-zero parity bits using graph algorithms, such as Minimum-Weight Perfect Matching (MWPM). This process is known as decoding and is performed independently for X and Z stabilizers. For example, matching the two non-zero (flipped) syndrome bits enables the decoder to accurately identify the error on the data qubit shown in Case-1 of Figure 2(b). In practice, decoders simultaneously decode d consecutive rounds of syndromes to tolerate operational errors in syndrome extraction. This constitutes a logical or QEC cycle. MWPM is widely recognized as the gold standard amongst surface code decoders because of its high accuracy [16,17,33,56,70].
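As a toy illustration of the matching step, the sketch below pairs the flipped parity checks of a single syndrome so that the total Manhattan distance of the pairing is minimized, using a maximum-weight matching on negated weights. It is only didactic: the coordinates and weights are invented, and boundary nodes and measurement-error edges, which production MWPM decoders (e.g., PyMatching) handle, are omitted.

```python
import networkx as nx

def match_flipped_checks(flipped):
    """Pair flipped parity checks, given as (row, col) coordinates, minimizing
    total Manhattan distance. Assumes an even number of flips (no boundary)."""
    g = nx.Graph()
    for i in range(len(flipped)):
        for j in range(i + 1, len(flipped)):
            a, b = flipped[i], flipped[j]
            dist = abs(a[0] - b[0]) + abs(a[1] - b[1])
            g.add_edge(i, j, weight=-dist)  # negate: max-weight == min-distance
    pairs = nx.max_weight_matching(g, maxcardinality=True)
    return [(flipped[i], flipped[j]) for i, j in pairs]

# Two nearby flips (likely caused by one data-qubit error) get paired, as do the other two.
print(match_flipped_checks([(2, 1), (2, 3), (5, 5), (6, 5)]))
```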
Pauli+ Errors and Leakage Errors
Not all errors on data qubits can be classified as Pauli errors on real quantum hardware.In recent demonstrations of QEC codes, Google identified a class of errors in addition to decoherence and operational errors that reduce the performance of QEC [1].These errors are referred to as Pauli+ errors and are fundamentally hard to tackle for two reasons.First, it is difficult to characterize these errors in real systems.Second, these errors can cause correlated errors, and thus, a decoder unaware of such correlations may be unable to handle such errors [1,6,65].
Leakage errors, which occur when a qubit leaves the computational basis (|0⟩ and |1⟩) and enters a higher energy state |L⟩, are the most damaging class of Pauli+ errors because leaked physical qubits spread errors onto other qubits through two-qubit operations [1, 3, 5-8, 24, 48, 53, 69]. As these two-qubit operations are calibrated for only the computational basis, performing an operation between a leaked qubit and an unleaked qubit can lead to either (1) a random error modifying the unleaked qubit's state (which can be modeled as a random Pauli error), or (2) the unleaked qubit becoming leaked through leakage transport from the leaked qubit [48,49,53,69].
Although leakage errors are less frequent than gate and measurement errors, they significantly degrade the logical performance because the errors spread by leakage errors obfuscate other errors from getting detected. For example, Case-2 of Figure 2(b) shows how the leaked qubit at the center of the code lattice leads to faulty syndrome extraction on its adjacent stabilizer, causing it not to flip. Now, the decoder observes only a single non-zero stabilizer and assigns an error to the data qubit on the boundary of the lattice, thereby introducing an error itself while the actual error remains undetected. Moreover, the leaked qubit remains faulty, creating a pathway for the leakage to spread onto other qubits, leading to an even greater possibility of errors in future syndrome extraction rounds. Consequently, the logical error rate increases, degrading the performance of QEC.
For example, Figure 2(c) shows the logical error rate of a distance 7 surface code over multiple QEC cycles.After the first cycle, the logical error rate is 27× higher in the presence of leakage.Moreover, the impact of leakage accumulates over multiple cycles.For example, the logical error rate increases by only 5× after five QEC cycles in the absence of leakage errors, whereas it increases by nearly 100× in the presence of leakage errors.Leakage errors rapidly widen the gap in logical performance with increasing QEC cycles, going from 27× to 467× in just five cycles.The sharp decline in logical performance shows that leakage errors pose a significant barrier to scaling up logical qubits and realizing fault tolerance.
Prior Works on Leakage Error Reduction
There are several prior works that focus on leakage error mitigation that can be classified into three broad categories: (1) Post-processing: This approach identifies leakage errors from stabilizer flips observed during syndrome extraction [8,69].The drawback of this approach is that it requires many rounds to determine leakage errors accurately, and thus, it is mainly used to post-select or filter trials that had leakage errors during memory experiments on real systems.
(2) Calibrating operations on leaked states: This approach mitigates leakage by using new operations on leaked qubits that interact with states (|L⟩) outside the computational basis [49,52,53]. Such approaches are either inherently specific to the underlying quantum processor [53] or require calibrating custom pulses to interact with higher energy states [43,49]. Thus, such approaches are not the focus of this paper, which tackles leakage in a manner generalizable to any processor. (3) SWAP-Based Leakage Removal: This involves swapping leaked data qubits with unleaked parity qubits during syndrome extraction. The modified syndrome extraction circuit is called a leakage reduction circuit or LRC [3,6,24,63]. The measurement and reset operations post-SWAP eliminate leakage from the data qubit. LRCs are scheduled periodically to minimize both parity and data qubit leakage, as shown in Figure 3 for the d = 3 code. In round R1, no LRCs are performed, and the parity qubits are measured and reset during usual syndrome extraction, thus removing any leakage from the d²−1 parity qubits. In round R2, d²−1 data qubits are scheduled for LRCs (each data qubit is swapped with a unique parity qubit). The LRC in round R3 removes leakage from the remaining data qubit. LRCs are a straightforward approach for leakage reduction readily implementable on any device as their only overhead is modifying the syndrome extraction circuit.
Figure 3: An example of a SWAP LRC schedule.
Limitations of LRCs
LRCs have two fundamental limitations.First, LRCs are unoptimized for reducing the impact of leakage transport as it has only been observed recently on real systems [52,53].Second, LRCs are inefficiently scheduled: qubits do not have leakage errors every round, so using LRCs only adds additional points of failure during syndrome extraction.
Goal
Ideally, we want greater accuracy while maintaining the simplicity of LRCs to mitigate leakage errors.Our proposed solution ERASER achieves this goal.
ARE ALWAYS-LRCS A GOOD IDEA?
We discuss the limitations of LRCs, specifically their poor performance in the presence of leakage transport.
LRCs facilitate leakage transport
An LRC, shown in Figure 4(b), removes leakage from a data qubit (D) by swapping it with a parity qubit (P). However, an LRC may introduce leakage onto P via leakage transport when D is leaked. In such a situation, the LRC may introduce leakage rather than remove it as intended. In the following section, we model the introduction of leakage errors during syndrome extraction with and without an LRC. A summary of the notation used in this section is shown in Table 1.
3.1.1 Leakage Errors Without LRCs. Consider a syndrome extraction round without an LRC, as shown in Figure 4(a), and suppose that the parity qubit P is leaked. During the round, P may transport leakage to one of its neighboring data qubits, which we denote D, whereas any leakage on P will be removed once it is reset. Thus, we are interested in the probability that D becomes leaked by the end of the round, given P is leaked before the start of the round. We denote this probability as P(data | parity), as designated in Table 1.
D can only incur leakage through either (1) operation errors in the CNOTs with its neighboring parity qubits or (2) a transport error in the CNOT with P. For calculating (1), we note that each CNOT involving D can cause leakage (not due to transport) with a fixed per-operation probability; combining the operation-error and transport contributions gives Equation (1) for P(data | parity).
3.1.2 Leakage Errors with LRCs. Now, we consider syndrome extraction with an LRC, as in Figure 4(b), and suppose that now the data qubit D is leaked. We are interested in the probability that P becomes leaked by the end of the round, given D is leaked before the start of the round. We denote this probability as P(parity | data), as in Table 1. However, unlike the situation without an LRC, we note that P is used in nine CNOTs. Furthermore, P interacts with D six times. However, only four of these CNOTs occur before D is reset and can thus cause leakage transport. The other two CNOTs occur after D is reset and are unlikely to cause leakage transport.
As with the prior calculation, we can separate P(parity | data) into (1) a probability of leakage caused by operation error and (2) a probability of leakage caused by leakage transport. Thus, P(parity | data) is found as in Equation (2), which we estimated to be about 34%.
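While Equations (1) and (2) themselves are not reproduced here, the way such per-round leakage probabilities compose can be sketched generically: a qubit ends the round leaked if at least one of its CNOTs injects leakage or transports it from a leaked partner. The per-CNOT probabilities below are assumptions of this sketch (0.1p injection with p = 10⁻³ and a 10% transport probability, matching the error model of Section 5); the CNOT counts follow the text (four stabilizer CNOTs per round without an LRC, nine with an LRC, four of which can transport). With these values the sketch roughly reproduces the ~3× gap and the ~34% figure quoted above.

```python
def p_leak_after_round(n_cnots, n_cnots_with_leaked_partner, p_inject, p_transport):
    """Probability a qubit ends a syndrome-extraction round leaked, assuming each
    CNOT independently injects leakage with p_inject and each CNOT with a leaked
    partner additionally transports leakage with p_transport."""
    p_stays_clean = (1 - p_inject) ** n_cnots
    p_stays_clean *= (1 - p_transport) ** n_cnots_with_leaked_partner
    return 1 - p_stays_clean

# Without an LRC: the data qubit sees 4 CNOTs, 1 of them with the leaked parity qubit.
print(p_leak_after_round(4, 1, p_inject=1e-4, p_transport=0.1))   # ~0.10
# With an LRC: the parity qubit sees 9 CNOTs, 4 of them with the leaked data qubit
# before it is reset, giving far more opportunities for transport.
print(p_leak_after_round(9, 4, p_inject=1e-4, p_transport=0.1))   # ~0.34
```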
3.1.3 Impact of Leakage Transport. As P(parity | data) is about 3× larger than P(data | parity), we expect that LRCs significantly contribute to increasing the amount of leakage on a logical qubit. This is indeed the case. Figure 5 shows the leakage population ratio (LPR), or the probability that a given physical qubit on the logical qubit is leaked, over 10 QEC cycles in a d = 7 code at p = 1 × 10⁻³. We observe two trends corroborating our analytical results in Equation (1) and Equation (2). First, the LPR spikes after even rounds, which are rounds with LRCs. Second, each spike generally increases the LPR over the last spike, increasing the LPR over time.
LRCs are inefficiently scheduled
The state-of-the-art LRC policy schedules LRCs every alternate round such that rounds without LRCs remove leakage from parity qubits, and rounds with LRCs remove leakage from data qubits.However, such scheduling is inefficient because the additional CNOTs in LRCs create new sources of failure.Ideally, we want to use LRCs only to remove leakage errors when they occur.
To assess the impact of the extra LRC operations on the logical performance (LPR and LER), we compare state-of-the-art LRC scheduling to an idealized scheduling policy that schedules LRCs for qubits as soon as they are leaked. Figure 6 shows the LPR and LER for both policies over 10 QEC cycles for a d = 7 code at p = 1 × 10⁻³. The LPR continues to increase for the state-of-the-art policy, resulting in 10× higher LER than the idealized policy. The performance gap is due to the idealized policy scheduling significantly fewer LRCs: the idealized policy schedules one LRC every three QEC cycles whereas the state-of-the-art policy schedules 24 LRCs every round.
Characterizing the Spread of Leakage
To better understand how leakage spreads on a real system, we perform density matrix simulations of a stabilizer on the surface code. Our simulation implements the leakage phenomena observed by Google during their recent demonstration of a distance 5 surface code on their Sycamore processor [1,53]. As Google Sycamore's leakage phenomena are reported to interact with the |3⟩ state, our simulation uses ququarts, where |L⟩ corresponds to |2⟩ and |3⟩. Figure 7(a) provides an overview of our simulation, which simulates the spread of leakage originating from a single leaked data qubit D0 across a stabilizer over an LRC round followed by a no-LRC round. During syndrome extraction, each CNOT can incur errors due to (1) leakage transport, (2) an error on the unleaked qubit if one operand is leaked, and (3) leakage injection, as shown in Figure 7.
We note that as our simulations in this section are restricted to a single stabilizer, the results observed understate the impact of leakage.In reality, the leakage error on 0 will also spread to the rest of the logical qubit and cause more errors.We refer to Google's recent studies on leakage and recently published qutrit simulations for more extensive analyses on the spread of leakage across an entire logical qubit [48,53].
ERASER: INSIGHTS AND DESIGN
We propose ERASER that judiciously schedules LRC operations such that errors from both leakage and LRC operations are simultaneously minimized.Figure 9 gives an overview of ERASER.The Leakage Speculation Block (LSB) uses the current syndrome to speculate a subset of qubits that may have leaked.The Dynamic LRC Insertion (DLI) block interrupts the QEC Schedule Generator (QSG) to modify the syndrome extraction circuits for the next round and issues LRC operations only for qubits speculated as leaked.
Enabling adaptive LRCs presents two key challenges.First, we must accurately speculate leakage.Failure to do so causes leakage to remain and degrade performance.Second, the control processor must integrate LRC operations into QEC schedules in real time to prevent QEC cycles from stalling.We discuss the insights to overcome these challenges.
Leakage Speculation Block: Challenges
The only information available about the qubits during syndrome extraction that could be used to detect leakage errors is the measured syndrome.We discuss how often leakage errors impact syndrome extraction and the challenges with precisely detecting leakage errors using syndromes.
4.1.1Is Leakage Visible or Invisible from Syndromes?Our analysis shows that leakage errors can be broadly classified into two categories: visible which immediately affect syndrome extraction, and invisible which persist for multiple rounds before affecting syndrome extraction.We discuss how long a data qubit potentially remains invisible without LRCs; note that parity qubit leakage does not accumulate as these qubits are reset every round.This happens in two scenarios: (1) When the leaked data qubit causes an error in syndrome extraction.This can be modeled as a depolarizing error and affects parity qubit measurement with a 50% probability.
(2) When the leaked data qubit transports leakage resulting in the parity qubit accumulating leakage.When measured, the parity qubit will be randomly classified as a 0 or 1.There is a 50% probability that the error affects the measurement.
As a data qubit neighbors at most four parity qubits, the probability a leaked data qubit is invisible in a round is ( 12 )4 = 1 16 .As a qubit remains invisible until it affects a parity qubit measurement (probability is 1 − 1 16 = 15 16 ), the probability a leaked data qubit remains invisible for rounds is given by Equation (3).
Table 2 shows the probability of a leaked data qubit remaining invisible over multiple rounds.Note that more than 99% of leakage errors affect syndrome extraction within two rounds, resulting in most leakage errors becoming quickly visible.
ERASER: Insight #1
Visible leakage errors are the most common variant of leakage errors, and optimizing LRCs for them is sufficient.Instead of attempting to identify exactly where leakage errors have occurred, we use the insight that leveraging the flipped syndrome bits to speculatively detect a leakage with high accuracy is sufficient.However, even performing such speculative detection is nontrivial as there is an inherent trade-off between LRC scheduling frequency and performance.Speculating too conservatively schedules too many LRCs, degrading the QEC code's performance as the extra LRC operations increase errors during syndrome extraction.On the other hand, speculating too aggressively schedules LRCs too infrequently, also degrading performance as leakage is not removed in time.To maximize performance, ERASER achieves a sweet spot between the two and schedules LRCs on a data qubit when at least 50% of its neighboring parity qubits flip.
ERASER: Insight #2
Speculative leakage detection has an inherent trade-off between LRC scheduling frequency and performance.Scheduling LRCs too aggressively or too conservatively will degrade performance by causing more errors to occur.
Leakage Speculation Block: Design
ERASER uses the current syndrome to speculate the data qubits that may have encountered leakage.The Leakage Speculation Block (LSB) maintains a Leakage Tracking Table (LTT) with one entry per data qubit, as shown in Figure 10.The LSB analyzes the current syndrome and speculates if a qubit has leaked.If it identifies a leakage, the corresponding LTT entry is marked as leaked. 4The LSB also maintains a Parity qubit Usage Tracking Table (PUTT) to track the allocation of parity qubits for LRC operations on the data qubits.In the Always-LRCs scheduling, LRCs span two consecutive syndrome rounds as the number of data qubits exceeds the number of available parity qubits for swapping, and more than one data qubit may require swapping with the same parity qubit.As ERASER dynamically schedules LRC operations, the conflict is resolved by scheduling LRCs for the data qubits on any adjacent available parity qubit.The rules for handling the LTT and PUTT entries are discussed in the following subsection.The Dynamic LRC Insertion (DLI) block uses the LTT and PUTT entries to introduce LRCs.
Speculating
Leakage on Data Qubits.A data qubit may have two, three, or four neighboring parity qubits.If no LRC operations were scheduled for a data qubit in the previous round (which yields the current syndrome) and at least half of the neighboring parity qubits flip, then the LSB block marks the LTT entry for the corresponding data qubit as leaked to schedule LRC operations in the next round.We choose half the number of parity qubits as a cutoff because, on average, half the parity qubits are expected to flip if there is a leakage error.Note that if LRC operations were scheduled on that particular data qubit in the previous round, any leakage on the qubit would have been removed, and we do not speculate any leakage even if 50% of its neighboring parity qubits flip.
Handling
Parity Qubits Usage Tracking.In Always-LRCs, each data qubit has a primary parity qubit it swaps with to perform an LRC.However, as there are 2 data qubits but only 2 − 1 parity qubits, LRC operations cannot be scheduled for all data qubits in the same round.Instead, one LRC must be carried over into the next round.For example, Figure 11(a) shows how both the leaked qubits conflict with the same primary parity qubit, and the LRC operations for both of them cannot be scheduled in the same round.
To overcome this limitation, we leverage the insight that as ERASER schedules LRCs dynamically, only a subset of data qubits will require LRC operations in the same round.Thus, LRCs need not be carried over to the next round.To facilitate this, we select one of the neighboring parity qubits for LRC operations based on availability at runtime instead of allocating primary parity qubits offline.The LSB allows each data qubit to use any neighboring parity qubit and marks it as used in the PUTT.Now, the LRC operations for both leaked data qubits in Figure 11(b) can be scheduled simultaneously.However, a completely arbitrary selection of parity qubits may lead to the accumulation of leakage on parity qubits if the same parity qubit gets selected for LRCs over multiple consecutive rounds for different data qubits.This happens because the associated parity qubit continuously gets swapped and is not reset for a prolonged duration.Note that each parity qubit may be used by up to four data qubits in case of such an arbitrary selection.To resolve this bottleneck, if a parity qubit has participated in an LRC in the previous round, it is marked as used in the PUTT and is not used for LRCs in the next round.The parity qubits that participated in LRC operations in the previous round will now be measured and reset in the next round, eliminating any leakage.The limited arbitrary selection of parity qubits enables us to schedule more LRCs in the same round and reduce leakage errors on both data and parity qubits.
Dynamic LRC Insertion: Challenges
Always-LRCs scheduling occurs offline before program execution by compiling syndrome extraction circuits down to the native gates of the quantum device [15,51,59]. During program execution, the control processor repeatedly executes these gates in each syndrome extraction round. However, as ERASER only schedules LRCs when needed, it must interrupt the instruction supplier or the QEC Schedule Generator (QSG) to update the schedule for the subset of qubits it has identified as leaked in the subsequent syndrome round. Note that the real-time constraint for scheduling is on the order of a few tens of nanoseconds. The QSG must know by the fourth CNOT in the syndrome extraction circuit whether to schedule an LRC, as it will need to perform a SWAP after this CNOT to execute the LRC. Figure 12 shows this leaves about 120ns between obtaining the previous syndrome and the end of the fourth CNOT in the current round, assuming each CNOT takes 30ns (according to Sycamore latencies) [1,2]. Failure to resolve whether or not to introduce LRC operations within this time either causes the qubits to idle until a decision is made or moves the LRC operations to the next round, causing the leakage to remain. Finally, as ERASER must be colocated within the control processor, it must fit on FPGAs to enable integration with existing quantum systems [2,16,26,36,50,61].
Figure 12: After a qubit is measured and the syndrome bit is sent to the control processor, there is a 120ns window to determine whether to schedule an LRC or not.
Dynamic Leakage Insertion: Design
After marking qubits as leaked in the LTT, ERASER attempts to schedule LRCs for all leaked data qubits while not scheduling parity qubits marked as used in the PUTT. We note that such scheduling is nontrivial as it requires solving a maximum matching problem in real time. We must pair each leaked data qubit with a unique unused parity qubit to swap with during an LRC. Also, we must maximize the number of leaked data qubits scheduled for LRC operations to ensure all leaked data qubits are reset in the next round.
To solve this problem efficiently, we propose using a lookup table containing pre-determined primary and backup SWAP neighbors for each data qubit; we call this lookup table the SWAP Lookup Table. For each leaked data qubit, ERASER uses the SWAP Lookup Table to get a neighboring parity qubit to swap with. If the parity qubit is already marked as used in the PUTT, ERASER looks through the backup parity qubits and repeats this process. By default, our design maintains one backup parity qubit for each data qubit.
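The sketch below shows a greedy version of this assignment using the primary/backup ordering from the SWAP Lookup Table; it is an approximation of the maximum-matching formulation described above, with hypothetical data structures rather than the actual hardware implementation.

```python
def assign_lrcs(leaked_data_qubits, swap_lookup, used_parity):
    """Pair each speculated-leaked data qubit with an unused parity qubit.

    swap_lookup: dict mapping a data qubit to an ordered [primary, backup, ...]
                 list of neighboring parity qubits.
    used_parity: set of parity qubits marked as used in the PUTT (e.g., those
                 that performed an LRC in the previous round).
    Returns {data_qubit: parity_qubit} for the LRCs scheduled in the next round.
    """
    schedule = {}
    for dq in leaked_data_qubits:
        for pq in swap_lookup[dq]:
            if pq not in used_parity:
                schedule[dq] = pq
                used_parity.add(pq)  # reserve it so no other data qubit takes it
                break
        # If every candidate is already used, this LRC is deferred to a later round.
    return schedule
```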
QEC Schedule Generation: Design
After identifying LRCs that need to be scheduled, the control processor must execute the LRC operations in the next syndrome extraction round. By default, the control processor executes the standard stabilizer circuits. The DLI interrupts the QEC Schedule Generation (QSG), appends the instruction schedules by inserting the extra CNOTs corresponding to the LRC operations, and replaces the measurement operations on the associated parity qubits with those on the data qubits selected for LRC operations.
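The following schematic Python sketch shows one way the schedule patching could look; the instruction tuples, the SWAP-as-three-CNOTs expansion, and the helper names are simplifications for illustration and not the actual QSG interface.

def patch_round(schedule, lrc_pairs):
    """Insert LRC operations for each (data, parity) pair into this round's tail."""
    for dq, pq in lrc_pairs:
        # Swap the data and parity states after the fourth CNOT of syndrome extraction.
        schedule += [("CX", dq, pq), ("CX", pq, dq), ("CX", dq, pq)]
        # Measure and reset the data qubit in place of the parity qubit: it now
        # carries the syndrome information, and the reset clears any leakage.
        schedule = [op for op in schedule if op != ("MR", pq)] + [("MR", dq)]
        # Swap back so the data state returns to the data qubit.
        schedule += [("CX", dq, pq), ("CX", pq, dq), ("CX", dq, pq)]
    return schedule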
Enhancing ERASER Using Multi-Level Readout Discriminators: ERASER+M
4.6.1 Modifications to the LSB. If a parity qubit is classified as |⟩ in the current round, we assume it has transported leakage to one or more of its neighboring data qubits. Therefore, we speculate all its adjacent data qubits have been potentially leaked and mark the corresponding entries in the LTT so that LRC operations can be scheduled on these qubits in the next round.
4.6.2 Modifications to the QSG. During syndrome extraction with an LRC, if the data qubit is classified as |⟩, we observe that the parity qubit has a meaningless state since the SWAP during the LRC would have failed due to the data qubit leakage. Either the parity qubit has leaked or has a random unleaked state. Consequently, performing the SWAP after the data qubit reset is unnecessary, as no useful information will be returned to the data qubit. However, the SWAP was also the only way to return the parity qubit to |0⟩, and this must be done before the next round. Thus, if the data qubit is classified as |⟩, the QSG (1) schedules a reset operation on the parity qubit and (2) squashes the second SWAP in the LRC circuit.
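A compact sketch of this measurement-dependent adjustment is shown below; the operation tuples and the function name are illustrative only.

def finish_lrc(data_outcome_is_leaked, dq, pq):
    """Return the trailing operations of an LRC, given the data qubit's readout."""
    if data_outcome_is_leaked:
        # The first SWAP failed, so the parity qubit holds no useful state:
        # reset it directly and squash the second SWAP.
        return [("R", pq)]
    # Normal case: swap the data state back from the parity qubit.
    return [("CX", dq, pq), ("CX", pq, dq), ("CX", dq, pq)]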
EVALUATION METHODOLOGY
In this section, we discuss our evaluation methodology before discussing our results.
Error Model
In this subsection, we discuss the error model used in our evaluations corresponding to different types of errors.
Modeling Operation Errors.
We consider a physical error rate of p = 1 × 10⁻³ and a circuit-level error model that injects (1) depolarizing errors on data qubits with probability p at the start of a round, (2) measurement errors on qubits with probability p, (3) depolarizing errors on qubit operands after each CNOT or Hadamard gate with probability p, and (4) initialization errors on qubits after a reset with probability p [27,42].
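As a rough illustration of how such circuit-level noise channels can be attached in Stim (without the leakage extension described next), consider the fragment below; the qubit layout and the exact placement of the error channels are assumptions for the sketch, not the authors' full syndrome-extraction circuit.

import stim

p = 1e-3  # physical error rate

def noisy_round_fragment(data, parity):
    """Toy fragment attaching the four circuit-level noise channels."""
    c = stim.Circuit()
    for q in data:
        c.append("DEPOLARIZE1", [q], p)          # (1) data error at the start of a round
    for dq, pq in zip(data, parity):
        c.append("CX", [dq, pq])
        c.append("DEPOLARIZE2", [dq, pq], p)     # (3) two-qubit gate error
    for pq in parity:
        c.append("X_ERROR", [pq], p)             # (2) flip before measurement (measurement error)
        c.append("MR", [pq])
        c.append("X_ERROR", [pq], p)             # (4) initialization error after the reset
    return c

print(noisy_round_fragment([0, 2], [1, 3]))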
Modeling Leakage Errors.
Modeling leakage in memory experiments is inherently challenging to approximate in a tractable manner [1,48,63]. To ensure our results are reflective of real systems, we design our leakage error model based on prior studies on real systems [1,49,53,64,69]. Our simulations also inject and track leakage in a manner consistent with prior work [6,7,24,48,63]. We extend the circuit-level error model to inject leakage errors (1) on data qubits at the beginning of each round with probability 0.1p to model environment-induced leakage and (2) on qubit operands after CNOT operations with probability 0.1p to model operation-induced leakage. When an unleaked qubit interacts with a leaked qubit through a CNOT, we inject a random Pauli error (I, X, Y, or Z) on the unleaked qubit and apply a leakage transport with a 10% probability.
Our implementation of leakage transport conservatively assumes that the source qubit remains leaked after a leakage transport. Section A.1 reports results for an alternative implementation of leakage transport, where the source qubit may return to the computational basis provided the other qubit is not leaked.
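A classical side-tracking of this leakage model could look roughly like the following sketch; the data structures are hypothetical, and the random Pauli is drawn from {I, X, Y, Z} as described above.

import random

LEAK_TRANSPORT_PROB = 0.10

def apply_cnot_with_leakage(leaked, paulis, a, b):
    """Classical side-tracking of leakage for one CNOT between qubits a and b.
    leaked: dict qubit -> bool; paulis: list collecting injected Pauli errors."""
    if leaked[a] == leaked[b]:
        return  # both leaked or both unleaked: nothing extra to inject here
    victim = b if leaked[a] else a
    paulis.append((random.choice("IXYZ"), victim))   # random Pauli on the unleaked qubit
    if random.random() < LEAK_TRANSPORT_PROB:
        leaked[victim] = True                        # conservative model: the source stays leaked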
5.2.3 Modeling the Measurement of Leaked Qubits. The output state of a qubit (0 or 1) is determined by a measurement discriminator [37,50,64]. If a standard two-level discriminator measures a leaked qubit, the outcome will be random because the discriminator is not trained to classify |⟩. We assume this for ERASER. For ERASER+M, we assume that a multi-level discriminator, which classifies |0⟩, |1⟩, and |⟩, is erroneous at a rate of 10 to be consistent with results on real systems [12,49].
Simulation Infrastructure
We use Google's Stim simulator [27], a state-of-the-art framework for performing state-preservation, or memory, experiments [1,4,9,29,30,71], which we have extended to simulate leakage errors. Our evaluations go up to ten QEC cycles (each cycle is d rounds) to evaluate the efficacy of our design over time. We use Minimum-Weight Perfect Matching decoding [22], but any other decoder may be used as well.
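For reference, a plain (leakage-free) memory experiment with Stim and a matching decoder can be set up roughly as follows; this sketch uses Stim's built-in surface-code generator and PyMatching rather than the authors' extended simulator, and the parameters are illustrative.

import numpy as np
import pymatching
import stim

d, p, shots = 5, 1e-3, 100_000
circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=d, rounds=10 * d,
    after_clifford_depolarization=p,
    before_measure_flip_probability=p,
    after_reset_flip_probability=p,
)
dem = circuit.detector_error_model(decompose_errors=True)
matcher = pymatching.Matching.from_detector_error_model(dem)
dets, obs = circuit.compile_detector_sampler().sample(shots, separate_observables=True)
predictions = matcher.decode_batch(dets)
ler = np.mean(np.any(predictions != obs, axis=1))
print(f"logical error rate over {10 * d} rounds: {ler:.2e}")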
Hardware Cost of ERASER
To evaluate the hardware overheads of our design, we target Xilinx's off-the-shelf Kintex UltraScale+ FPGA and synthesize our design using Vivado.
EVALUATIONS
In this section, we discuss the performance of our proposed designs ERASER and ERASER+M.
Impact on Logical Error Rate
Logical error rate (LER) denotes the capability of a QEC code to suppress errors. A lower LER and an exponentially decreasing LER with increasing code distance are desirable. Figure 14 shows ERASER improves the LER consistently with increasing distance, on average by 3.3× and up to 4.3× in the best case. ERASER+M is even more effective and achieves near-optimal LER, improving the LER on average by 8.6× and up to 26× in the best case.
At lower physical error rates such as p = 10⁻⁴, ERASER's performance improves, reducing the LER by 5.4× on average and up to 9× compared to Always-LRCs. Concurrently, ERASER is now closer in performance to ERASER+M and optimal scheduling, as error events become sparser at lower physical error rates [11,18-20,60] and so leakage errors become more visible.
Impact on Leakage Population Ratio
A lower leakage population ratio (LPR) means a greater reduction of leakage errors. Figure 15 shows the LPR of the default d = 11 configuration for the competing LRC scheduling policies. ERASER consistently maintains a lower LPR and decreases the LPR by 1.5× on average and up to 2.1×. Furthermore, ERASER+M bridges the gap to optimal LRC scheduling and reduces the LPR by 2.2×.
Hardware Implementation Cost
Table 3 shows the hardware resources required for implementing our proposed ERASER on standard off-the-shelf FPGAs, as they are already being used to control and read out qubits on most existing quantum computers. Our implementation of ERASER requires less than 1% logic utilization up to d = 11 and has a worst-case latency of 5ns to speculate leakage and adapt the QEC schedules. This makes ERASER a very practical, low-overhead, and accurate solution that eliminates leakage errors in real time. We further analyzed why there is a 3% accuracy gap between ERASER (and ERASER+M) and optimal LRC scheduling. Figure 16 shows the false positive rates (FPR) and false negative rates (FNR) for LRC usage across all policies. We make two observations. First, ERASER and ERASER+M can easily identify situations with no leakage errors, with a 3% FPR compared to a 50% FPR for Always-LRCs. Minimizing the FPR is crucial as qubits are typically not leaked, so applying LRCs may create new errors. Second, ERASER is not as accurate when detecting leakage, though ERASER+M can improve detection accuracy by up to 1.2×.
When does ERASER have False Negatives?
The higher FNR of ERASER may appear alarming, but we observe that the false negatives incurred by ERASER are hard-to-detect leakage errors. By design, ERASER's false negatives are either (1) invisible leakage errors or (2) leakage errors that only flip one parity check, which go undetected as ERASER schedules LRCs when at least two parity checks have flipped. Such errors are hard to identify as they barely affect any syndrome measurements. Nevertheless, as shown with ERASER+M, which has an FNR of 40% compared to ERASER's 50%, even small reductions in the FNR can significantly improve the logical error rate, as ERASER+M has similar performance to optimal LRC scheduling.
Analysis of Trade-Offs for ERASER+M
Although ERASER+M is significantly more effective compared to ERASER, it incurs overheads of using multi-level discriminators.
The measurement discriminator of a qubit is prepared by initializing it into each possible state that we want to classify, measuring it, and using the output signal to train a classification function. Typically, each execution is repeated for a few thousand trials. Multi-level discriminators must be trained to classify |⟩ states in addition to the usual |0⟩ and |1⟩. This results in two sources of overheads: (1) we must calibrate a single-qubit operation that can initialize a qubit in a higher energy state (such as |2⟩) and (2) additional executions are needed to prepare and measure a qubit in the higher energy state to obtain the output signal for the leaked state. This process is required for each qubit. Assuming calibrating a single-qubit operation takes about 1K shots and another 1K shots are required to calibrate the classifier for the |⟩ state, we require about 2K extra trials per qubit, i.e., 2K·N in total, where N is the number of physical qubits on the machine. Nevertheless, we note that other strategies also leverage multi-level discrimination, and thus ERASER+M naturally synergizes with such strategies [63]. Note that ERASER is already very effective, and integrating the modifications needed for ERASER+M can be managed solely in software. Hence, the choice of using ERASER versus ERASER+M can be left to the programmer.
RELATED WORK
In this section, we discuss related work and compare or contrast as appropriate.
7.1 Leakage errors and their impact on QEC
Improving device qualities and increasing system sizes have accelerated the demonstration of QEC codes in recent years [1,41,64]. These real-system studies reveal that leakage errors significantly degrade the performance of QEC. For example, the studies performed on Google Sycamore rely on post-processing the results to eliminate experimental results from rounds with leakage errors. While post-processing can be used during experimentation, it cannot be used during program execution on a fault-tolerant quantum computer, where errors, including leakage errors, must be suppressed in real time. In contrast, ERASER actively removes leakage errors by efficiently scheduling leakage reduction circuits.
Handling leakage errors
Although strategies for mitigating leakage errors have been studied in the past, they are either low-cost but inaccurate or accurate with added overheads [1,41,52,53,64]. Leakage Reduction Circuits (LRCs) remove leakage from data qubits by executing SWAPs with other ancilla or parity qubits [3,6,63]. There are three varieties: Full LRCs, Partial LRCs, and SWAP LRCs. As the former two variants of LRCs require denser device connectivity, we consider SWAP LRCs in this paper, which remove leakage errors from data qubits by swapping them with parity qubits.
Recent works have provided new leakage reduction strategies through custom operations that interact with states outside the computational basis [49,53]. While such operations may require modifications to the quantum system [49], incur additional calibration overheads [43], or be specific to the underlying device [53], their performance is rather promising as they offer better performance than SWAP-based LRCs. Nevertheless, as such operations can also be erroneous and introduce leakage themselves, we observe that ERASER can improve the fidelity of such approaches as well, which we discuss at length in Section A.2.
CONCLUSION AND DISCUSSION
Leakage errors present a significant barrier to realizing fault-tolerant quantum computing as they degrade the performance of quantum error correction (QEC) codes. These errors cause qubits to leave computational basis states and enter higher energy states. Leakage errors are not device-specific and have been observed in both superconducting processors [41,49,52,53] and ion traps [5].
Prior works actively eliminate these errors by using leakage reduction circuits (LRCs) to periodically remove leakage from data qubits through SWAPs and resets. However, always using LRCs throughout a program is sub-optimal, as they introduce additional two-qubit operations that facilitate leakage transport onto other qubits and may themselves fail. Ideally, LRCs should be scheduled so that leakage is wholly removed while ensuring minimal impact from the extra LRC operations.
We propose ERASER, which detects the subset of qubits that may have leaked in real time and judiciously applies LRC operations only on those qubits. ERASER leverages the insight that most leakage errors cause arbitrary parity check failures during QEC cycles. By identifying patterns in the failed parity checks, ERASER speculates the subset of leaked qubits. Once the potentially leaked qubits are identified, ERASER adjusts the syndrome extraction schedules for these qubits by introducing LRC operations in real time. The accuracy of leakage identification can be further enhanced by modifying the qubit measurement protocols to classify leaked states in addition to computational basis states. We leverage this insight to enhance ERASER using multi-level measurement classifiers. ERASER improves the logical error rate by up to 4.3× compared to Always-LRCs.
ERASER is the first work to consider real-time leakage suppression, and ERASER's superior performance to Always-LRCs demonstrates that real-time, or adaptive, leakage suppression provides significant benefits over static leakage suppression, where LRCs are scheduled offline at compile time. Our results suggest that accurately speculating leakage in real time is an important open problem. While ERASER's speculation accuracy is rather high, its poor FNR due to hard-to-detect leakage errors is a significant source of logical error. Fortunately, we observe that even minor improvements in speculation accuracy, particularly in the FNR, can significantly improve the logical error rate. Thus, more sophisticated speculation strategies for leakage detection appear to be a rich and promising area for future research.
Finally, we observe that qubit loss in ion traps and neutral atom systems has a similar signature to leakage on superconducting systems, which was the predominant focus of this work [14,31,47,62,72]. As qubit losses can cause operations to fail, such systems must be capable of detecting qubit loss through loss detection mechanisms to avoid errors. Given this parallel, we expect strategies similar to ERASER may be fruitful on such systems. Furthermore, as ion traps and neutral atom systems are much slower than superconducting systems, we note that the time constraints for identifying leaked qubits are more generous, allowing for more sophisticated and accurate strategies.
A.1 Alternative Model for Leakage Transport
The leakage transport model used in the main text conservatively assumes that the source qubit remains leaked after a transport; that is, both qubits involved in the transport are leaked after the transport finishes. In this section, we consider an alternative model, where the source qubit and receiving qubit "exchange" leakage with each other. In such a model, if the receiving qubit is not leaked, it will become leaked, whereas the source qubit will return to the computational basis in a randomly initialized state. If the receiving qubit is leaked, then the transport essentially has no effect.
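A minimal sketch of this exchange-based variant, using the same hypothetical leakage-tracking dictionary as in the earlier sketch:

def transport_exchange(leaked, src, dst):
    """Alternative model: leakage is exchanged rather than copied."""
    if leaked[dst]:
        return            # receiving qubit already leaked: the transport has no effect
    leaked[dst] = True    # the receiver becomes leaked...
    leaked[src] = False   # ...and the source returns to the computational basis
                          # (in a randomly initialized state, tracked elsewhere)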
Figure 17 shows the LER for all policies under this alternative model for 10 QEC cycles. As expected, all policies improve quite a bit under the alternative model. However, we further note that the gap between ERASER and Always-LRCs widens significantly, whereas the gap between ERASER and Optimal-LRCs has narrowed considerably. Now, ERASER improves the LER compared to Always-LRCs on average by 6.5× and up to 13.4×. Concurrently, ERASER+M improves the LER on average by 8.8× and up to 24.1×. We believe that ERASER improves significantly under this alternative leakage transport model for two reasons. First, we note that the LPR for all policies is significantly lower. Figure 18 shows the LPR for all four policies. We note that the LPR is substantially lower under the alternative model, as the number of leaked qubits is preserved during a leakage transport under this model. Hence, the LPR curves for all policies, except Always-LRCs, stabilize. The LPR for Always-LRCs spikes after rounds with LRCs and reduces after rounds without LRCs because LRCs may fail to remove leakage due to leakage transport. Second, LRCs have lower error than in the original model. Consequently, the impact of ERASER's high FNR is much lower compared to the original model.
A.2 Applicability of ERASER with DQLR
The evaluations in the main text consider the traditional SWAP-based LRC, which had been considered by much prior work [3,6,25,63]. However, recent work has been moving towards LRCs using custom operations with tremendous success [49,53]. As these operations exploit the underlying physics of the corresponding quantum processor, they can be calibrated for their respective systems without much difficulty. However, like any other quantum operation, these customized operations also may be erroneous. In this section, we analyze the applicability of ERASER for LRCs involving such operations. Specifically, we examine Google's DQLR approach [53] as shown in Figure 19(a), and we use the alternative leakage transport model from Section A.1 to ensure our results are reflective of Google Sycamore's leakage transport phenomena.
A.2.1 The DQLR Protocol. The DQLR protocol removes leakage from data and parity qubits every round by (1) performing syndrome extraction as usual, (2) resetting all parity qubits, which removes any leakage on the parity qubits, (3) using a custom operation known as a LeakageISWAP to remove data qubit leakage and move it to a parity qubit, and (4) resetting the parity qubits yet again. The fidelity of this operation is rather high, as (1) DQLR is not vulnerable to leakage transport, and (2) it only requires a single two-qubit operation to remove leakage. However, as shown in Figure 19(b), the DQLR protocol can introduce leakage on the data qubits if the first parity qubit reset fails (the parity qubit is initialized in |1⟩ instead of |0⟩), as then the data qubits may be excited to |2⟩. Thus, much like SWAP-based LRCs, overusing the DQLR protocol is risky, as it may introduce leakage even when there was no leakage to begin with.
A.2.2 Results. We examine the applicability of ERASER to the DQLR protocol and assume that the LeakageISWAP gate has the same fidelity as a CX gate. We compare the baseline DQLR policy, which executes the leakage removal protocol every syndrome extraction round; ERASER and ERASER+M, which schedule DQLR speculatively; and Optimal, which schedules DQLR whenever there is a data qubit leakage. Figure 20 shows the LER for all four policies. We observe that ERASER improves upon the baseline DQLR protocol by 1.8× on average and up to 1.9×, whereas ERASER+M improves by 2× on average and up to 2.6×. We note that there is about a 4.4× gap between the optimal scheduling of DQLR and the baseline DQLR protocol. These results demonstrate that custom approaches can benefit significantly from real-time scheduling.
We further examine the LPR of all four policies. Figure 21 shows the LPR for all four policies for d = 11 over 110 syndrome extraction rounds. Unlike SWAP-based LRCs, DQLR stabilizes the LPR rather quickly, as was reported in prior work [53]. However, as DQLR can cause leakage if the first reset fails, overusing DQLR can cause additional leakage. As ERASER and ERASER+M judiciously schedule DQLR, they reduce the LPR by about 1.4× and 1.5×, respectively.
B APPENDIX: ARTIFACT
B.1 Abstract
The artifact contains the source code used to evaluate the designs proposed in this paper. We have listed how to reproduce the key results of our paper, namely those presented in Figures 5 and 6, which motivate the problem of inefficient LRC scheduling; Figures 14-16, which are our main results; and Table 2, which lists the utilization and timing results for our design (in RTL).
B.2 Artifact check-list (meta-information)
• Algorithm: ERASER, a leakage-detection algorithm. The code is built using CMake v3.20.3, though slightly older versions should be fine and can be enabled by modifying CMakeLists.txt. The compilers used in our evaluations were g++-12 and g++-13, and we also used OpenMPI v4.x.x to parallelize the experiments on computing clusters. All other dependencies have been packaged with the code and are referenced through CMake.
For data involving RTL, we used Vivado 2023.1 to synthesize the design and obtain utilization and timing data, but older Vivado versions (i.e., 2022.x) should be sufficient.
B.4 Installation
We encourage using two build directories, build and build_RTL, to avoid any issues. build is for creating the data for Figures 5, 6, 14, 15, and 16, whereas build_RTL is for generating the RTL (default distance 9). The executables leakage and eraser_rtl_gen can be generated as follows:
$ cd build
$ cmake .. -DCMAKE_BUILD_TYPE=Release
$ make -j8
$ cd ../build_RTL
$ cmake .. -DRTL=On -DCMAKE_BUILD_TYPE=Release
$ make -j8
B.5 Experiment workflow
B.5.1 Main Paper Figures. We explain how to generate the data for Figures 5, 6, 14, 15, and 16, which represent the main insights and results of the work. We have provided several bash scripts in the leakage folder: figure_5_6.sh and figure_14_15_16.sh, which generate the data for the corresponding figures. We recommend running figure_5_6.sh first as it can be done on any laptop within an hour. For figure_14_15_16.sh, we recommend using a cluster with sufficient memory, as the larger distance codes require significant amounts of memory and may need many cores to complete in time. For reference, our evaluations for Figures 5 and 6 took five minutes on an ARM server using 64 cores with about 1GB per core. In contrast, our evaluations for Figures 14 through 16 took two days running on a cluster with 512 cores, with about 8GB per core.
For figure_5_6.sh, there are only two parameters: proc, the number of processors (used by MPI), and shots, the number of trials to use in the experiment. The number of processors can be set to the user's preference. For shots, we used 100K in the paper, though 10K would be fine also.
For figure_14_15_16.sh, there are three parameters: p, the physical error rate; proc, the number of processors; and shots, the number of trials in the experiment. In the paper, Figure 14 uses both p = 10⁻³ and p = 10⁻⁴, whereas Figures 15 and 16 both use p = 10⁻³. We also note that the number of trials needed to obtain meaningful data increases with code distance (d) and lower physical error rate. We found that 10M trials (shots) is sufficient for all experiments at p = 10⁻³, whereas 100M trials will provide the data reported for p = 10⁻⁴ in the paper. We note that d = 9 and d = 11 will be incomplete as they require more trials, likely 1B or beyond, which is intractable to perform with our setup.
B.5.2 RTL Statistics. To generate the RTL (which is in SystemVerilog), run:
$ cd build_RTL
$ ./eraser_rtl_gen <DISTANCE> > <RTL-FILE>
For example, ./eraser_rtl_gen 9 > eraser_d9.sv will write the RTL for a distance 9 code to the eraser_d9.sv file. After obtaining the RTL for distances 3 to 11, make a project in Vivado with the source file as the sole file. Then, add a constraint file (.xdc) to drive the clk signal for the RTL. Our file contained a single line in which PERIOD (which is in nanoseconds) can be set to any value based on the desired frequency (i.e., for a frequency of 250MHz, set PERIOD = 4). Our designs generally have low critical path latencies. See the Vivado documentation for more details on adding constraint files.
Figure 1: (a) Leakage errors spread over time. (b) Regular syndrome extraction resets parity qubits every round, removing any leakage from them. Leakage reduction circuits (LRCs) swap data and parity qubits to remove leakage from the data qubit at the expense of five extra CNOTs (two extra SWAPs cost five CNOTs). (c) Logical error rate without LRCs, state-of-the-art Always-LRCs, and idealized LRC scheduling over 10 QEC cycles.
Figure 2: (a) Distance (d = 3) surface code. (b) In Case 1 (no leakage), there is an error on a data qubit that causes two stabilizers to flip. In Case 2 (with leakage), the same data qubit has an error, but the central data qubit has a leakage error, which causes one stabilizer not to flip. The decoder fails to identify the actual data qubit error and instead assigns the error to the boundary. (c) Logical performance comparison with and without leakage errors.
Figure 4: Syndrome extraction (a) without an LRC, and (b) with an LRC. In (b), one SWAP swaps the parity and data qubit states, and another swaps them back.
Figure 6: LPR (top) and LER (bottom) comparison between state-of-the-art and idealized LRC scheduling.
Figure 7: (a) The simulated stabilizer. The density matrix simulation starts with qubit 0 initialized in |2⟩. (b) CNOTs are followed by leakage transport, errors, and leakage injection.
Figure 8: (top) Spread of leakage errors, and (bottom) the effect of leakage on stabilizer measurement probability. We do not show qubit 0's leakage probability as it begins the simulation already initialized in |2⟩.
Figure 11: (a) Two leaked data qubits must perform an LRC but have the same primary parity qubit. (b) Arbitrarily assign data qubits to parity qubits.
Figure 12: After a qubit is measured and the syndrome bit is sent to the control processor, there is a 120ns window to determine whether to schedule an LRC or not.
Figure 13: Key modifications for ERASER+M. (a) The LSB schedules LRCs for data qubits adjacent to a parity qubit measured in |⟩. (b) The QSG modifies LRC operations upon measuring data qubit leakage.
Figure 14: LER with increasing code distance for (top) p = 10⁻³ and (bottom) p = 10⁻⁴ for 10 QEC cycles. Data is not shown for d = 11, p = 10⁻⁴ for ERASER+M and optimal LRC scheduling as it was too low to be measured accurately.
Figure 16: (top) LRC speculation accuracy, and (bottom) FPRs and FNRs for d = 11 over 10 QEC cycles. Data is not shown for optimal LRC scheduling as it has 100% speculation accuracy.
Table 1: Notation and Constants Used in Section 3.1
Table 2: Invisible Leakage Error Probability
Challenges in Exact Leakage Detection. Syndrome bit flips not only result from leakage errors but also arise from other types of errors such as decoherence, gate, and measurement errors. This makes the reliance on syndromes to detect leakage errors extremely challenging. Furthermore, unlike other errors, leakage errors do not cause syndrome measurements to flip according to a specific pattern. For example, an error on a data qubit only causes its adjacent syndromes to flip, whereas a measurement error causes the same syndrome measurement to flip across consecutive rounds. Unlike such errors, leakage errors cause random syndrome measurements to flip. For example, a leaked data qubit can cause any arbitrary combination of its four neighboring parity qubits to flip, making it difficult to detect the leakage during syndrome extraction.
Table 3: FPGA Synthesis Results
ERASER is effective due to two key reasons. First, the LSB can accurately speculate most of the leakage errors. Second, ERASER schedules a significantly lower number of LRC operations. Figure 16(a) shows the average speculation accuracy of the LSB. Both ERASER and ERASER+M correctly use LRCs about 97% of the time, whereas Always-LRCs correctly speculates about 50% of the time. Table 4 further shows the average number of LRCs used per syndrome extraction round for all four policies. Both ERASER and ERASER+M reduce the number of LRCs scheduled by 16.0× on average and by up to 17.4× in the best-case. | 2023-09-26T06:41:45.479Z | 2023-09-22T00:00:00.000 | {
"year": 2023,
"sha1": "e87addd3669aaeaa109d7981e6917aaf8269b0bb",
"oa_license": "CCBY",
"oa_url": "https://dl.acm.org/doi/pdf/10.1145/3613424.3614251",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "e87addd3669aaeaa109d7981e6917aaf8269b0bb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
234367104 | pes2o/s2orc | v3-fos-license | 3D-printing on textiles – an investigation on adhesion properties of the produced composite materials
The present paper addresses the adhesion properties of 3D objects printed on cotton textile fabrics. For practical applications of 3D prints in the textile sector, the adhesion of the printed object on the textile substrate is an important issue. In the current study, two different types of polymers are printed on cotton – polylactic acid (PLA) and polyamide 6.6 (Nylon). Altogether six cotton fabrics differing in structure, weight and thickness are evaluated. Also, the effect of washing and enzymatic desizing is investigated. For the printing parameters, the best results are obtained for elevated process temperatures, intermediate printing speed and a low Z-distance between printing head and substrate. Also, a textile treatment by washing and desizing can improve the adhesion of an afterwards applied 3D print. The presented results are quite useful for future developments of 3D printing applications on textile substrates, e.g. to implement new decorative features or protective functions.
Introduction
3D-printing is a versatile tool to produce three-dimensional objects of individual and extraordinary geometry [1][2][3]. Usually the 3D objects are produced as single independent pieces, e.g. as unique prototypes. However, 3D printing can also be done on other substrates with the aim of modifying these substrates by the added 3D object [4][5][6][7]. This application purpose is similar to traditional printing techniques. However, traditionally, mainly two-dimensional images are printed. In contrast, by 3D printing, objects with a three-dimensional structure can also be applied [8]. Textile materials are often used as substrate for printing processes. Also, 3D-printing on textiles can be used to create useful applications, e.g. printed buttons, decorative or protective elements [4,9,10]. 3D printing can also be used as a tool for the application of conductive layers and the fixation of electronic components on textile surfaces, which can find applications in smart textiles [11][12][13][14]. A very fascinating application in this field is the realization of printed electroluminescent devices on textile materials [15]. 3D printing on textiles is also discussed as a first step to realize 4D textile materials [16]. These resulting materials can be regarded as 3D prints on textiles but also as a kind of composite material, where two different types of materials are combined. For textile applications it is absolutely necessary that the 3D print adheres to the textile substrate. With this background, the aim of the current paper is to investigate parameters influencing the adhesion of 3D prints on cotton fabrics. Textiles from cotton are chosen because cotton is the most used natural fiber in the textile sector. After polyester fibers, cotton is the second most used fiber globally.
Earlier studies focus on a broad range of parameters influencing the success of 3D printing on textile substrates. The precoating of fabrics from cotton, linen or polyester can enhance the adhesion to an afterwards applied 3D printed object [17,18]. However, this technique requires an additional coating process before 3D printing is performed, which can of course be a disadvantage for transfer to commercial applications. Also, the influence of the distance between printing nozzle and textile substrate (Z-distance) has been investigated [19]. A good overview of PLA 3D printed objects and their adhesion to a wide variety of different textiles made from cotton, polyester and acrylics is given by Mpofu et al. [20]. These authors use a reasonable regression method to compare a large number of different textile materials. However, the effect of washing or any other kind of pretreatment is not in the scope of their investigations. An intensive investigation of PLA and Nylon 3D printed objects and their adhesion to different types of fabrics from synthetic fibers is reported by Sanatgar et al. [21].
In contrast to these earlier studies, the present investigations compare differently structured cotton fabrics with 3D printed objects from PLA and Nylon. Besides the textile parameters, printing parameters are also evaluated and reasonably optimized. A special focus is placed on the influence of washing procedures and enzymatic desizing. Recently, the short review of Kozior et al. summarized the main articles, findings and developments in the field of 3D printing on textiles with a special view on the adhesion properties of the 3D printed object [22]. In fact, due to the manifold types of materials, printers and techniques to measure the adhesion, it is quite difficult to unify the results from different research groups. The main difficulty appears especially with the procedure of adhesion measurement and its reproducibility. Nevertheless, Kozior et al. present the interesting approach of translating basic physical processes to the printing parameters influencing the adhesion. They especially mention wetting, diffusion and pressure. An interesting approach here is also the after-treatment, by annealing or ultrasound [22,23].
Materials
As materials for 3D printing two different types of polymer filaments are used -a polylactide acid (PLA) filament and a polyamide 6.6 (Nylon) filament. The PLA filament (polylactic acid) is supplied by the company Filamentworld (Neu-Ulm, Germany). The used filament type is named "Sonnengelb" with a diameter 1.75 mm. The Nylon filament (polyamide 6.6; PA 6.6) is supplied by the company Taulman3D (Indiana, USA). The used filament type is "Nylon-230-Filament" with a diameter of 1.75 mm. Further properties of used filaments are summarized in Table 1. Especially mentioned are the process temperatures recommended by the different suppliers. The 3D printing is performed on six different types of cotton fabrics. These fabrics and related properties are summarized in Table 2. For all textile materials a washing and enzymatic desizing procedure is done as precleaning procedure before 3D printing. Following, the performance of cotton fabrics with and without this precleaning is compared. This washing procedure is done with 2.58 kg cotton fabric washed in one liter water at 40 °C. An industrial washing machine IPSO ILG is used. As washing agent 16.5 g/L Felosan FOX (CHT,
Printing process
All 3D objects are produced using a commercially available 3D printer Orcabot XXL Pro supplied by the company Prodim International BV (Helmond, The Netherlands). The printed objects are prepared for the following force measurements. A picture of a 3D printed object used in current investigations is shown as example in Fig. 1. The weight of these objects is 3 g for both types of used filament. Also, the height is with 21 mm similar for both filaments. However, length and depth are different for PLA and Nylon material (see values given in Table 1). The used parameters for printing are different for both types of filaments and are summarized in Table 3. For first evaluation of printing parameters, orientation experiments are done (discussed in Orientation according to printing parameters). As result from these orientation experiments the parameters for the final comparison of different cotton fabrics are optimized (discussed in Comparison of textile substrates). The used z-distances in the final experiments differ for the different cotton fabrics. The values are listed in Table 2.
Analytical methods
The weight per area of the used cotton fabrics is determined according to standard DIN EN 12127, while the thickness of the fabrics is measured according to standard DIN EN ISO 5084. The surface roughness of cotton fabrics is determined using a 3D laser scanning microscope VK-X100 supplied by the company Keyence. Microscopic images of sample cross-sections are taken using a digital light microscope VHX-600 supplied by the company Keyence. The hydrophilic properties of the cotton fabrics are determined by using the TEGEWA drop test [24,25]. Here a drop of water is placed in a reproducible way on the textile and the time is measured until this drop sinks into the textile completely. A lower sink-in time stands for a higher hydrophilicity of the textile. If a drop does not sink in within 3 min, the measurement is stopped and a sink-in time of > 180 s is recorded. The TEGEWA drop test is repeated three times with each sample and the average value is calculated and given together with the standard deviation. Different testing arrangements are possible to determine the adhesion between a 3D printed object and a textile substrate [26]. Malengier et al. compared three different test arrangements – a perpendicular (vertical) test, a shear test and a peel test. Finally, these researchers conclude that all three tests can be used to determine the adhesion of 3D-prints on textiles in a suitable way [26]. In the current study a perpendicular tensile test is used. Disadvantageous for this test arrangement is the more time-intensive production of testing samples due to the larger size of the prepared 3D printed objects. An example of a printed testing object is shown in Fig. 1. These objects are printed on textile substrates with a size of 8 cm × 20 cm. The adhesion is tested by using force measurements during separation with a machine Zwick 1455 (Zwick/Roell GmbH, Ulm, Germany). Before measurement, all samples are conditioned in standard conditions (21 °C; 65% humidity) for one day. For measurement, textile and 3D object are placed in the device as shown in Fig. 2. The force measurement is recorded as shown in Fig. 3. All measurements are repeated with three independently and similarly prepared 3D printed objects. The maximum force determined in this separation experiment is recorded and set as the adhesion between the 3D printed object and the textile substrate. Finally, the average of the three measurements is calculated and shown in the following figures as the adhesion. The error bars shown in the graphs are related to the differences between the three performed measurements.
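As an illustration of how the reported adhesion values and error bars follow from the three repeated force measurements, a small NumPy sketch is shown below; the force curves are made-up numbers, not measured data.

import numpy as np

# Each array is one recorded force curve (in N) from a separation test; values are illustrative.
force_curves = [np.array([0.0, 3.1, 7.8, 12.4, 9.0, 1.2]),
                np.array([0.0, 2.9, 8.3, 11.7, 10.1, 0.8]),
                np.array([0.0, 3.4, 7.1, 13.0, 8.5, 1.0])]

max_forces = [curve.max() for curve in force_curves]   # maximum separation force per repeat
adhesion = np.mean(max_forces)                          # reported adhesion value
spread = np.std(max_forces, ddof=1)                     # error bar between the three repeats
print(f"adhesion = {adhesion:.1f} N ± {spread:.1f} N")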
Results and discussion
This section is divided in two parts. In the first part, orienting experiments are done to evaluate for each printing parameter an optimal setting. For these orienting experiments only two different types of fabrics are used, the woven fabric F2 and the knitted fabric F6. In the second part, the printing parameters are set based on evaluation in the first part and the application on all six different types of cotton fabrics is investigated. Here, the optimal properties for the substrate parameters are evaluated.
Orientation according to printing parameters
Evaluated are here four different parameters -the Z-distance between printing head and printing table, the printing speed of first applied layer, the extrusion temperature and the temperature of the printing table.
For experiments with PLA-filament, the Z-distance is set to 0.7 mm and for Nylon-filament this Z-distance is set to 0.5 mm. The other parameters for the orienting experiments are given in Table 3. During the following experiments, three of these four parameters are kept fixed and one parameter is modified in a range which is reasonable according to the recommendations of the suppliers of filaments and 3D-printer. These orienting experiments are done with the PLA-filament on the cotton fabric F6, which is chosen as an example for a knitted fabric. Experiments with the Nylon filament are done on the cotton fabric F2, which is chosen as an example for a woven fabric. Later in Comparison of textile substrates, both types of fabrics are used with both different types of printing filament and are compared with all six different types of cotton fabrics investigated. The evaluation to find an optimal Z-distance is done for printing with PLA-filament (Fig. 4). The distance between the printing head and the printing table (Z-distance) has a significant influence on the adhesion of the PLA specimen. For small Z-distances, the adhesion remains at a constant high level (Fig. 4). This range is determined by a high plateau level. From this plateau level the adhesion decreases continuously with increasing Z-distance. Due to the thickness of the cotton fabric F6 of 0.73 mm, with low Z-distance the fabric is compressed by the printing head during the application of the first printed layer. By this compression, the PLA polymer is directly fed into the fabric structure. The microscopic images shown in Fig. 5 give a good view of how the Z-distance influences the interpenetration of the cotton fabric by the printed PLA. The yellow colored PLA is clearly distinguished from the white cotton fabric. For the shown samples, two PLA layers are printed on the cotton and a cross-section is made. The printing at two different Z-distances (0.5 mm and 0.8 mm) is compared. It is clearly seen that the smaller Z-distance leads to a stronger interpenetration of the coated cotton fabric by the PLA print. An earlier study investigating the influence of the Z-distance intensively reports the same decrease in adhesion as a function of an increasing Z-distance [19]. However, this earlier study does not show a clear plateau value for small Z-distances. Nevertheless, the finding of a plateau value is reasonable if a maximum interpenetration of the cotton by the printed PLA can be assumed. In that case, a lower Z-distance cannot improve the interpenetration further. Based on these results, the Z-distance is set to low values during the following experiments presented in Comparison of textile substrates. The chosen Z-distance depends on the thickness of the different cotton fabrics (Table 2). Nevertheless, the set-up of the printer allows only a minimum Z-distance of 0.2 mm. By increasing the printing speed for the first applied layer, the adhesion decreases in a nearly linear manner (Fig. 6). Compared to the influence of the Z-distance, the effect of the printing speed on the adhesion is weak. It is also weak compared to the later discussed process parameters. However, it should be clear that the printing speed of the first layer should be moderately slow. For the following comparison experiments this printing speed is therefore set to 1.0 m/min for the Nylon-filament and 0.8 m/min for printing with PLA-filament.
The effects of two different process temperatures are evaluated – the temperature of the printing table (printing ground) and the temperature of the printed filament (extrusion temperature). The temperatures chosen for the current investigations are according to the recommended process temperatures of the suppliers (Table 1). Due to these recommendations, for the investigated Nylon filament, temperatures in a higher range are evaluated (Figs. 7 and 8). There is a clear correlation observed for both process temperatures. The adhesion increases with increasing temperature. However, the effect of the filament temperature is more significant compared to the temperature of the printing table. It can be assumed that higher temperatures lead to a more fluid polymer melt which is able to penetrate the treated cotton fabric more deeply, so the interpenetration is improved [21]. However, due to the thermal stability of the filament polymers, the extrusion temperature cannot be elevated without limit. For the following comparative experiments, the following extrusion temperatures are chosen – 220 °C for PLA- and 250 °C for Nylon-filament. These temperature settings are higher compared to the process recommendations of the filament suppliers but still significantly below the given decomposition temperature.
Comparison of textile substrates
The experiments on six different cotton fabrics are done with the same printing parameters. Properties of the textile substrates are reported in Table 2. The used printing parameters are shown in Table 3. These printing parameters are chosen as a result of the above described orientating experiments. Only the Z-distance is set individually for each type of cotton fabric depending on its fabric thickness (Table 2). Actually, the aim is to evaluate which textile structure and properties enhance the adhesion of a 3D printed object on a textile surface. Besides the grey fabrics, the printing results on washed cotton fabrics are also evaluated. A summary of the adhesion of 3D-objects on the different cotton fabrics is shown in Fig. 9. In this figure, the printing results from PLA- and Nylon-filaments are also compared. Figure 10 shows photographs of PLA objects after their separation from different cotton fabrics during the force measurements. In summary, two different cases have to be distinguished. In the first case, there is a clear separation of the PLA object from the cotton fabric, without visible damage of the cotton fabric. Here, the main force for separation can be clearly assigned to the adhesion between the PLA object and the cotton substrate. In the second case, there is, in addition to this separation, also a destruction of the cotton fabric by the applied mechanical force. Here, the measured force for separation is related to two different material properties – the adhesion of the 3D object to the cotton substrate and the mechanical stability of this cotton fabric. Especially for the fabrics F5 and F6, the delamination of the textiles happens only partly and the textile is damaged, so no further delamination occurs. For this case, the maximum separation force cannot be assigned simply to the adhesion alone. It is instead a measurement of the mechanical stability of the fabric, which is obviously lower than the adhesion of this fabric to the printed PLA object. It could be estimated that a measurement with a textile of higher mechanical stability would lead to higher determined adhesion values, so the adhesion of the 3D object to the textile substrate is higher than the measured one. However, setting the measured adhesion in relation to the detached area and presenting normalized data could lead to misleading numbers, because the measured force is not only related to the adhesion between 3D object and textile but also to the force needed for textile destruction. As seen in Fig. 10, only for the fabrics F1, F2 and F4 is a separation of the PLA objects from cotton observed without damaging of the cotton. Only for these three fabrics is the discussion of the adhesion properties meaningful.
The determined adhesion for these samples increases with the thickness of the textile substrate and the weight per area of the textile substrate (Figs. 12 and 13). This result can be explained by the determined adhesion as a function of the Z-distance between printing head and printing table, as earlier discussed in Orientation according to printing parameters. The fabric F1 has the smallest thickness of only 0.2 mm and the minimum Z-distance possible by the printer is also 0.2 mm. In this case the printed filament is simply laid on the textile fabric. In case of the fabric F2 with the bigger thickness of 0.6 mm, the Z-distance during the printing is set to 0.3 mm, so the fabric is compressed during the printing by 0.3 mm. By this compression the printed filament is forced into the textile structure and not simply laid down on the textile surface. Please compare here also Fig. 5. The stronger compression of thicker textile substrates during printing leads to a stronger interpenetration of the coated cotton fabric by the PLA print. An earlier investigation of this issue, which varied the Z-distance while printing on the same type of textile, led to similar results [19].
For all three of these fabrics (F1, F2 and F4), an increase in adhesion is determined if the fabrics are washed before the 3D print is applied (Figs. 12 and 13). By washing, impurities are removed from the cotton surface. Also, by the applied enzymatic desizing, starch is removed from the cotton. The cleaning of the cotton surface by washing leads to a more hydrophilic cotton surface, as determined by lower sink-in times during the TEGEWA drop test (Table 2). Printing on such clean substrates leads to better adhesion, probably because of the better and more direct contact between the PLA and the cotton surface. This result of increasing adhesion for washed textile substrates applies to the material combination investigated here and is not a general statement. For other material combinations, a negative impact of washing on the adhesion could also be possible.
In contrast, Mpofu et al. report a different behavior for PLA prints on cotton fabrics, with a decreased adhesion in case of washing the fabric before printing [27]. However, in their study the washing is only performed at 40 °C and no enzymatic desizing is done. Probably, especially the desizing to remove the starch from the cotton surface is important for improving the adhesion of a PLA print. Different pretreatment and cleaning procedures for cotton fabrics before a PLA print are investigated and reported by Kozior et al. [28]. They found a slightly increased adhesion in case of a pretreatment by washing. Unfortunately, no description of the washing procedure, agent, temperature or kind of desizing procedure or agent is given in this reference [28].
If a comparison between the three types of fabrics and their adhesion to PLA has to be made, the ranking is F2 > F4 > F1 (Fig. 9). This ranking is clearly related to the two fabric properties, weight and thickness. A fabric with higher weight and thickness shows better adhesion properties to an applied PLA object compared to fabrics of lower weight and thickness.
Photographs of printed Nylon objects after separation experiments from cotton are compared in Fig. 11. With the Nylon objects, only for the two cotton fabrics F2 and F4 is a separation of Nylon object and cotton observed without additional damage to the cotton fabric. The adhesion intensity for Nylon objects can be ranked here F2 > F4, which is the same order as observed for the PLA objects. With higher fabric weight and thickness, the adhesion to the 3D printed Nylon object is increased (Figs. 12 and 13). Also, for the Nylon 3D prints a previous cleaning of the fabric by washing enhances the adhesion to the afterwards printed Nylon object (Figs. 12 and 13). By view on the adhesion results of fabrics F2 and F4, the performance of PLA and Nylon prints can also be compared. For both cotton fabrics, the adhesion to the Nylon print is significantly stronger compared to the applied PLA print. For this result, there are mainly two reasons – one related to process parameters and the other related to the polymer structure. The extrusion temperature for Nylon is set to 250 °C and is therefore higher than the extrusion temperature for PLA (220 °C). The higher extrusion temperature for Nylon compared to the temperature for PLA is related to the higher decomposition temperature T D for Nylon (Nylon T D = 390 °C; PLA T D = 300 °C) [29]. The effect of an increased extrusion temperature leading to an increased adhesion was also described in Orientation according to printing parameters, where the printing parameters are evaluated. An application at higher temperature can lead to a more fluid polymer melt, which has a better ability to penetrate the structure of the treated cotton fabrics. Additionally, cotton is a cellulosic fiber containing many hydroxy groups and exhibiting a strong hydrophilicity. The strong hydrophilic properties of the cotton substrates are also documented by the short sink-in times determined by the TEGEWA drop test (Table 2). In contrast, PLA is a hydrophobic polymer. Nylon is less hydrophobic compared to PLA and could for this reason have a better adhesion to the hydrophilic cotton substrate. The different hydrophobicity of both polymers is shown by different contact angle values against water – for Nylon 70° and for PLA 77° [30,31]. In fact, it can be supposed that a less hydrophobic polymer such as Nylon has a higher affinity to the hydrophilic cotton fabric – in comparison to the lower adhesion of the hydrophobic PLA print.
Conclusions
It is possible to apply 3D printed objects on cotton textiles with strong adhesion between both materials. The intensity of adhesion depends on factors from three different categories -the textile properties, the process parameters for printing and the type of printed polymer. In case of textile material, the adhesion increases with fabric weight per area and thickness. An additional improvement can be achieved, if a precleaning of the textile is performed by washing and desizing. In case of printing process parameters, a strong influence is related to the printing temperatures. Higher process temperatures lead to stronger adhesion. Beside the temperature, also the Z-distance between printing head and table and the printing speed of the first applied layer are important parameters. Z-distance and printing speed should be small enough to reach sufficient adhesion. In case of the material composition of the printed object, it can be assumed that materials which enable higher process temperatures have a certain advantage. Also, the printing on hydrophilic textile substrates like cotton should be done with polymers of lower hydrophobicity.
Finally, it should be stated that the presented comparative study supports a whole view on a broad assemble of different parameters and influences. For this, it can be a helpful tool for practical applications developing 3D printed objects on textile materials made from cotton. | 2021-05-12T14:06:12.614Z | 2021-05-11T00:00:00.000 | {
"year": 2021,
"sha1": "420b9403177ba12aae15b5ac1cf32ece3a95ea1a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10965-021-02567-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "420b9403177ba12aae15b5ac1cf32ece3a95ea1a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
265167262 | pes2o/s2orc | v3-fos-license | Upcycling imperfect broccoli and carrots into healthy snacks using an innovative 3D food printing approach
Abstract Vegetables are healthy foods with nutritional benefits; however, nearly one‐third of the world's vegetables are lost each year, and some of the losses happen due to the imperfect shape of the vegetables. In this study, imperfect vegetables (i.e., broccoli and carrots) were upcycled into freeze‐dried powders to improve their shelf‐life before they were formed into food inks for 3D printing. The rheology of the food inks, color analysis of the uncooked and cooked designs, and texture analysis of the cooked designs were determined. The inks with 50% and 75% vegetables provided the best printability and shape fidelity. 3D printing at these conditions retained a volume comparable to the digital file (14.4 and 14.3 cm3 vs. 14.6 cm3, respectively). The control, a wheat flour‐based formulation, showed the lowest level of stability after 3D printing. The viscosity results showed that all the food inks displayed shear‐thinning behavior, with broccoli having the greatest effect on viscosity. There was a significant color difference between uncooked and cooked samples, as well as between different formulations. The hardness of the baked 3D‐printed samples was affected by the type and content of vegetable powders, where carrot‐based snacks were notably harder than snacks containing broccoli. Overall, the results show that 3D food printing can be potentially used to reduce the loss and waste of imperfect vegetables.
in this case, can be upcycled using 3D food printing technology.
Imperfect vegetables can be utilized as an alternative ingredient in 3D-printed foods if the final food ink meets acceptable extrusion rheological parameters.
3D printing, a way of additive manufacturing, also referred to as digital fabrication technology, is a quickly emerging technology with many capabilities. The adoption of this technology poses many advantages, such as increased production speed, lower production costs, customer customization, decreased need for global transport, and decreased distribution time and cost (Shahrubudin et al., 2019).
Extrusion-based 3D printing is currently the most applicable approach for the food industry. The food "inks" are extruded through a nozzle in layers to form a 3D food product. There are three different potential forces used for extrusion-based technologies, including screw, piston, or compressed air. The extrusion-based 3D printing technology has been shown to work for a wide range of food materials, including chocolate, custard creams, pastes, sugar candies, starch, and alginate-pectin (Ahmadzadeh & Ubeyitogullari, 2022, 2023; Outrequin et al., 2023; Rysenaer et al., 2023). The extrusion technique uses a robotic arm that moves along a surface with a cylindrical syringe that dispenses the food paste. There are certain parameters that must be considered when developing the food paste, such as the capability of the food paste to be extruded through the nozzle, the ability of the food paste to have a sufficient viscosity for the layers to stack without defects, and the resolution of the final product due to the stability and definition of the food paste (Le-Bail et al., 2020).
3D food printing technology has made its way into the food industry with the capability of impacting manufacturing processes.
The technology poses advantages for individualized nutrition, raw material utilization, and customizable product design (Ahmadzadeh et al., 2023; Shen et al., 2023; Wu et al., 2023). Some of the notable capabilities of 3D food printing include personalization, on-demand production, and, as highlighted in this research, reduction of food loss and waste (Derossi et al., 2021). 3D printing can bring products to the customer, which reduces the reliance on shipping, packaging, and distribution, relieving the food supply chain of immense pressure (Shahrubudin et al., 2019). In addition, the increased attractiveness of printed foods could reduce the resistance of children and other age groups to consume certain foods. This caters to the idea that a food product that is unappealing to the eye can be broken down and then rebuilt to improve consumer attractiveness and acceptance.
Previously, freeze-dried mango powder, in addition to flour, water, and olive oil, was used to develop a dough formulation. The goal of that research was to determine the optimum ratio of ingredients for printability (Liu et al., 2019). Derossi et al. (2020) aimed to study the porosity fraction of a similar dough filament and the ability to develop cereal snacks where the formulation was optimized for ideal extrusion-based printing (Derossi et al., 2020). Food materials are categorized as natively printable, traditionally non-printable, and alternative (Pulatsu & Lin, 2021). The alternative category comprises ingredients that become more palatable and attractive after undergoing 3D food printing, such as insect powders and algal components.
Vegetables are traditionally considered non-printable materials, considering their high moisture content and lack of suitable mechanisms, such as gelation, agglomeration, or solidification (Pulatsu & Lin, 2021). However, if the vegetable's moisture content is decreased and combined with an ingredient capable of agglomerating, it becomes part of the alternative category. Even though several studies have investigated the 3D printability of various food components, upcycling imperfect vegetables via 3D food printing has not been fully explored. Among vegetables, carrots and broccoli are lost or wasted to a particularly great extent (FAO, 2011, 2018; Melini et al., 2020).
Therefore, the objective of this study was to determine the optimal ratio of imperfect carrot and broccoli for ideal 3D food printing. Specific objectives were to (i) optimize the 3D printing conditions for different carrot/broccoli ratios, (ii) characterize the properties of the food inks by analyzing their viscosity, and (iii) determine the printability, color, microstructure, and texture of the 3D-printed snacks.
This study aimed to develop a snack cracker that utilizes imperfect vegetables as alternative ingredients that add value both nutritionally and economically.
| Materials
Imperfect carrots and broccoli were provided by Taylor Farms. All-purpose wheat flour (with 76.7% (w/w) carbohydrate and 10.0% (w/w) protein contents based on the manufacturer's specifications), extra virgin olive oil, and sodium chloride were all obtained from a local grocery store.
| Vegetable powder preparation
The imperfect vegetables were refrigerated for no more than 2 days before their processing steps began. Before blanching, the vegetables were sliced into small pieces (~4 cm in dimension). The vegetables were then steam-blanched in a steamer (Dixie, M-6 Steam Blancher-Cooler) at 90°C for 3 min to inactivate peroxidase (Kidmose & Martens, 1999). After blanching, the vegetables were packed into Ziploc bags and frozen for at least 24 h. Next, the samples were freeze-dried at −45°C and 7.3 Pa (LABCONCO). The produce was held at these conditions for 48 h.
The dried produce was then milled into a fine powder with a Blizer 2 food processor (Robot Coupe USA Inc.). The fine powder was then placed in 30- to 50-g increments in a Meinzer II sieve shaker (Advantech, OH, USA) for about 15 min or until most of the powder had passed through. A 60-mesh sieve with 250 μm openings was used to separate out the larger particles. The powder with particle size ≤250 μm was then placed into a Ziploc bag and stored in a refrigerator (4°C) until further use.
| Paste formation
Table 1 includes the amounts of vegetable powder, flour, salt, and olive oil that were included in each paste (Derossi et al., 2020). Once all ingredients had been added to an empty beaker, they were mixed before water was added to the desired consistency. The amount of water was adjusted between 25 and 50 mL based on the vegetable ratio to achieve the required consistency. The samples were mixed by hand with a spatula until the sample was a homogenous paste.
At this point, the paste was ready for printing, color analysis, and rheology.
| 3D food printing
An extrusion-based 3D food printer (Foodini, Natural Machines, Spain) was used to print the 3D food products. A 1.5-mm nozzle was utilized to print a flower shape with a height of 6 mm and 4 layers.
The geometry was 3D-printed a total of six times. Figure 1 shows the 3D printing process. The printing parameters, such as print speed and extrusion rate, were selected for the best printability. The printability was assessed by comparing the dimensions of the prints to those of the digital model (Figure 2).
| Post-printing processing
The 3D-printed samples were baked in a smart oven with pure light technology (Brava, CA, USA) at 177°C for approximately 8 min depending on the sample.
| Viscosity measurement
A controlled-stress rheometer (AR 2000 Rheometer; TA Instruments) was used to determine the viscosity of the samples. The rheometer was calibrated for inertia before each use, as was the gap height with the attachment in place. A 40-mm steel sand-blasted attachment was utilized. The range for shear rate was set at 0.1-100 1/s, and the measurements were recorded at 25°C.
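As a complement to the measurement procedure above, the shear-rate sweep can be summarized by fitting a power-law (Ostwald-de Waele) model. The sketch below is illustrative only: the shear-rate/viscosity values are placeholders, not data from this study, and the fitting routine simply assumes the exported sweep is available as two arrays.

```python
# Illustrative sketch: fitting a power-law model to a shear-rate/viscosity sweep.
# The numeric values are placeholders, not measurements from the study.
import numpy as np
from scipy.optimize import curve_fit

shear_rate = np.array([0.1, 1.0, 10.0, 100.0])   # 1/s
viscosity = np.array([850.0, 120.0, 18.0, 2.5])  # Pa.s (hypothetical sweep)

def apparent_viscosity(gamma_dot, K, n):
    """Power-law model: eta = K * gamma_dot**(n - 1)."""
    return K * gamma_dot ** (n - 1.0)

(K, n), _ = curve_fit(apparent_viscosity, shear_rate, viscosity, p0=(100.0, 0.3))
print(f"consistency K = {K:.1f} Pa.s^n, flow index n = {n:.2f}")
if n < 1.0:
    print("n < 1 indicates shear-thinning behaviour, favourable for extrusion")
```

A flow index n below 1 is the quantitative counterpart of the shear-thinning behaviour reported for the vegetable inks.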
| Macroscopic and microstructural properties
The images acquired after 3D printing were compared to the digital 3D model to evaluate the printing accuracy. The volume of the printed samples was also measured and compared to the volume of the 3D model. A ruler was used to establish the scale bar in the photographs.
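The volume comparison described above amounts to a simple tolerance check against the digital design. The sketch below assumes a hypothetical 5% acceptance band and uses the design volume reported later in the paper; the control and 25BC volumes shown are invented for illustration.

```python
# Minimal sketch of the printability check: measured volume vs. digital model.
# The tolerance and the non-reported sample volumes are assumptions.
DESIGN_VOLUME_CM3 = 14.6
TOLERANCE = 0.05  # accept prints within +/- 5% of the design volume

measured = {"control": 17.2, "25BC": 15.8, "50BC": 14.4, "75BC": 14.3}  # cm3

for sample, volume in measured.items():
    deviation = (volume - DESIGN_VOLUME_CM3) / DESIGN_VOLUME_CM3
    status = "OK" if abs(deviation) <= TOLERANCE else "poor shape fidelity"
    print(f"{sample}: {volume:.1f} cm3 ({deviation:+.1%}) -> {status}")
```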
The microstructure of the snacks printed with a 50% vegetable ratio and the control sample was investigated using an FEI NovaNanolab200 Dual-Beam system equipped with a 30-kV SEM FEG column and a 30-kV FIB column (FEI Company). Thin cross sections of freeze-dried 3D-printed samples were coated with a gold layer using a sputter-coater (EMITECH SC7620 Sputter Coater).
Finally, SEM imaging was performed at a 15 kV acceleration voltage and a current of 10 mA.
| Color analysis
The color of the samples was determined using a colorimeter (Minolta CR-300, Konica Minolta). The colorimeter was calibrated with a white tile (L* = 97.12, a* = +5.25, b* = −3.49) provided with the equipment before each use. The color of the samples was measured before and after baking, where L*, a*, and b* were recorded. A total of six readings were carried out for each food ink, and the results were reported as mean ± standard deviation.
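The significance of a colour change between uncooked and cooked samples is often expressed as a total colour difference computed from the L*, a*, b* readings. A minimal sketch of the CIE76 formula is given below; the two colour triplets are placeholders, not values from Table 2.

```python
# Sketch of a CIE76 colour-difference calculation from L*, a*, b* readings.
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colours (CIE76 definition)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

uncooked = (90.4, 1.2, 18.5)  # hypothetical L*, a*, b*
cooked = (65.1, 5.8, 25.0)    # hypothetical L*, a*, b*

print(f"dE*ab = {delta_e_cie76(uncooked, cooked):.1f}")
# A dE*ab above roughly 3 is usually taken as a visually perceptible difference.
```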
| Texture analysis
The cooked samples were analyzed for their texture using a TA-XT2i Texture Analyzer equipped with Exponent software (Stable Micro Systems, Ltd.). The hardness of the cooked 3D-printed samples was determined (Jia et al., 2020). A 5-kg maximum load cell was used to calibrate force before the experiments. The clearance between the flat compression plate and the base was set at 60 mm. A cylindrical probe with a diameter of 4 cm was used for the compression. The cooked samples were compressed to a 50% strain with a pre-test speed of 1.5 mm/s, a test speed of 1.0 mm/s, and a post-test speed of 1.0 mm/s.
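In a compression test like the one above, hardness is simply the peak force recorded during the 50% strain cycle. The sketch below assumes the texture analyser exports each run as a CSV file with a "force_N" column; the file name and column label are assumptions about the export format, not a documented interface.

```python
# Sketch: extract hardness (peak compressive force) from an exported force curve.
import csv

def hardness_from_curve(path):
    forces = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            forces.append(float(row["force_N"]))  # assumed column name
    return max(forces)  # hardness = maximum force during the compression cycle

# Example usage (hypothetical file name):
# print(f"hardness = {hardness_from_curve('snack_50C_rep1.csv'):.1f} N")
```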
| Statistical analysis
Statistical analysis was conducted using SPSS Statistics software.
The color and texture data were analyzed using a one-way ANOVA with Tukey's multiple comparison test at a significance level of 0.05.
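The same statistical treatment (one-way ANOVA followed by Tukey's multiple comparison at alpha = 0.05) can be reproduced outside SPSS. The sketch below uses SciPy and statsmodels instead of SPSS, with invented replicate hardness values standing in for the measured data.

```python
# Sketch of one-way ANOVA + Tukey's HSD (alpha = 0.05) with SciPy/statsmodels.
# The hardness readings are placeholders, not data from the study.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control": [210.0, 225.0, 198.0],  # hypothetical hardness readings (N)
    "50C": [345.0, 330.0, 360.0],
    "50B": [150.0, 142.0, 160.0],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([np.array(v) for v in groups.values()])
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```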
| Printability
Based on a similar formulation described previously (Derossi et al., 2020), broccoli and carrot powders were added in various amounts as a substitute for wheat flour. Wheat flour was not completely substituted because an agglomerating ingredient is still needed; vegetable powders serve best as an alternative ingredient. The food inks developed from Table 1 appeared to have similar characteristics to cookie or bread doughs. The settings for the printing were determined based on preliminary 3D printing experiments at different carrot/broccoli formulations. For successful 3D printing, it is critical to keep the ink homogenous and eliminate any air pockets in the cartridge. To evaluate printing accuracy, the geometric features of the printed objects, including volume, were assessed using digital image analysis and compared to those of the digital 3D geometry. Figure 3 depicts images of 3D-printed objects with varying ratios of wheat flour, carrot, and broccoli. In addition to visually evaluating shape accuracy and resolution, the volumes of the 3D-printed samples were estimated and compared to the volume of the digital 3D geometry (Figure 4). Higher or lower volumes in comparison to the volume of the 3D design suggested a lower build quality. The printing performance of the flour dough was inadequate (Figure 3a), as evidenced by the significantly low resolution and the inability to maintain shape in the matrix printed without incorporating vegetables, which was probably due to the viscoelastic gluten network (Masbernat et al., 2021). The printed object had a good level of shape retention when 75% carrot and broccoli (75BC) was used (Figure 4). However, the physical stability of the broccoli-flour samples extruded through the nozzle was lower than that of the carrot-containing counterpart samples, resulting in larger volumes of the 3D-printed products compared to the digital 3D geometry. This observation highlighted the significance of matrix strength in 3D printing. By combining carrot and broccoli with wheat flour, the dimensional stability was significantly improved when the 75% or 50% vegetable formulation was employed (p < .05). However, when a lower vegetable ratio (25%) was used, the paste's structure prevented the formation of a good shape after printing, as evidenced by the noticeable lines observed after printing (Figure 3d). 3D printing of the 50% and 75% carrot/broccoli-flour formulations yielded the best results (volumes of 14.4 and 14.3 cm3, respectively), with the fewest geometrical errors and volumes comparable to the digital file (14.6 cm3) (p > .05) (Figure 4).
FIGURE 2: The 3D model used for printing.
Figure 5a-c depicts the viscosity of food inks prepared with 25%, 50%, and 75% (w/w) vegetable powders, respectively. The addition of different vegetable powders significantly increased the viscosity of the ink when compared to the control at low shear rates. The printing inks demonstrated a decrease in viscosity as the shear rate increased, confirming the presence of interactions that can be broken by the application of shear stress. The shear-thinning, or pseudoplastic, behavior is correlated with the ability of inks to be easily extruded during 3D printing (Jiang et al., 2019) because the force required to print the ink decreases as shear is introduced, causing the ink to flow smoothly through the nozzle. The viscosity curves for the samples containing vegetables were similar (Figure 5). However, the results showed that broccoli had a greater effect on viscosity than carrots at shear rates >10 1/s. Specifically, 25BC, 50BC, and 75BC exhibited lower viscosities at shear rates higher than 10 1/s when compared to their carrot-only counterparts. This is likely due to the more particulate structure of the broccoli samples compared to that of the carrots. This could explain the decreased printing accuracy with increasing broccoli concentrations in the ink formulation (50B and 75B; Figure 4). It has been reported that adding carrot powder to wheat flour significantly increases the system's water absorption capacity, which could be attributed to an increase in fiber content as a result of adding or increasing the level of carrot powder.
Our findings revealed that the broccoli-containing samples required more water to reach a certain consistency than their carrot-containing counterparts, which could be explained by broccoli's higher fiber content (Ying et al., 2021). Dried broccoli particles have been shown to swell up to 7.6 times their original size when absorbing water. The rheological behavior of dough systems made with wheat flour is considerably influenced by this swelling capacity (Ahmad et al., 2016; Silva et al., 2012). When high volume fractions are added, the system behaves as a cellular material rather than a gelled matrix, which is consistent with the behavior obtained in this study (Moelants et al., 2013; Sharma et al., 2017). The control sample, on the other hand, indicated a lower shear effect than the other samples and did not print well due to the high adhesiveness of the ink (Figure 5). The characterization of the flow behavior of the vegetable inks is consistent with food inks developed from spinach and kale leaf purees, where the purees used in 3D food printing also displayed shear-thinning behavior (Pant et al., 2023).
| Microstructural properties
SEM images of the samples 3D-printed with inks containing 50% carrot and/or broccoli are shown in Figure 6. The cross-sectional structure of the samples revealed that the control sample made from wheat flour had a more granular structure compared to the samples containing vegetables. The wheat flour dough's microstructure included a protein (gluten) matrix and starch granules of varying sizes embedded into the protein matrix (Dahesh et al., 2016). The high elasticity of the gluten matrix in wheat flour dough caused the control sample to lose shape while printing. When the SEM images from 50B, 50C, and 50BC were compared to the control, noticeable differences in the microstructure of the inks were observed (Figure 6). According to the literature, fibers have a gluten dilution effect, resulting in a less porous structure in baked goods (Polaki et al., 2010). When 50% broccoli (50B) was added, an open structure was observed, showing that the gluten matrix became discontinuous and a number of starch granules leaked out, whereas the 50% carrot (50C) and 50% carrot/broccoli (50BC) samples revealed a dense structure in which the starch granules remained connected to the gluten. This could be explained by the carrot's different ratio of soluble and insoluble fibers and their effect on the gluten matrix (Li et al., 2023).
The observed morphological differences between the control and vegetable-containing samples could lead to differences in snack quality characteristics.
Figure 7 depicts the 3D-printed snacks after baking, and Table 2 summarizes the color profile of the raw food inks and the cooked 3D-printed snacks. Significant differences (p < .05) in the color parameters of the snacks with different formulations were noted. The snacks' color differences were complementary to the distinct appearances of their vegetable-based substitutions. Among both uncooked and baked snacks, the control had the highest lightness (90.41 and 65.06, respectively), as expected, followed by the carrot-incorporated snacks. The samples demonstrated corresponding decreases in lightness values after baking, indicating slight color changes during baking. The value for redness, "a*," was highest in uncooked carrot-containing snacks and decreased significantly after cooking (p < .05). Because of the higher yellowness values "b*," the overall color of carrot-incorporated snacks was orange. The uncooked broccoli snacks had a green color, as evidenced by the negative "a*" value, indicating a green shade. These values showed that the produced snacks had colors close to the respective vegetables used. Overall, the colors of the baked samples were less intense (Table 2), which could be due to pigment conversion. However, after baking, all the snacks had an acceptable, pleasing color.
| Texture analysis
Textural properties are important factors in determining product quality as they affect human perception of the product's texture. As shown in Figure 8, the maximum force from the compression test was determined and reported as the hardness of the baked snacks. This value reflects the initial bite that a consumer would take of a food sample (Han et al., 2017). If the hardness of the snacks exceeds an optimal threshold, the flavor might be impacted, resulting in a diminished level of crispiness. When compared to the control sample, adding <75% carrot powder increased the strength of the ink and the hardness of the snack, while increasing the carrot amount to 75% decreased the hardness. The same trend was observed in broccoli-containing samples when their ratio was increased. However, there was no significant difference in hardness between the 25B and control samples (p > .05). 50B and 75B indicated significantly lower hardness compared to the control (p < .05). In addition, carrot-based snacks were considerably harder than broccoli-containing snacks (Figure 8).
The water-insoluble proteins in wheat flour are responsible for the rheological and structural properties of the dough, including elasticity and structural strength, as well as viscosity and fluidity.
Dough made from wheat flour is a soft gel with a unique network structure that is viscoelastic and extensible. According to the literature, soluble dietary fibers effectively absorb water and wrap starch granules distributed across the gluten network structure.
This function prevents many protein molecules from getting tightly entangled, preventing the formation of a cohesive spatial network.
Furthermore, the addition of soluble dietary fibers to flour dilutes the gluten protein, affecting the formation of the gluten matrix (Jia et al., 2020). For broccoli-containing samples, the snacks' hardness decreased, probably due to the reduced gluten network structure.
Additionally, when broccoli was added to the snacks, higher porosity and a more open structure were observed, as demonstrated in the SEM images (Figure 6), resulting in lower hardness. This correlation between porosity and hardness has also been observed in biscuits in a previous study (Umesha et al., 2015).
The difference between carrots and broccoli can be attributed to how their soluble and insoluble fiber contents affect the gluten matrix. According to the literature, both forms of fiber can interact with gluten proteins via various mechanisms, where the chemical reactivity of soluble and insoluble fibers is important in gluten protein aggregation (Zhou et al., 2021). Certain soluble fibers, such as pectin, are more reactive. This increased reactivity could be attributed to the number and accessibility of functional groups involved in the interaction with gluten proteins and water. Insoluble fibers with lower reactivity, such as cellulose, appear to be incorporated into the structure as fillers or physical barriers, which primarily exhibit steric effects (Zhou et al., 2021). Jia et al. (2020) investigated the effects of soluble dietary fiber on the physical properties of biscuits.
They performed texture profile analysis to analyze the texture of the products and found that adding soluble fibers reduced the hardness of the biscuits, which is consistent with the results we observed after adding broccoli and increasing the vegetable ratios in the snacks. Furthermore, the hardness (~50-450 N) measured in our study for the vegetable snacks is comparable to the hardness (~84-350 N) reported for 3D-printed cereal-based snacks developed by Derossi et al. (2021), where wheat flour-based parallelepiped-shaped objects were 3D-printed.
| CONCLUSIONS
The loss of nearly one-third of the world's vegetables, despite their sound nutritional value, could be reduced by finding a way to utilize the vegetables that have an imperfect appearance. 3D food printing technology is capable of changing the appearance of ugly vegetables into unique snack products. In this study, imperfect broccoli and carrots were freeze-dried and then turned into food-grade inks suitable for 3D printing applications. All the inks showed shear-thinning behavior, making them ideal for extrusion-based 3D food printing. The control sample had a low degree of shape integrity. As more vegetable powders were added, the printability of the inks improved.
Samples containing 25% vegetable powder revealed noticeable lines, indicating inferior resolution. On the other hand, samples with 75% vegetable powder flooded together to produce a more homogenous appearance. The volume of the sample printed with the 75% carrot/broccoli-flour formulation was comparable to the volume of the 3D digital model (14.3 cm3 vs. 14.6 cm3). Although there was a significant difference in color between raw and cooked samples, the cooked samples still exhibited an acceptable, pleasing color.
ACKNOWLEDGMENTS
We appreciate the financial support provided by the University of Arkansas Honors College Faculty Equipment and Technology Grant and the USDA National Institute of Food and Agriculture, Multistate Project NC1023, Accession number 1025907.
CONFLICT OF INTEREST STATEMENT
There are no conflicts of interest to declare.
FIGURE 5 (a-c): Viscosity of the food inks prepared with 25%, 50%, and 75% (w/w) vegetable powders, respectively.
FIGURE 7: Images of the 3D-printed snacks after baking.
Note: Means with different capital letters within the same row and color parameter and means with different lowercase letters within the same column are significantly different (p < .05). Data are given as the means ± standard deviations. | 2023-11-15T16:02:25.634Z | 2023-11-13T00:00:00.000 | {
"year": 2023,
"sha1": "a37e393a01db60878d470a5c4a434c74d419b566",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/fsn3.3820",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5572edb2f467936a5d6255d385d6624c8740f915",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
255855650 | pes2o/s2orc | v3-fos-license | The technology of tongue and hard palate contact detection: a review
The tongue and hard palate play an essential role in the production of sound during continuous speech. Appropriate tongue and hard palate contacts will ensure proper sound production. Electropalatography, also known as EPG, is a device that can be used to identify the location of the tongue and hard palate contact. It can also be used by a speech therapist to help patients who have a speech disorder. Among the group with the disease are cleft palate, Down syndrome, glossectomy, and autism patients. Besides identifying the contact location, EPG is a useful medical device that has been continuously developed based on the patient’s needs and treatment advancement. This article reviews the technology of electropalatography since the early introduction of the device. It also discusses the development process and the drawbacks of the previous EPG systems, resulting in the EPG’s upgraded system and technology. This review suggests additional features that can be useful for the future development of the EPG. The latest technology can be incorporated into the EPG system to provide a more convenient method. There are some elements to be considered in the development of EPG’s new technology that were discussed in this study. The elements are essential to provide more convenience for the patient during speech therapy. New technology can accelerate the growth of medical devices, particularly on the development of speech therapy equipment that should be based on the latest technological advancements available. Thus, the advanced EPG system suggested in this article may expand the usage of the EPG and serve as a tool to provide speech therapy treatment services and not limited to monitoring only.
Overview of EPG
Speech disorder assessment is commonly conducted using auditory perception, which is based on the anatomical and physiological knowledge of tongue movement held by speech-language therapists (SLTs). During a treatment, SLTs have to imagine the tongue movement from one position to another during continuous speech production. An experienced SLT is required to provide speech analysis using this technique [1]. Due to the nature of the auditory perception technique, the speech therapy process, particularly in some countries, is not thoroughly comprehensive because of the shortage of experienced SLTs [2]. However, in 1970 a technique known as electropalatography (EPG) was introduced to identify the location of tongue and hard palate contact.
EPG provides the recording of dynamic speech features and thus enables the detection of sound production. At the same time, the movement of the tongue and hard palate contact can be identified from the EPG patterns [3]. EPG is used to diagnose and analyse the tongue and hard palate contact patterns during continuous speech production in real-time [4]. Each consonant produced during continuous speech has its own characteristic contact pattern. The nature of the contact pattern depends on the location of the tongue and hard palate contact [5]. The location of the tongue and hard palate contact is detected using electrode sensors that are embedded in an artificial palate. These electrode sensors detect the contact and send the signal to a computer.
In addition, EPG is an established instrument used in phonetic and clinical research [6]. Several studies have proved that EPG is a highly useful tool for speech research, diagnosis, and treatment of a range of speech disorders [7]. From 1970, there were three commercial companies involved in manufacturing the artificial palate: Rion Co Ltd, Japan (Rion system), Kay Elemetrics, USA (Kay Palatometer), and Reading University (Reading system) [8]. The complexity and high production cost of the palate caused a number of palate designs to be discontinued, such as the Rion electrograph (DP-01) in 1990 [9] and the Palatometer produced by Fletcher et al. in 1988 [10]. Currently, there are three commercially available systems: (1) the SmartPalate system (CompleteSpeech), formerly known as LogoMetrix; (2) LinguaGraph (Rose Medical Solution); and (3) WinEPG (Articulate Instruments Ltd).
The SmartPalate system developed from the Kay Palatometer initially sold by Kay Elemetrics, which was discontinued after Professor Fletcher decided to build his own company. Under the LogoMetrix Corporation, research and development continued; the company later changed to CompleteSpeech, and SmartPalate became one of its products [11]. In the LinguaGraph system, the Reading palate is connected to the LinguaGraph unit, worn around the neck [8]; whereas, in the WinEPG system, the Reading palate and Articulate palate are connected to the WinEPG measurement hardware manufactured by Articulate Instruments Ltd [4]. Clinical and ongoing studies on patients have improved EPG to make it more convenient, cheaper, and more accurate. Starting from the production of flexible circuits in the 1970s, which were patented by Rion Co Ltd, the flexible circuit effectively reduced manufacturing time and cost. This flexible circuit is much more effective than the traditional Reading system and Palatometer, which are made from rigid acrylic or vacuum-formed thin acrylic sheet with the contact electrodes embedded in the acrylic resin [4]. Table 1 shows the general timeline of EPG until the latest modification. After Rion Co Ltd patented the flexible circuit board (Fig. 1a), the company focused more on improving the display device by introducing a signal-receiving electrode whose linguapalatal contact pattern is compared with a preliminarily set standard linguapalatal contact pattern for specific phonation [12]. Rion Co Ltd then extended the EPG to transmit the linguapalatal contact pattern as electromagnetic waves and display it as dynamic patterns [13].
The use of flexible devices was continued by Hardcastle et al. in 1989 [4], where a 'T'-bar cut-out in the centre was introduced, as shown in Fig. 1b. Thirty-one tracks on each side (31 tracks on the left and 31 tracks on the right) are connected separately and exit at the corner of the mouth. A double-sided copper plate was used to avoid short circuits between the tracks. An insulating layer is placed between the two copper surfaces; hence some tracks can be imprinted on both surfaces. However, Hardcastle et al. [18] later declared that there are disadvantages to this flexible EPG design. Some of the disadvantages are that the sharp pattern and design are difficult to fit in the user's mouth, and the flexible palate material is still thick and uncomfortable. Hardcastle et al. then made only some improvements in software and hardware with the recent version of the Reading system (EPG3).
The conventional Palatometer (Fig. 1c), patented by Fletcher et al. in 1978 [19], which used thin sheet material with the electrodes located by melting copper into punched holes, was improved by using a flexible circuit with electrodes, combined with a nasometer apparatus, patented in 2005. The flexible circuit contains three intercoupled lobes in a butterfly-like shape for electrode placement and is formed in a concave configuration, as shown in Fig. 1d. Table 1 summarises the general timeline of EPG development [14, 15]: in 1978, the Palatometer was developed by Fletcher in the US [17]; in 1979, Rion Co Ltd (Japan) patented a flexible insulative circuit board and an improvement in the fabrication method [16]; in 1981, Rion Co Ltd patented an EPG introducing a single signal-receiving electrode read by a single detector [12]; in 1982, Rion Co Ltd again patented their EPG by improving transmission and reception from the EPG to the display device [13]; in 1989, Hardcastle et al. introduced the T-shape cut-out with a flexible circuit design [4]; in 1991, the Reading palate was improved in hardware and software design [18]; in 2005, Fletcher produced the patented three-lobed structure [19]; and in 2007, Wrench developed and patented the Articulate-style palate [3, 20]. These unique lobes make the palate suitable for all palate sizes and shapes, whether smaller or bigger. These three-lobed palates are manufactured by first taking an impression of the user, followed by constructing a stone model based on the impression. A base plate made of soft plastic material is then prepared by thermoforming over the stone model to form a palatal body. After several cutting processes and conforming the shape to the patient, a flexible printed circuit containing at least 110 sensors is attached to the base plate using an adhesive such as cyanoacrylate. Some adjustments are made to conform to the user's shape and to ensure the palate fits the user. The flexible circuit is connected to processing and display equipment by leads containing electrical connectors. Unlike the other palate designs, this three-lobed palate does not wrap the leads out from the back, but takes them directly out of the front of the flexible printed circuit. Two embodiments are prepared with a similar manufacturing process, as mentioned earlier, and the configuration is displayed in a split-screen [19]. One of the screens represents contacts from the electrode sensors by the user, while another screen is generated from computer software or another user [22]. One disadvantage of this design is that the ready-to-use flexible circuit, with the electrodes permanently sited on the strip, does not precisely follow the shape of the user's palate [24]. In 2007, Wrench et al. [3] introduced a new palate design called the Articulate palate. The Articulate palate was made after considering many factors concerning the issues of the previous artificial palates (Kay Palatometer and Reading palate), such as contact layout, contact size, material, safety, fit adjustment, lead length, and exit point. The Articulate palate has attracted interest by improving the quality of speech through a thin palate, and placing contacts at the velar region makes it more reliable. However, some design factors require improvement. They found that removing the Articulate palate is challenging, since the design is focused on fitting the palate in place and avoiding any movement during pronunciation. The design also should be investigated further as the saliva pooling problem still occurs. Birkholz et al.
[25] designed an artificial palate by combining and modifying both the Reading and Kay Palatometer palate designs, using Adam clasps and a cover for the teeth. The acrylic resin that covers the molars and premolars was removed and the posterior part is retained using Adam clasps, while acrylic resin is thermoformed around the incisors and canines to fix the anterior part. Therefore, the idea of this design could be further investigated for the improvement of the Articulate palate.
Importance of EPG in speech therapy
Speech-language and communication difficulties are common in subjects suffering from speech sound disorder (SSD), auditory processing disorder, Down syndrome, cleft palate, and glossectomy. Some of the cases are due to the anatomy of the tongue and the hard palate of an individual. This is likely to impact daily communication and adversely affect the quality of life [26].
SLTs traditionally used a conventional method, which is auditory-based transcription. During the treatment session, SLTs will ask the subject to produce the sound, which was then recorded. SLTs will replay the sound and teach the subject to produce the correct sound based on the place of articulation of the sound. SLTs must have the knowledge and expertise to determine the articulation place during the production of the sound [1].
EPG has been used for more than 50 years to monitor and improve the articulation patterns of SSD speakers. The effectiveness of EPG in treating an SSD speaker is proven by a study conducted by Carter and Edwards (2004) [27]. There were ten speakers from various backgrounds and ages selected for this study. The finding indicated that the treatment performed for 10 SSD speakers using EPG showed improved sound production [28].
Besides, EPG also helps to improve the production of sound among speakers with hearing disorders. A study was conducted to investigate the use of EPG as a therapy tool in enhancing the production of speech for a patient who has a cochlear implant [29, 30]. The finding shows a positive result, where there was an improvement in producing velar plosive consonants after 5 weeks of treatment. Placing more emphasis on this, Bacsfalvi et al. [31] conducted a study to examine the use of EPG for adolescents with hearing impairment. Three subjects diagnosed with severe hearing impairment were chosen to enrol in the program. The remediation treatment was set for 6 weeks. Their results showed an improvement in articulation influenced by the treatment methodology. Another type of clinical condition associated with a speech disorder is Down syndrome. Down syndrome is a disorder involving genetic malformation. People with Down syndrome have problems with physical growth, and their speech disorders are mainly attributed to the large tongue size [32]. In 1993, a study was conducted to investigate the differences in tongue and hard palate contact patterns in the speech of young adults with Down syndrome [33]. The finding states that people with Down syndrome have unclear articulation during continuous speech. They also have difficulties with phonological delay.
In a similar study, EPG was used as a therapy to reduce speech problems for Down syndrome patients. Besides, EPG was also used to examine the pattern of the consonants [s] and [t] for Down syndrome patients [34,35]. The outcome indicates that many errors occur during the production of these consonants. Thus, treatment was planned for the patients involving several sessions to obtain a regular pattern for the production of the consonants /s/ and /t/.
On the other hand, cleft palate patients also used EPG as a treatment for articulation. Cleft palate constitutes a significant health problem. According to Hart et al. [36], between 100 and 500 children born each day all over the world are diagnosed with cleft palate. Cleft palate affects the production of speech, and it will become a significant problem during communication. EPG is also used as a therapeutic instrument in cleft palate patients in several languages, including Japanese [37][38][39], Cantonese [40], and Swedish [41]. Meanwhile, speech therapy for the cleft patient not only focuses on children, but also on adults. In his study, Fletcher [42] proved that there are differences in speech production and oral motor skill in an adult without palatal corrective surgery.
SmartPalate system (CompleteSpeech)
The founder of CompleteSpeech, Prof. Samuel G. Fletcher, commercialized the SmartPalate as a tool to display tongue-to-palate contact in real-time with the objective of practising target sounds. This artificial palate consists of three main components: a mouthpiece, a DataLink processor, and the SmartPalate software (Fig. 2). A customized mouthpiece contains 124 gold-plated electrodes that capture the tongue placement and send the tongue-to-palate contact information to the DataLink processor. The SmartPalate software converts the information processed by the DataLink and displays the result as a visual representation on the computer screen. The SmartPalate is suitable for users ranging from children over 8 years old to adults, to overcome speech barriers such as the pronunciation of the r, l, ch, j, t, d, s, z, k, g, and sh sounds.
LinguaGraph (Rose Medical Solution)
LinguaGraph is a commercial EPG system that is user-friendly and suitable for clinical and home-based therapy. Much like the SmartPalate, LinguaGraph also has three components, which are the EPG palate, the LinguaGraph unit, and a computer (Fig. 3). The Reading palate is prepared by taking an impression of the speaker's upper teeth, typically by a qualified dentist. The impression is then sent either to Rose Medical Solution to prepare the EPG palate or to orthodontic technologists using the electropalatography (EPG) palate kit, which is sold separately by Rose Medical Solution complete with an instruction manual. The EPG palate is then connected to the LinguaGraph unit to monitor and improve articulation patterns, with an adjustable controller to vary the sensitivity of the device before the pattern is displayed on a screen.
WinEPG (Articulate Instruments Ltd)
Another commercially manufactured EPG system is called WinEPG. WinEPG is a Microsoft Windows version of an EPG system. The components of WinEPG are more complicated and consist of more than three elements, as shown in Fig. 4. The palate electrodes are scanned by the multiplexer, which sends amplified signals to the main EPG unit. Peaks are detected and compared with a pre-set reference level to identify the contacts made. The signal from the EPG unit is transferred to the computer and displayed using Articulate Assistant 1.18™. Articulate Assistant 1.18™ makes it possible to analyse the data in real-time.
Tongue position tracking device (TPTD)
TPTD is a recent research development of EPG intended to overcome the problems of the previous artificial palates, especially the wire connection from the electrodes exiting through the mouth to the display instrument, as shown in Fig. 5. Several attempts have been made in previous designs to ensure that users feel comfortable and are able to pronounce naturally during treatment. One of the efforts to solve the problems is wrapping the wire directly out to the front instead of out from the back of the mouth, and hanging the wire to avoid pressing tension on the lead. Moreover, the diameter or thickness of the tube wire has also been improved using a flexible circuit imprinted with the electrode circuit. Yet there are still barriers from the wires: they should be long enough, and they make it difficult for the user to move freely throughout the treatment process. Thus, it would be best to design an artificial palate that transmits information wirelessly. A combination of EPG and glossometry enables TPTD to determine contact patterns and measure the distance between the tongue and hard palate. This is an advantage over EPG, as it offers a method to track any tongue motion that does not involve contact with the palate when pronouncing phonemes. The TPTD consists of three main components, which are the retainer housing layer, the sensors, and the electronics. The retainer is made using a specific impression material (Sildent™ Putty) to produce the user's impression. Unlike the previous palate models, the TPTD palate model uses resin (urethane resin) instead of stone and only takes about 1 h to cure. A 0.8-mm splint material (acrylic thermoplastic) is thermoformed to follow the model shape. The electronic module and electrodes are positioned, and another splint material is thermoformed over them, sealing all the electronics and sensors inside the retainer.
Pressure mapping with textile sensors
Baldoli et al. [45] developed an innovative EPG palate. The latest innovation incorporated textile-based sensing technologies. The prototype comprises 62 piezo-resistive textile sensors to detect the tongue and hard palate contact. Besides, the palate was fixed using a glue that is commonly employed as a denture adhesive. This system used Articulate Assistant™ 1.18 as the software to analyse the contact pattern. The reading data were obtained by comparing the pressure sensitivity of the sensors. The findings show that soft sensing effectively measures the tongue and hard palate contact during continuous speech.
Articles selected to compose this review were gathered from the Scopus, PubMed, IEEE Xplore, and Science Direct databases. A total of 48 articles and three websites published between 1976 and 2020 were selected for this study. The articles mainly discussed the EPG's design and development and focused on EPG applications for speech therapy treatment. Studies using EPG only to examine speech production in particular languages were excluded from this study. After reviewing the articles, the future technology of EPG is proposed in this study.
Future technology of EPG
The design of the future EPG is suggested to be flexible, portable, and user-friendly. The newly proposed EPG technology is expected to improve on the drawbacks of the current EPG, especially for patients who have limited physical movement. EPG is a tool used to help speech therapy processes among patients with various backgrounds, such as Down syndrome, paralysis, and autism. In late 2018, Zin et al. conducted a study to detect the tongue and hard palate contact for paralysed patients [46,47]. The researchers faced difficulties, particularly during the recording procedure. The EPG used for the research was considered not user-friendly for patients who had limited physical movement. Therefore, they proposed a more advanced EPG technology that may prevent the patient from being bogged down by the wires, while allowing them to record continuous speech patterns in a very comfortable condition. A smart system that incorporates the EPG enables the user to explore its benefits anywhere and everywhere. Additionally, the artificial palate can be customized to match the patient's needs for therapy, learning, or monitoring procedures. EPG has now become well established in many experimental phonetic laboratories and speech clinics. Most of the problems associated with the technique in the early stages of EPG development include unwanted capacitance effects between closely bunched wires, saliva bridging of adjacent electrodes, and finding a suitable material for the palate [3]. Further improvement of the EPG device is ongoing to ensure the system is safe, robust, and incorporates the latest or new technology. EPG can be upgraded to incorporate current technologies in the medical field, such as Bluetooth technology, telemedicine, and mobile applications.
The primary purpose of utilizing the latest technology is to transfer data from patient to computer or from one computer to another. It was estimated that, by 2020, more than 50 billion medical devices would be connected to the internet using wireless technology. This is because wireless technology can reduce operating costs and offer low data rates at minimal power for wireless sensor and actuator network applications [48].
A new architecture for the technological development of EPG is also proposed in this paper. The architecture of the advanced EPG system can be classified into two parts: the development of the hardware system and the development of the software system. Additionally, the hardware consists of the EPG palate and an embedded electronic circuit.
During data collection, the subject will be asked to place the EPG palate on the upper palate inside the mouth. The electrodes will be scanned by the electronic circuit, and the presence of an electrode signal will identify tongue-hard palate contact. The electrode signals will be transmitted to the computer for display, storage, and analysis via Bluetooth communication. The electrode signals will be processed in the computer and presented as meaningful contact pattern data. The tongue and hard palate contact will be displayed on the computer in real-time and thus provide real-time feedback and analysis to the user and speech therapist. Figure 6 shows the diagram of the advanced technology of EPG.
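To make the scan-and-transmit step above concrete, a MicroPython-style firmware sketch is given below. It is only an illustration of the idea, not a specification of the proposed hardware: the GPIO pin numbers, the number of electrodes shown, the 0xAA frame marker, and the frame rate are all assumptions.

```python
# MicroPython-style sketch: scan electrode inputs, pack contacts into a bitmask,
# and stream frames over a UART bridged to a Bluetooth serial module.
from machine import Pin, UART
import time

ELECTRODE_PINS = [2, 3, 4, 5]          # hypothetical GPIO numbers, one per electrode
electrodes = [Pin(p, Pin.IN, Pin.PULL_DOWN) for p in ELECTRODE_PINS]
bt_uart = UART(1, baudrate=115200)     # UART wired to a Bluetooth serial module

def scan_contacts():
    """Return a bitmask with one bit per electrode (1 = tongue contact)."""
    mask = 0
    for i, pin in enumerate(electrodes):
        if pin.value():
            mask |= 1 << i
    return mask

while True:
    frame = bytes([0xAA]) + scan_contacts().to_bytes(4, "little")  # 0xAA = start marker
    bt_uart.write(frame)
    time.sleep_ms(10)                  # roughly 100 contact frames per second
```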
In the Reading system, the Reading palate is soldered to a connector board, which is plugged into a board reader called the multiplexer in the EPG3 system. The multiplexer is plugged into the main unit, and data are transferred to a computer. The multiplexer hangs around the subject's neck during data recording, and the subject cannot move freely [4]. Additionally, the hanging multiplexer may cause discomfort to the patients. The new technology of EPG will tackle this issue by adopting wireless technology to transfer the contact signal data during articulation in real-time. The EPG sensors are soldered to the microcontroller board and attached to a headset. There will be no wire hanging around the patient's neck during the speech recording. Figure 7 shows the parts of the advanced technology of the EPG system. Besides adopting wireless technology, another suggestion is improving the artificial palate. As is known, an artificial palate with embedded electrodes is used to detect the contact between the tongue and hard palate during articulation. However, an advanced EPG palate can be upgraded in terms of the contact sensor design, for example, the quantity, size, and location of the silver electrodes.
This study suggests a new design of artificial palate, which consists of 30 silver electrodes in five horizontal rows. The first row contains four electrodes, the second to fourth rows include six electrodes each, and the last row comprises eight electrodes. The first row starts right after the incisor teeth, and it is known as the alveolar area. The post-alveolar area is behind the bicuspid teeth. Meanwhile, the palatal area consists of the third and fourth rows, which start between the first and second permanent molar teeth. The fifth row is the velar area, which begins at the third permanent molar teeth. More importantly, the proposed design can be customized based on the needs of the user and the function of the EPG palate, whether for therapy, monitoring, or learning. Hardcastle et al. [4] stated that the number of electrodes depends on the study's objective; for example, in their research involving children with a smaller palate size, one or two rows of electrodes were eliminated. This statement was also supported by Flege et al. [49], who arranged 64 electrodes in six rows for the production of /s/ and /t/. Besides, previous artificial palates, such as the Reading palate, consist of 62 silver electrodes [4], the Rion system consists of 63 gold electrodes [3], and the Kay Palatometer system consists of 100 gold electrodes [10]. Hence, it is highlighted that the articulation places are more critical than the number of electrodes placed on the artificial palate.
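The proposed 30-electrode layout (rows of 4, 6, 6, 6, and 8 electrodes, from alveolar to velar) can be represented directly as a small data structure and rendered as a text contact pattern. The sketch below is only an illustration of that layout; the row names follow the description above, and the example contact pattern is invented.

```python
# Sketch: represent the proposed 30-electrode layout and render a contact frame.
ROW_LAYOUT = [
    ("alveolar", 4),
    ("post-alveolar", 6),
    ("palatal-1", 6),
    ("palatal-2", 6),
    ("velar", 8),
]

def render_pattern(contact_bits):
    """contact_bits: iterable of 30 booleans ordered row by row, front to back."""
    bits = list(contact_bits)
    assert len(bits) == sum(count for _, count in ROW_LAYOUT)
    index = 0
    for name, count in ROW_LAYOUT:
        row = "".join("O" if bits[index + i] else "." for i in range(count))
        print(f"{name:>14}: {row.center(8)}")
        index += count

# Example: contact only on the alveolar row, e.g. an alveolar stop such as /t/
render_pattern([True] * 4 + [False] * 26)
```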
In addition, the new design of the artificial palate suggests a new electrode size: the diameter of each electrode is 4 mm, and it is soldered to a 20-cm length of copper wire. The purpose of this enlargement is to ensure that the electrodes are able to fill up the palate zones. Hardcastle et al. [18] stated in their study that the average manufacturing time for the Reading palate, from dispatch of the plaster impression to delivery of the Reading palate, was estimated at eight days per palate. Thus, by reducing the quantity of the materials, the manufacturing cost will also be reduced.
Fabrication method
The fabrication of the EPG's advanced technology is divided into three parts, which are the fabrication of the EPG palate, the development of the electronic circuitry, and the development of the software interface. The fabrication of the advanced-technology EPG palate is similar to the manufacturing of a Reading palate, starting from taking an upper arch impression to adapting the EPG palate onto the hard palate, as shown in Fig. 8. The advanced EPG electronics function to detect the tongue and hard palate contact during speech and transfer the signal to the computer through Bluetooth communication. The circuit consists of electronic components such as a microcontroller, capacitors, silver electrodes, a megahertz crystal oscillator, and a Bluetooth module. The microcontroller is the most important electronic component, acting as the brain of the electronic circuit. The microcontroller defines the input pins and the output signals. The microcontroller processes the analogue signals received by each of the electrodes embedded on the artificial palate. Other electronic components, such as a capacitor, a crystal oscillator, and several others, are required to support the microcontroller.
The software component is categorized into the front-end and back-end. The front-end includes a graphical user interface (GUI) that allows users to interact with the advanced EPG system through a graphical icon. GUI also enables data to be displayed in real-time. Besides, the GUI is designed to record the contact during the production, save the contact data to the computer, and allows contact pattern data to be further analysed.
The tongue and hard palate contact pattern data from the microcontroller are transferred to a computer through a Bluetooth module. The GUI can also be designed to access the contact pattern data using a COM port connection. The back-end of the software is the program written to process the signals into meaningful data. The program may also include signal processing options such as filtering, amplification, and standard referencing. The program can also be used to run analyses and commands based on user selections in the GUI.
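A minimal back-end of the kind described above only needs to read frames from the Bluetooth virtual COM port and decode them into a contact mask. The sketch below uses pyserial; the port name, baud rate, and frame format (0xAA marker followed by a 4-byte mask) are assumptions chosen to match the firmware sketch earlier, not a defined protocol of the proposed system.

```python
# Sketch of the host-side back-end: read contact frames over a serial (COM) port
# exposed by the Bluetooth link and decode them into a 30-bit contact mask.
import serial  # pyserial

def read_frames(port="COM5", baudrate=115200):
    with serial.Serial(port, baudrate, timeout=1) as link:
        while True:
            if link.read(1) != b"\xAA":      # resynchronise on the start marker
                continue
            payload = link.read(4)
            if len(payload) == 4:
                yield int.from_bytes(payload, "little")

# Example usage: print which electrode indices are contacted in each frame
# for mask in read_frames():
#     contacts = [i for i in range(30) if mask & (1 << i)]
#     print(contacts)
```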
Important characteristics
Based on the selected studies, several important characteristics should be incorporated in the design and development of new EPG technology. It is vital to ensure that EPG's future technology becomes more convenient for patients with limited movement capability. These suggestions were highlighted by Wrench et al. [3], and some match the report by Hardcastle et al. [4]. The requirements and suggestions are: 1. Thickness: a thin palate allows the user to pronounce speech normally, so that an accurate result is obtained without any interference. 2. The number of contacts: the number of contacts varies from 62 to 124 electrodes.
In comparison, there are more disadvantages in producing a larger number of connections: the cost will be higher, and with dense electrode coverage it is difficult to distinguish between singleton contacts since the contact patterns are almost similar [50]. 3. Robustness: the palate must be robust enough to be used multiple times without changing shape, even though the palate is thin. 4. Contact size: the smaller the contact diameter, the more accurate the contact position; however, the conductivity will be lower. 5. Contact material: the tarnish factor is critical because the device is used in the mouth and exposed to saliva. Both gold and silver are good conductors; however, silver is preferable since the palate will be used for a short period. 6. Flexible circuit material: polyimide is usually used as the base insulative layer, while the copper conductor is often plated with either gold or silver. Silver plating is preferable since the process does not use poisonous chemicals, and silver-loaded epoxy can be used to coat the silver-plated contacts. 7. Safety: the use of nontoxic materials is essential to ensure that no chemical reaction will harm the user. Acrylic resin is the common dental base material. Acrylic resin is used to cover the flexible circuit and simultaneously provide a smooth surface and minimize sharp edges. 8. Adjustability: the artificial palate should stay fixed or accommodate small movements during the speech treatment. The shape must also be usable for various ages, especially for children, who grow during the treatment period. 9. Target patient or user: every language tends to involve different tongue movements and positions during pronunciation; therefore, the electrodes must be placed in the right spots. For example, Keating et al. suggest placing two electrodes in the middle of the front two incisors for French, Korean, and Taiwanese speakers as compared to English speakers [51], as shown in Fig. 9.
Further detailed comparison of the artificial palate between Kay Palatometer, Reading, CompleteSpeech Palatometer, and Articulate palate is shown in Table 2.
The primary purpose of developing the advanced EPG system is to provide a more convenient device for the end-user. The new EPG design must consider a few characteristics such as cost-effectiveness, comfort for the user, and material safety. Table 3 shows the features and the description of the advanced EPG system.
Conclusion
EPG has been found useful in both the diagnosis and rehabilitation of a range of speech disorders. The primary purpose of EPG is to detect the contact pattern of the tongue and hard palate. However, further improvement of the EPG device is ongoing to ensure the system is safe, robust, and incorporates the latest technology available in the market. EPG can be upgraded to incorporate current technologies in the medical field, such as Bluetooth technology, telemedicine, and mobile applications. Three additional features have been suggested for improving the EPG system in the future: safety, user-friendliness, and cost-effectiveness. However, the improvement may be expanded further and is not limited to the three features explained in this review. Combining the latest technology with the EPG system allows data to be transferred from users to therapists and vice versa. Additionally, a cloud system can also be introduced to store data and easily share medical data among therapists. Simultaneously, real-time monitoring becomes possible and hopefully will ensure effective speech treatments for the patients.
Table 3. The critical features of the advanced technology of the EPG system:
Safety: The materials used in developing the advanced EPG are nontoxic and biodegradable. The materials used for the artificial palate are acrylic resin, silver electrodes, and copper wire. Acrylic resin has been approved by the FDA and is widely used in dental applications such as retainers and dentures. In addition, silver electrodes and copper wire are also widely used as sensors in medical applications such as EEG and EMG. Besides, the electronic components used in the development of the new EPG must prevent electric shock.
User friendly: During data recording with previous systems, the multiplexer hangs around the subject's neck, and the subject cannot move freely. The design of the new EPG is friendlier to the environment and the user. The use of Bluetooth technology may avoid the patient being bogged down by wires, and the patient can record continuous speech patterns in a very comfortable condition. Another advantage of the new EPG is saving workspace area compared to the previous EPG, which needs a large workspace for a personal computer, sound system, microphone, and the main unit of the EPG. The new-technology EPG is a portable unit, and only a reasonably small space is needed during the recording.
Cost-effectiveness: Hardcastle et al. [18] stated in their study that the average manufacturing time for the Reading palate, from dispatch of the plaster impression to delivery of the Reading palate, was estimated at 8 days per palate, which affects labour cost and time. The new EPG may reduce the manufacturing cost and time by reducing the manufacturing processes and materials used | 2023-01-17T14:52:53.159Z | 2021-02-06T00:00:00.000 | {
"year": 2021,
"sha1": "59f00ea36f2a19e6084414a4160e8eb8054351d9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12938-021-00854-y",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "59f00ea36f2a19e6084414a4160e8eb8054351d9",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
137644114 | pes2o/s2orc | v3-fos-license | Inkjet printing - the physics of manipulating liquid jets and drops
Over the last 30 years inkjet printing technology has been developed for many applications including: product date codes, mailing shots, desktop printing, large-area graphics and, most recently, the direct writing of materials to form electronic, biological, polymeric and metallic devices. The new non-graphical applications require higher print rates, better resolution and higher reliability while printing more complex, non-Newtonian and heavily solids-loaded liquids. This makes the understanding of the physics involved in the precise manipulation of liquid jets and drops ever more important. The proper understanding and control of jet formation and subsequent motion of the jetted materials requires physical studies into material properties at very high shear rates, acoustic modes in print heads, instabilities of jets, drop formation, drop motion, stretching of fluid ligaments, the role of polymers in jet break up, electrical charging of drops and the aerodynamic and electrostatic interaction of jets and drops in flight. Techniques for observation, measurement and analysis are evolving to assist these studies. This paper presents some examples of the application of physics to understanding and implementing inkjet printing, including recent work at the Cambridge Inkjet Research Centre.
Introduction
Most people today are familiar with desktop inkjet printers which have transformed our ability to print text, create colourful business graphics, print web pages and reproduce photographs of very high quality. The core of these devices is the inkjet print head which generates and places millions of tiny coloured ink drops (volume ~10 pL) in a matrix pattern to give the impression of sharply-defined text or a subtly-graded full-colour image. The ability to do this is the culmination of many years development effort. To make an inkjet printer work reliably requires contributions from many engineering disciplines and a good knowledge of the physical principles that lie behind them. The engineers and physicists who develop inkjet systems need to understand fluid mechanics, acoustics, electrostatics, optics, imaging, colour, ink chemistry, fluid-surface interactions, micro-engineering, materials, electronics, software and, of course, plumbing. Some of the problems in inkjet printing particularly relating to physics will be considered in more detail in this paper.
The generation and manipulation of liquid drops have fascinated scientists for many years. Early experiments by Jean-Antoine Nollet in 1749 demonstrated the effects of static electricity on a stream of drops. In the 19th Century Lord Rayleigh conducted many experiments observing the creation of drops from jets, as well as interactions between drops [1,2]. Worthington, in the early 20th Century, studied the phenomena of jets emerging from drops falling into liquid surfaces [3].
Applications
Home and office printing is a major application for inkjet. Almost everyone who has used a PC will at some time have printed their work with an inkjet printer. Predictions of the 'paperless' office have proved unfounded as the combination of personal computer and high-quality colour printing has encouraged more rather than less personal printing. However, the industrial application of inkjet printing began before the office inkjet and continues today. Inkjet systems are used to print information such as sell-by dates on to products and to address magazines and junk mail. Inkjet arrays are used in place of conventional printing technology to produce short run colour graphics on paper, plastics and board. These exploit the great advantage of inkjet printing that images are electronically stored so that the costly, and time consuming, plate-making and change-over processes are not needed to print new images. In more recent industrial applications, instead of printing images onto paper or other substrates, inkjets are being used as part of the product fabrication process to print structural or functional materials. Figure 1 shows how inkjet applications have developed, and continue to develop, with time. The first commercial applications were for marking and coding. The establishment of inkjet as a home/office printing tool represents the second generation. The third generation, taking off now, involves the industrial application of inkjet for both printing and manufacturing. Items such as mobile phones and electric razors are beginning to appear which incorporate displays fabricated by inkjet printing. As part of their production process, light emitting polymers are inkjet-printed into each of the many thousands of individual pixels which form the display. The use of this technology for much larger flat-panel displays has been demonstrated, and a number of companies and research groups are exploring the printing of active and passive electronic devices. Printed electronics can be placed directly on to, and integrated with, the associated substrate (for example, the casing of a mobile phone, an RFID tag on a product or as part of the packaging which interacts with the consumer). 3-D objects can be built up by overlaying successive solid printed 'images'. This technique is already successfully used for rapid prototyping but can also be used to make functional parts; the process can in principle be used to deposit a range of different materials in precise locations to form a complex, composite structure. This is encouraged by the development of more robust print heads and the increasing range of printable fluids and suspensions.
Some of these applications put extreme demands on the precise functioning and reliability of the inkjet systems. A misplaced drop in an address on an envelope is usually acceptable; a blank pixel in a display is not. Hence there is continued interest in understanding how inkjets work and in facing the challenges posed by a wide range of sometimes complex inks and substrates.
Inkjet Technologies
The principles of inkjet printing split into two major and a few minor categories. The major division is between the continuous and drop-on-demand processes. As the name implies, the continuous inkjet process starts by forming drops from a continuously flowing jet of ink which is forced out of a nozzle under pressure. Figure 2 illustrates a single-jet system. As discussed below, disturbances at a particular wavelength along the jet will grow and eventually cause the jet to break up into drops. By imposing a regular disturbance at the correct frequency (for example with a piezoelectric transducer) this break-up can be controlled and a very uniform stream of drops produced. Certain drops from this stream are then selected individually for printing. A common means of selection is to use electrically-conducting ink, and to charge the drop inductively as it is forming by having an electrode nearby held at an appropriate potential. When the drop breaks from the stream the induced charge cannot flow along the liquid column and is retained on the drop; the electrode can then be switched to a different voltage to charge the next drop being formed. The drops then pass through a fixed electric field which deflects the charged drops by an amount which depends on their charge. Uncharged drops are captured and the ink reused, while charged drops are directed onto the substrate, and the level of charge controls the position at which they strike it. A single jet can, in this way, print a line of characters by charging drops to various levels to form a line, say, 7 or 15 drops high. The characters are built up by moving the substrate and printing successive lines. In a more complex system, four or more arrays of continuous jets, each array printing a primary colour and each jet addressing one row of pixels along the substrate, can be used to print full-colour images very rapidly.
When a stream of drops is created in this way it is common to find that, as well as the principal drop, smaller, satellite drops are created from the ligament as it parts. These satellite drops will, depending on the details of the applied disturbance and the ink characteristics, either recombine with the main drop or be deflected to unwanted places and cause poor printing and printer failure. There have been some attempts to create small satellite drops deliberately and use them for high-resolution printing.
The second major inkjet technology is termed 'drop-on-demand'. Drop-on-demand print heads usually have an array of nozzles, each of which ejects ink drops only when required to form the image. Figure 3 shows schematically how each nozzle works. An actuator of some kind creates a rapid change in the cavity volume and imparts some momentum to the ejected drop. This is a dynamic process, in which wave propagation in the ink and the geometry of the cavity behind the nozzle have significant effects. Although other methods have been explored, the two most common means to trigger the ejection are the creation of a vapour bubble within the ink using a heater pad ('bubble jet') or the distortion of a piezoelectric ceramic element. As the droplet of ink is ejected it first emerges as a jet, followed by a ligament or tail which is still connected to the ink in the nozzle ( Figure 4). At a later stage the ligament parts: some ink returns to the nozzle and the rest of the tail joins the drop, or possibly breaks up into smaller satellite drops (figure 5). Before another drop can be ejected the cavity must refill and any acoustic disturbance must have attenuated enough not to affect the formation of the next drop. Drop-on-demand technology is in some ways simpler than continuous as it does not require the external drop selection and recovery system; however the techniques needed to make the print head, particularly with many fine nozzles, are very demanding. One issue confronting any drop-on-demand system is the need for the ink to dry or solidify on the printed surface, but not to dry in or clog the nozzle. This can be addressed at the print head, by appropriate cleaning and capping for example, and at the substrate by using low volatility inks and absorbing substrates, heaters, dryers or UV-curable inks.
Inkjet images are normally formed by printing drops at discrete locations in a matrix on the substrate. The spacings of these drops determine the resolution of the printer. To create the illusion of different grey levels or continuous colour very small drops are printed at an appropriate density on a fine matrix (i.e. at high resolution) which are then integrated by the eye to produce an impression of the required shade or colour. Some drop-on-demand print heads, instead of producing one drop, are able to produce a rapid sequence of small drops which are then all delivered to one pixel position. The relative velocities for the drops in these packets may be arranged so that they merge together before reaching the substrate. Hence the grey level or colour can be changed by varying the number of drops in the packet, and hence the total volume of ink which constitutes the pixel.
One further inkjet process worth mentioning is the electrostatic technique ( Figure 6) in which electric fields are used to create liquid streams and drops. When a sufficiently high potential difference is applied between a liquid in a nozzle and a nearby plate, a conical surface is formed (Taylor cone) and liquid can jet from its tip. Several groups have used this phenomenon as the basis for a printing process.
Observation of jets and drops
The observation of inkjet jets and drops on the very small length scales and short times involved poses particular experimental problems. Figure 7 shows one method. A camera is used to observe drops emerging from a nozzle illuminated from behind by a suitable light source. There will be electronics to control the print head and trigger print events and, depending on the exact configuration, this may be linked to the control of the illumination and the camera. As much of the jetting process is repeatable, multiple-flash stroboscopic techniques can be used to make useful observations. In this technique the illumination is flashed many times per image frame in synchronisation with drop generation. Hence each frame recorded represents the superimposed images of many similar events. This technique can be used to study the evolution of drop formation over time, by changing the phase of the illumination relative to the drop ejection event. For stroboscopic imaging the flash duration is typically ≈ 1 μs. As ink drops and satellites are typically 1 to 100 μm in size with velocities between 5 and 20 m s−1, significant movement can take place during the exposure which will blur the image. Any events which are not repeatable in position and timing will also result in additional blurring. Alternatively, a high-speed framing camera can be used, with either continuous illumination or synchronised flash, to observe single events as they occur. To capture a drop formation event the framing rate of the camera needs to be around 1 MHz. Although cameras with this capability are available the pixel resolution tends to be poor and often only a small number of consecutive frames can be captured. With a flash of sufficiently short duration and high intensity, single events can be captured by opening the camera shutter, timing the flash to coincide with the event, and then closing the shutter. This allows the use of a camera with high optical resolution so that images with both high temporal and high spatial resolution can be obtained. Figure 8 compares the results for both stroboscopic and single-flash illumination, with otherwise similar equipment. A 20 ns duration light source at the Inkjet Research Centre was used to capture this and other images in this paper. By taking successive images and incrementing the delay time, a pseudo-sequence can be built up which shows the evolution of these highly repeatable events. Images obtained in this way contain a great deal of information, and techniques have been developed to analyse the images automatically, to locate the jets and drops and to retrieve and analyse the data. For example, Figure 9 shows measurements of jet tip profiles just as a drop is emerging from a nozzle [4].
Figure 9. Jet tip profiles for ink drops emerging from a 50 μm nozzle at various times (μs) following emergence [4].
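As a rough back-of-the-envelope check of the blur argument above, the sketch below estimates how far a drop travels during one illumination flash for a few representative speeds; the specific speeds and flash durations are simply the typical values quoted in the text, not measurements, and this is an illustrative calculation rather than anything from the original paper.

```python
def motion_blur_um(speed_m_per_s: float, flash_duration_s: float) -> float:
    """Distance (in micrometres) a drop moves during a single illumination flash."""
    return speed_m_per_s * flash_duration_s * 1e6  # metres -> micrometres

if __name__ == "__main__":
    for flash in (1e-6, 20e-9):            # ~1 us strobe flash vs 20 ns single flash
        for v in (5.0, 10.0, 20.0):        # typical drop speeds, m/s
            print(f"v = {v:5.1f} m/s, flash = {flash*1e9:6.1f} ns "
                  f"-> blur = {motion_blur_um(v, flash):6.2f} um")
```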
The formation of jets and drops
An extended cylinder of liquid, such as a jet issuing from a nozzle, is in an unstable state. Small disturbances will grow and cause it eventually to collapse into more energetically favourable spherical drops. In the early 19th century, Savart was the first to study perturbations growing on a jet of water [5]. In 1849 Plateau [6] showed that, on an infinite cylinder of fluid with radius r, disturbances with wavelength λ > 2πr will reduce surface energy and hence tend to grow. In 1879 Lord Rayleigh realised that the growth of the disturbance, driven by surface tension, competed with inertia and was able to show that the most rapid growth happens when λ ≈ 9r. For a liquid of low viscosity, this therefore tends to be the value around which the drop size distribution is centred if a jet is left to break up spontaneously. In a continuous inkjet printer a disturbance is imposed on the jet, usually using a vibrating piezoelectric element, so that the jet is forced to break up at close to this optimum wavelength. The frequency of this disturbance is therefore v/λ, where v is the jet velocity and λ ≈ 9r.
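As a quick numerical illustration of these relations (a sketch, not a calculation from the original paper), the snippet below evaluates the optimum wavelength λ ≈ 9r, the corresponding drive frequency f = v/λ, and the drop radius that follows from volume conservation of one wavelength of jet; the nozzle radius and jet speed are assumed example values.

```python
def rayleigh_breakup(nozzle_radius_m: float, jet_speed_m_s: float):
    """Optimum break-up wavelength, drive frequency and resulting drop size
    for a low-viscosity continuous jet in the Rayleigh regime."""
    r = nozzle_radius_m
    lam = 9.0 * r                         # fastest-growing wavelength, lambda ~ 9 r
    freq = jet_speed_m_s / lam            # drive frequency f = v / lambda
    # One wavelength of jet collapses into one drop: (4/3) pi R^3 = pi r^2 lambda,
    # giving R ~ 1.9 r, i.e. approximately 2 r as noted in the text below.
    drop_radius = (3.0 * r * r * lam / 4.0) ** (1.0 / 3.0)
    return lam, freq, drop_radius

if __name__ == "__main__":
    lam, f, R = rayleigh_breakup(nozzle_radius_m=25e-6, jet_speed_m_s=20.0)  # assumed values
    print(f"wavelength = {lam*1e6:.0f} um, drive frequency = {f/1e3:.0f} kHz, "
          f"drop radius = {R*1e6:.1f} um")
```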
Geometry dictates that the radius of the resulting drops is approximately 2r. Figure 10 shows such a break-up sequence: a smaller satellite drop is created slightly later which, in this case, recombines with the main drop in front of it after a few periods. Satellite creation is very common in inkjets although, by judicious selection of parameters, it can be minimised or eliminated altogether. Various regimes of satellite creation can be observed [7]. Satellites can be forward-merging, as illustrated here, backward-merging, or the satellite may travel at the same velocity as the main drops without merging (so-called 'infinite' satellites). The linear analysis of Rayleigh does not predict the formation of satellites at all. Subsequent theories have predicted satellites but do not explain their detailed behaviour. Pimbley et al. [8], for example, consider a jet emerging from a nozzle with an imposed sinusoidal velocity disturbance by using a one-dimensional non-linear model solved to second order. By changing the disturbance amplitude, they show a qualitative agreement with observation, predicting forward-merging, backward-merging or 'infinite' satellites. Unlike linear analysis, this predicts that, depending on various parameters including disturbance amplitude, break-up will tend to occur at one end or other of the ligament joining the forming drop to the jet. In the image shown in Figure 11 the ligament is about to break at the end closest to the rest of the jet and will then break later at the other end. During the time that the ligament and drop are still connected (the satellite interaction time) they are drawn towards each other by surface tension forces and so, after separation, the drop and the satellite which forms from the ligament will be moving towards each other and will later merge. Similarly, when the ligament breaks first at the drop end, the resulting satellite is backward-merging. The 'infinite satellite' case arises when the ligament breaks at both ends simultaneously. In some circumstances the satellite interaction time can be long enough to frustrate the formation of the satellite and the ligament then merges with the drop before a satellite can form.
For inkjet printers using continuous jets, satellite formation must be controlled to avoid disruption of the printing process. If satellites are generated then it is preferable that they forward-merge before entering the electrostatic deflection field. Once they have entered the field, the higher charge-to-mass ratio of satellites means that they are more strongly deflected and may, for example, hit the field plates; ink will then build up and cause electrical breakdown and printer failure. For a given ink, satellite formation is normally controlled pragmatically by changing the amplitude of the driving disturbance. This amplitude will change the satellite behaviour and also changes the distance over which the jet forms into drops, the break-up length, in a way which is not predicted by theories based on effectively stationary liquid columns or jets with a uniform velocity profile. Figure 12 shows typical behaviour in which, as the disturbance is increased, the break-up length reduces to a minimum and then increases again, followed perhaps by further oscillations.
Work by Luxford [9] and Lopez et al. [10] suggests that this behaviour can be explained by considering how the jet velocity profile (and the way it changes after leaving the nozzle) affects the growth of the disturbance on the jet and hence influences both satellite formation and break-up length. In Figure 10 it can be seen that once the satellite has merged with the main drop, the drop oscillates for a while before eventually the oscillation decays. Rayleigh [1] showed that for a drop of radius r, surface tension σ and density ρ the resonant oscillation frequency ω of the fundamental mode is given by
ω² = 8σ/(ρr³) (1)
It can also be shown that, for a liquid of viscosity η, the oscillation will decay with a time constant [11]
τ = ρr²/(5η) (2)
Figure 13 shows experimental measurements of drop elongation (E = vertical diameter / horizontal diameter), derived from images similar to those in Figure 10. Also plotted for comparison is the expression
E(t) = 1 + F e^(−t/τ) sin(ωt + φ) (3)
This function has been fitted to the data by adjusting the amplitude factor (F), the phase (φ), the frequency (ω) and the time constant (τ). Except at early times this provides a good fit to the data and allows estimates of the liquid's surface tension and viscosity to be made. At early times higher order modes of oscillation are present which complicate the behaviour. In a drop-on-demand printer a 'jet' is formed as the ink emerges from the nozzle. The actuator is driven by an electrical waveform with a shape experimentally determined to suit the structure of the print head and the properties of the ink. This waveform produces either movement in the piezoelectric actuator or heats a resistive pad to create a vapour bubble. This is in turn transformed, via the acoustic response of the ink, the ink jet cavity and nozzle assembly, into a pressure variation in the liquid at the nozzle entrance. This will cause ink to be ejected. Normally the time for which the drive waveform is applied is significantly shorter than the time needed for the jet to emerge and the drop(s) then to form from the jet. The drop initially emerges from the nozzle at relatively high speed. Once the drive impulse has diminished the drop continues to move, still joined to the ink in the nozzle through a stretching ligament (Figure 4, above). As this ligament stretches the drop will decelerate, in part through dissipative energy loss from viscous forces, partly through the energy required to create new liquid surface as the ligament stretches, and also (although this is a minor contribution) through the effects of air drag.
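Returning to the drop-oscillation fit of equations (1)-(3) above, the sketch below illustrates how such a fit could recover surface tension and viscosity from elongation measurements; it uses synthetic placeholder data rather than the measured curves of Figure 13, and the drop radius and density are assumed example values.

```python
import numpy as np
from scipy.optimize import curve_fit

def elongation(t, F, phi, omega, tau):
    """Damped oscillation of drop elongation about the spherical value E = 1."""
    return 1.0 + F * np.exp(-t / tau) * np.sin(omega * t + phi)

# Synthetic 'measurements' standing in for image-derived elongation data.
rng = np.random.default_rng(0)
t = np.linspace(0, 200e-6, 80)                                   # s
true = elongation(t, F=0.4, phi=1.2, omega=2*np.pi*25e3, tau=45e-6)
data = true + 0.01 * rng.standard_normal(t.size)

p0 = (0.3, 1.0, 2*np.pi*24e3, 50e-6)
(F, phi, omega, tau), _ = curve_fit(elongation, t, data, p0=p0)

# Invert the fundamental-mode relations (assumed drop radius and density).
r, rho = 25e-6, 1000.0                                            # m, kg/m^3
sigma = omega**2 * rho * r**3 / 8.0                               # from omega^2 = 8 sigma/(rho r^3)
eta = rho * r**2 / (5.0 * tau)                                    # from tau = rho r^2/(5 eta)
print(f"fitted f = {omega/2/np.pi/1e3:.1f} kHz, tau = {tau*1e6:.1f} us")
print(f"estimated sigma = {sigma*1e3:.1f} mN/m, eta = {eta*1e3:.2f} mPa.s")
```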
At some point the ligament breaks, and then surface tension drives the newly-formed drop towards a spherical shape. If the ligament is long it may well break into one or more satellites (as seen in Figure 5 above) as well. The formation of satellite drops from the ligament is clearly a similar process to the formation of drops from a continuous jet.
The dominant forces which control the behaviour of jets and drops of Newtonian liquids arise from viscosity and surface tension. In comparing and analysing jetting and break-up phenomena, it is useful to describe the conditions in terms of appropriate dimensionless groups. The Reynolds number Re, defined by Re = ρDV/η, describes the ratio between inertial and viscous forces in a fluid with dynamic viscosity η and density ρ, at a velocity V and a characteristic length D, here taken to be the jet or drop diameter. The Weber number We, where We = ρDV²/σ and σ is the surface tension, describes the ratio between kinetic energy and surface energy. It is sometimes more useful to consider the value of the Ohnesorge number Oh to describe the relative importance of viscous and surface forces, where Oh = We^(1/2)/Re. For non-Newtonian fluids which are of increasing interest as applications of inkjet printing become wider, still other dimensionless groups can be used to incorporate the effects of viscoelasticity, such as the Weissenberg number Wi = λV/D where λ is the characteristic relaxation time of the fluid [12]. One common performance measure for a drop-on-demand print-head is the velocity of the drops. This can be used to compare the uniformity across the whole array and also to determine the variation of performance with drop frequency. Ideally the rate at which drops are printed should not affect either their volume or their velocity. In practice there is an upper limit on the rate of drop firing after which the printer will fail, for example, by not being able to replenish the ink in the nozzle chamber quickly enough. Before this ultimate limit is reached there is likely to be variation in drop volumes and velocities because there is insufficient time for the nozzle to reach an equilibrium state before the next drop is fired. The details of this behaviour will depend on the design of the printer. The need to pack nozzles closely together to increase the printing resolution means that there is often cross-talk between adjacent nozzles. In some print-head designs, adjacent nozzles share actuators (for example, in a common wall) and hence the sequence of firing has to be constrained to accommodate this.
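A small helper like the one below (illustrative only; the fluid properties used in the example call are assumed, typical ink-like values) evaluates these groups for given jetting conditions using the definitions above.

```python
import math

def jetting_numbers(density, viscosity, surface_tension, velocity, diameter,
                    relaxation_time=None):
    """Dimensionless groups characterising jetting of a Newtonian (or weakly
    viscoelastic) liquid: Re, We, Oh and, optionally, Wi."""
    Re = density * diameter * velocity / viscosity
    We = density * diameter * velocity**2 / surface_tension
    Oh = math.sqrt(We) / Re                      # equivalently eta / sqrt(rho*sigma*D)
    Wi = relaxation_time * velocity / diameter if relaxation_time else None
    return Re, We, Oh, Wi

if __name__ == "__main__":
    # Assumed example: a water-based ink jetted from a 30 um nozzle at 6 m/s.
    Re, We, Oh, Wi = jetting_numbers(density=1000.0, viscosity=3e-3,
                                     surface_tension=0.05, velocity=6.0,
                                     diameter=30e-6)
    print(f"Re = {Re:.1f}, We = {We:.1f}, Oh = {Oh:.3f}")
```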
Charge and deflection
In a continuous inkjet system the most common method by which drops are selected for printing is by electrostatic charging by induction from a nearby electrode. The conductive jet, the forming drop and the charging electrode form an R-C circuit in which the resistance and the capacitance both change with time. The resistance increases as the drop ligament diameter diminishes, becoming infinite at the point of drop break-off. To ensure that the charge on the drop is sufficient and well-controlled, it is important that the forming drop begins to charge soon after the previous drop has parted. Figure 14 illustrates how the charge on a capacitor increases with time (for a constant RC). It is preferable that the break-off occurs in the 'plateau' region B rather than the slope region A where the charge will be less and the level of charge is more sensitive to small changes in timing. This timing or 'phasing' is usually achieved by detecting the charge on the drops as they move past a separate detection electrode (placed, for example, just below the charging electrode). Clearly the time constant of the R-C circuit must be such that the plateau region is reached well within the period of drop formation. The conductivity of the ink must be high enough to ensure this. The chemistry of the ink can disrupt drop formation and charging. For example, the incorporation of long-chain polymers tends to inhibit drop break-off, leading to very thin and hence high-resistance ligaments. Once charged, the drops move through a constant electric field and are deflected. The deflection d for the geometry shown in Figure 15 can be estimated from
d ∝ qE/(mv²) (4)
(with the proportionality factor set by the length of the deflection plates and the plate-to-substrate distance in Figure 15), where q is the charge on the drop, m is its mass, E is the electric field and v the velocity of the drop (assumed to be constant) in the vertical direction. This will give an under-estimate of the true deflection as aerodynamic retardation will slow the drop in flight, and field-fringing will provide forces beyond the top and bottom edges of the field plates. Conditions are usually more complex than this simple model assumes, as several drops are in flight at once. These drops will interact aerodynamically and will also repel each other electrostatically. While equation (4) suggests that the deflection is proportional to the charge on the drop, if other drops are nearby then they also influence the deflection and it may even be impossible to achieve a particular deflection because of these interactions. The charging sequence needed to print a specific pattern is normally found by initial estimate or calculation, followed by experimental iteration until sufficient drop placement accuracy has been achieved.
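To make the scaling of equation (4) concrete, the sketch below computes a simple ballistic deflection for a charged drop that first crosses the deflection plates and then drifts to the substrate. It is a minimal estimate under the same idealisations as the text (constant velocity, no drag, no fringing, no drop-drop interactions), and the plate length, field strength, drop size, charge and drift distance are all assumed illustrative values rather than parameters of any particular printer.

```python
import math

def drop_deflection(charge_c, drop_diameter_m, density, field_v_per_m,
                    velocity_m_s, plate_length_m, drift_length_m):
    """Ballistic deflection of a charged drop: constant sideways acceleration
    qE/m while between the plates, then straight-line drift to the substrate."""
    mass = density * math.pi / 6.0 * drop_diameter_m**3
    accel = charge_c * field_v_per_m / mass              # sideways acceleration, m/s^2
    t_plate = plate_length_m / velocity_m_s              # time spent between the plates
    v_side = accel * t_plate                             # sideways speed at plate exit
    d_in_field = 0.5 * accel * t_plate**2
    d_drift = v_side * drift_length_m / velocity_m_s
    return d_in_field + d_drift

if __name__ == "__main__":
    d = drop_deflection(charge_c=2e-13, drop_diameter_m=60e-6, density=1000.0,
                        field_v_per_m=3e5, velocity_m_s=20.0,
                        plate_length_m=0.02, drift_length_m=0.01)
    print(f"estimated deflection = {d*1e3:.2f} mm")
```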
Vibrations and acoustics
The vibrational and acoustic behaviour of inkjet print-heads and inks plays an important role in the performance of these devices. This behaviour can be analysed in various ways. Antohe et al. [13] considered the application of a simple trapezoidal drive waveform to a drop-on-demand print-head with long channels in which the side walls flex. The side walls are made from a piezoelectric ceramic (PZT) constructed and poled to achieve the required flexing movement. Figure 16 shows the arrangement schematically. The rising front of the drive waveform increases the volume of the cavity and creates a negative pressure in the channel. This is followed by a positive pressure wave moving from the refill end of the channel. The rear of the drive waveform then causes a positive pressure wave in the cavity. By adjusting the duration of the drive pulse these waves can be timed to interfere constructively at the nozzle end and hence boost the resultant drop velocity. More complex waveforms are used to further adjust and improve the drop ejection. In a design producing multiple drops per pixel, the waveform for the whole drop packet is developed to optimise ejection of the complete stream of droplets. Some researchers [14,15] have used equivalent circuit models for various parts of the system, expressed in terms of the acoustic impedance of each component such as the nozzle, pressure chamber and actuator. These can then be used to evaluate and optimise the way in which each component in the system influences the behaviour of the whole. Finally, much can be learned by using numerical modelling techniques such as finite element analysis and computational fluid dynamics to study vibrations, acoustics and flows within inkjet systems.
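As a rough illustration of this timing argument (a sketch, not the model used in the cited work), one common rule of thumb for such channel-type heads is to make the drive pulse width comparable to the one-way acoustic travel time along the channel, L/c, so that the returning pressure wave reinforces the ejection; the channel length and ink sound speed below are assumed example values.

```python
def drive_pulse_width(channel_length_m: float, sound_speed_m_s: float) -> float:
    """Rule-of-thumb drive pulse width: the one-way acoustic travel time along
    the channel, timing the reflected wave to reinforce pressure at the nozzle."""
    return channel_length_m / sound_speed_m_s

if __name__ == "__main__":
    # Assumed values: ~10 mm channel, ink sound speed close to that of water.
    t = drive_pulse_width(channel_length_m=10e-3, sound_speed_m_s=1500.0)
    print(f"suggested pulse width = {t*1e6:.1f} us")
```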
Drop-surface interactions
All applications of inkjets involve deposition of liquid on to a surface: these surfaces can be treated to enhance certain qualities of the final print. The application requirements within different markets vary considerably. In graphics printing either the substrate or the print head array is moved to exploit interactions between placement of successive drops to help evenly spread or cover areas, at least to the extent that the visual impression in the final product is satisfactory. However, for deposition of functional materials, as in displays and printed electronics, the precision of subsequent drop placement in a location may be enhanced by exploiting the 'coffee stain' patterns by which solid-loaded drops dry, and capillary flow forces can be used to help define track widths and thickness. Jetting, spreading and drying of the deposited material also depend on the type of fluid used: this may be aqueous or solvent-based, may be chemically reactive or UV-curable, and may contain polymers, or ceramic or biological particles, with a range of possible sizes. Dyes and surfactants, anticlogging agents and stabilisers also play their part in these potentially very complex 'inks'.
Since inkjet applications generally require controlled drop placement and the elimination of spurious effects, the conditions for drop-surface interactions must be controlled to avoid splashing. Surface features, such as ridges, on otherwise flat substrates have a crucial role in controlling the lateral flow of liquid after the impact of a drop. Areas that are pre-wetted, for example if they have been previously printed, also show different behaviour from a dry surface. Substrates that are porous by nature or allow diffusion of the liquid into the bulk provide rather different types of drying behaviour than impervious surfaces on which the ink is held. The control and timing of substrate drying after inkjet printing plays an important role in industrial applications.
The rich range of phenomena which occur on the impact of a droplet against a solid surface is the subject of active research [16]. A simple description can be based on the values of the Weber and Ohnesorge numbers, as shown in Figure 17. Conditions in inkjet printing lie predominantly in regime I [17], where the initial spreading of the drop occurs rapidly, resisted primarily by fluid inertia. Viscous effects may play a role later in the process as the speed of spreading falls.
Conclusions
Classical physics has an important role to play in many aspects of inkjet printing. The proper understanding and control of jet formation and subsequent motion of the jetted materials requires physical studies into liquid properties at very high shear rates, acoustic modes in print heads, instabilities of jets, drop formation, drop motion, stretching of fluid ligaments, the role of polymers in jet break up, electrical charging of drops and the aerodynamic and electrostatic interaction of jets and drops in flight. Techniques for observation, measurement and analysis are evolving to assist these studies. | 2019-04-28T13:14:08.822Z | 2008-03-01T00:00:00.000 | {
"year": 2008,
"sha1": "08f5be462f699a32cf7781767361b325c479952f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/105/1/012001",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "29559f770350e2ab44419e385842e36162e3389e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
250665622 | pes2o/s2orc | v3-fos-license | Nickel-copper alloy tapes as textured substrates for YBCO coated conductors
NiCuCo alloy tape was studied as a textured substrate for YBCO coated conductor applications. The addition of a small amount of cobalt was pursued in order to enhance the microstructure of the NiCu alloy. The use of different thermal treatments during the recrystallization process made it possible to obtain area densities of cube orientation as high as 95%. The substrate was thoroughly characterized by means of x-ray diffraction, EBSD and SEM analyses. Further, the mechanical properties and the magnetic behaviour of this substrate have been investigated and compared with those exhibited by Ni, NiW and NiCu tapes. The suitability of this alloy substrate for YBCO coated conductors has been tested through the deposition of a conventional CeO2/YSZ/CeO2 buffer layer architecture using a Pd transient layer. Apart from passivating the Ni-Cu-Co substrate, the use of a Pd transient layer produces a relevant texture sharpening in the out-of-plane orientation, and the full width at half maximum of the ω-scan drops from about 9° for NiCuCo to 2° for the Pd layer. This sharp texture is transferred to the YBCO film, and the results indicate that the NiCuCo alloy is a promising alternative substrate for the realization of YBCO coated conductors.
Introduction
Ni-Cu alloys have been studied by several authors to obtain substrates with intermediate characteristics between Ni and Cu, since they form a continuous solid solution for all the relative concentrations [1][2][3]. In particular, high Cu concentrations have been studied, since Ni-Cu is non-magnetic at 77 K for Cu concentrations above 54 at.%. Ni-Cu alloys can generally be textured quite well, even though the cube texture tends to weaken at high concentrations. Besides, since the oxidation resistance of Cu-rich Ni-Cu alloys is as weak as that of pure Cu [4], some ternary alloys like Ni-Cu-Al and particularly Ni-Cu-Mn(Fe), namely constantan, have been recently studied. For the former, YBCO films were successfully grown and J c as high as 2 MA/cm 2 could be obtained, but only after sputtering the substrate surface to remove Al 2 O 3 particles formed during recrystallization and using a novel TiN/MgO/LMO buffer layer architecture [4]. Constantan has been studied by several authors since it is a non-magnetic, commercially available alloy [3,[5][6][7]. The low J c exhibited by YBCO films grown on such a substrate has been attributed to the surface roughness and/or oxidation phenomena [7]. In the following, a detailed characterization of the ternary alloy NiCu 48.5 Co 3 (NiCuCo) substrate is presented and compared with Ni, NiCu 50 (NiCu) and NiW 5 (NiW) substrates. Further, its suitability as a substrate for coated conductor applications is shown through the deposition of a buffer layer architecture and YBCO films as well.
Experimental details
Ni, Cu and Co pieces, with nominal purities of 99.95+% (Sigma-Aldrich), were melted in an argon-arc furnace with a water-cooled copper hearth. First a Ni 50 at.% Cu (NiCu) alloy was prepared. Part of this alloy was re-melted and an amount of 3 at.% Co was added to form a ternary alloy (NiCuCo). In addition, a Ni 5 at.% W (NiW) alloy was produced. Further details on tape rolling are reported elsewhere [8]. The Pd buffer layer was deposited by means of electron beam evaporation at temperatures ranging from 450 to 550 °C. The CeO 2 /YSZ/CeO 2 buffer layer architecture and the YBCO film were grown by pulsed laser deposition. Further details about buffer layer and YBCO film deposition are reported elsewhere [9]. Structural and morphological properties of the samples were analysed by means of X-ray diffraction, scanning electron microscopy (SEM) and atomic force microscopy (AFM). A pseudo-Voigt fit was used for both ω- and ϕ-scans in order to evaluate the full width at half maximum (FWHM) of the distributions. The microstructure was investigated by means of electron backscattering diffraction (EBSD). The area of interest was sampled with a resulting pixel area of about 10 µm². Hardness measurements were performed using a Leitz-Wetzlar Vickers microhardness tester with a 200 g load. Stress-strain curves were measured at room temperature using an Instron 5500R testing machine with a 25 mm extensometer applied onto 100 mm long substrates prepared according to the ASTM E8M-99 test method for subsize specimens. The magnetic behaviour of the samples was studied by means of a vibrating sample magnetometer (VSM). Electric resistance measurements as a function of temperature R(T) of YBCO films were performed in a liquid nitrogen dewar by means of the d.c. four-probe method. The critical current values were determined from the V-I characteristics with the standard 1 µV/cm electric field criterion.
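As an illustration of the FWHM extraction step described above (a sketch using synthetic data, not the authors' analysis code), a pseudo-Voigt profile can be fitted to a rocking curve or ϕ-scan and the FWHM read off from the fitted width parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, amp, x0, fwhm, eta, bg):
    """Pseudo-Voigt: linear mix of a Gaussian and a Lorentzian of equal FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((x - x0) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((x - x0) / (fwhm / 2.0)) ** 2)
    return bg + amp * (eta * lorentz + (1.0 - eta) * gauss)

# Synthetic omega-scan standing in for a measured rocking curve.
omega = np.linspace(-10, 10, 201)                       # degrees
rng = np.random.default_rng(1)
counts = pseudo_voigt(omega, 1000, 0.3, 6.5, 0.4, 50) + rng.normal(0, 10, omega.size)

p0 = (counts.max(), 0.0, 5.0, 0.5, counts.min())
popt, _ = curve_fit(pseudo_voigt, omega, counts, p0=p0)
print(f"fitted FWHM = {popt[2]:.2f} deg")
```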
Texture development
The development of cube texture in the studied Ni-Cu alloys is dependent on the adopted recrystallization heat treatment. In fact, a conventional annealing at 900 °C for 4 hours leads to a cube area fraction as high as 93% for NiCuCo and 51.1 % for NiCu (figure 1).
Conversely, the two-step annealing (TSA), proposed by Sarma et al. [10] and modified with a final plateau of 900 °C, leads to a remarkably smaller cube area fraction of 86% for NiCuCo, whereas better results are obtained with NiW (see table 1). These results are likely due to the fact that the annealing parameters in the TSA method were optimized for NiW. Finally, a cube area fraction as high as 95% was obtained with NiCuCo when enhancing the temperature to 1000 °C, in agreement with what was obtained by other authors [7,[11][12]. From table 1 it can also be seen that substrates with larger cube areas exhibit at the same time a smaller area fraction of cube twins, in agreement with what was previously reported [8,11]. In- and out-of-plane distributions of the (00l) orientation have been evaluated by means of (111) ϕ-scans and (002) ω-scans along both transverse (TD) and rolling directions (RD). All the distributions turned out to be almost Gaussian and the corresponding full widths at half maximum (FWHMs) are reported in table 1. These data confirm that, for a given material, sharper textures are obtained for larger cube areas, in agreement with what was previously reported for Ni-based alloys [8,11]. Hence, high-temperature annealing is beneficial for the development of a strong, sharp cube texture, but high temperatures also favour the thermal etching phenomenon. Therefore, in the present work the recrystallization treatment has been limited to 1 hour at 900 °C, which has been found to be sufficient to stabilize the microstructure of NiW substrates [12]. In the following, substrates annealed for 1 hour at 900 °C will be referred to as recrystallized substrates. The grain size for Ni-Cu alloy substrates annealed for 4 hours at 900 °C is 26±5 µm for NiCuCo and 31±5 µm for NiCu, i.e. similar to NiV or NiCr for the former and to NiW for the latter [13]. AFM measurements on recrystallized NiCuCo substrates provided grain boundary depth values for low- and high-angle boundaries of about 33 and 185 nm, respectively, i.e. very close to those reported for pure Ni substrates [14]. The intra-grain average and rms roughness are about 15.4 and 21.5 nm, respectively, i.e. larger than what was reported for NiW rolled in the same conditions [8]. The measured roughness is the same if grain boundaries are included in the measured region.
Since the starting raw materials, as well as the whole thermo-mechanical process, remained the same for both Ni-Cu alloys, the enhanced cube texture development can reasonably be attributed to the addition of Co.
Mechanical properties
The addition of Co influenced the mechanical properties of the NiCu alloy, since the measured hardness on recrystallized substrates increased from 89 HV for NiCu to 103 HV for NiCuCo. This latter value indicates that the mechanical strength is comparable to that reported for a Ni 2 at.% W alloy substrate [15]. The tensile properties of NiCuCo have been evaluated by means of stress-strain measurements and compared to those of NiW (figure 2). The yield strength (YS) measured at 0.2% strain is 120 MPa, i.e. slightly higher than that exhibited by the constantan alloy [5]. Though far from the tensile properties exhibited by the NiW alloy, the increased strength of the NiCuCo alloy with respect to pure Ni tapes is nevertheless relevant and indicates that this ternary alloy substrate could be successfully employed in a reel-to-reel deposition system. Figure 3a shows the mass magnetization as a function of temperature, M(T), for the two Ni-Cu alloys. It is evident that the addition of Co has a remarkable effect on the magnetic properties of the NiCu alloy. The Curie temperatures T C were extrapolated at M=0 using a fit curve M ∝ (T C − T)^(1/3) [16].
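As a sketch of that extrapolation procedure (with synthetic placeholder data; the measured magnetization curves and the resulting Curie temperatures are not reproduced here), the fit curve M = a(T_C − T)^(1/3) can be fitted below T_C and T_C read off as the temperature at which M extrapolates to zero.

```python
import numpy as np
from scipy.optimize import curve_fit

def m_vs_t(T, a, Tc):
    """Fit curve M = a * (Tc - T)^(1/3) for T < Tc, zero above Tc."""
    return a * np.cbrt(np.clip(Tc - T, 0.0, None))

# Synthetic magnetization data standing in for a measured M(T) curve.
rng = np.random.default_rng(2)
T = np.linspace(50, 250, 60)                      # K
M = m_vs_t(T, a=2.0, Tc=180.0) + rng.normal(0, 0.05, T.size)

popt, _ = curve_fit(m_vs_t, T, M, p0=(1.0, 200.0))
print(f"extrapolated Curie temperature T_C = {popt[1]:.1f} K")
```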
YBCO film deposition
Test samples of YBCO film deposited on the standard CeO 2 /YSZ/CeO 2 buffer layer architecture were realized, either with or without the interposition of an additional Pd layer. The use of a Pd transient layer on the substrate proved to be beneficial for both surface passivation and cube texture sharpening, and permitted the realization of coated conductors on challenging substrates such as the NiCrW alloy [17] as well as an enhancement of the superconducting properties of YBCO films deposited on NiW [9]. In fact, in the case of Ni and NiW substrates a remarkable sharpening of the out-of-plane distribution of cube orientation was reported [18][19]. This behaviour was found also in Pd films deposited on both NiCu and NiCuCo alloy substrates, since the FWHM of the ω-scan along TD drops from 9° for Ni-Cu to less than 2°. Figure 4 shows the θ-2θ scan for the YBCO/CeO 2 /YSZ/CeO 2 architecture grown on both bare and Pd-buffered NiCuCo substrates. In both cases the films are well adherent, compact and without any cracks. Both YBCO films are c-axis oriented and the only differences are the presence of broadened NiCuCo peaks caused by the diffusion of Pd into the substrate and the formation of Cu oxide, revealed by a small CuO peak, in the sample without Pd. The out-of-plane distribution of (00l)YBCO is strongly influenced by the presence of the Pd transient layer. In fact, the FWHM of the ω-scan of (005)YBCO dropped from 6.3° to 3.6° when a Pd layer is introduced. This latter value is however larger than that exhibited by the as-deposited Pd layer, indicating that an interdiffusion process occurred during the thermal ramp to 850 °C for CeO 2 cap layer and YBCO deposition, thus degrading the surface of the template. Figure 5 shows the R(T) of YBCO films deposited on both architectures. The critical temperature T c values are 87.6 and 86.9 K for the sample with and without Pd, respectively. The measured critical current density J c for the sample with Pd is 0.31 MA/cm 2 . Figure 6 shows the J c (B) for this sample. The rapid J c decrease in the low-field region is due to the grain boundaries, as previously shown [20]. These relatively modest properties of the YBCO film are nevertheless satisfactory, taking into account that the YBCO coated conductor samples were realized following a deposition process optimized for Ni-W tape.
Conclusions
A detailed characterization of the novel NiCu 48.5 Co 3 alloy tape was presented and its suitability as textured substrate for YBCO coated conductors application has been tested. Though the obtained transport properties of YBCO film grown on this substrate are still limited and the substrate needs an improvement of the microstructure, the obtained results are encouraging and demonstrate the feasibility of YBCO coated conductors on Ni-Cu based substrates. | 2022-06-28T04:58:35.516Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "d35805ebbf31f30de8ba4f4a597d3d2bf05322e9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/97/1/012188",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "d35805ebbf31f30de8ba4f4a597d3d2bf05322e9",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
14037297 | pes2o/s2orc | v3-fos-license | Vortex flow during early and late left ventricular filling in normal subjects: quantitative characterization using retrospectively-gated 4D flow cardiovascular magnetic resonance and three-dimensional vortex core analysis
Background LV diastolic vortex formation has been suggested to critically contribute to efficient blood pumping function, while altered vortex formation has been associated with LV pathologies. Therefore, quantitative characterization of vortex flow might provide a novel objective tool for evaluating LV function. The objectives of this study were 1) assess feasibility of vortex flow analysis during both early and late diastolic filling in vivo in normal subjects using 4D Flow cardiovascular magnetic resonance (CMR) with retrospective cardiac gating and 3D vortex core analysis 2) establish normal quantitative parameters characterizing 3D LV vortex flow during both early and late ventricular filling in normal subjects. Methods With full ethical approval, twenty-four healthy volunteers (mean age: 20±10 years) underwent whole-heart 4D Flow CMR. The Lambda2-method was used to extract 3D LV vortex ring cores from the blood flow velocity field during early (E) and late (A) diastolic filling. The 3D location of the center of vortex ring core was characterized using cylindrical cardiac coordinates (Circumferential, Longitudinal (L), Radial (R)). Comparison between E and A filling was done with a paired T-test. The orientation of the vortex ring core was measured and the ring shape was quantified by the circularity index (CI). Finally, the Spearman’s correlation between the shapes of mitral inflow pattern and formed vortex ring cores was tested. Results Distinct E- and A-vortex ring cores were observed with centers of A-vortex rings significantly closer to the mitral valve annulus (E-vortex L=0.19±0.04 versus A-vortex L=0.15±0.05; p=0.0001), closer to the ventricle’s long-axis (E-vortex: R=0.27±0.07, A-vortex: R=0.20±0.09, p=0.048) and more elliptical in shape (E-vortex: CI=0.79±0.09, A-vortex: CI=0.57±0.06; <0.001) compared to E-vortex. The circumferential location and orientation relative to LV long-axis for both E- and A-vortex ring cores were similar. Good to strong correlation was found between vortex shape and mitral inflow shape through both the annulus (r=0.66) and leaflet tips (r=0.83). Conclusions Quantitative characterization and comparison of 3D vortex rings in LV inflow during both early and late diastolic phases is feasible in normal subjects using retrospectively-gated 4D Flow CMR, with distinct differences between early and late diastolic vortex rings. Electronic supplementary material The online version of this article (doi:10.1186/s12968-014-0078-9) contains supplementary material, which is available to authorized users.
Background
Vortex formation within the left ventricular (LV) blood flow has been suggested to critically contribute to efficient blood pumping function [1]. A vortex can be described as a group of fluid particles with a swirling motion around a common axis. Among different types of vortices, vortex rings (also known as toroidal vortex) are abundant in nature because of their compactness and stability [1][2][3].
In the LV, in healthy subjects, both in vivo and in vitro studies have reported vortex ring formation during early diastolic filling, originating at the distal tip of the mitral valve (MV) leaflets [1,[4][5][6][7][8][9][10][11]. In a three dimensional (3D) view, this vortex ring appears as a closed tube with torus-like shape distal to the mitral valve orifice. In a two dimensional (2D) four-chamber view a 3D vortex ring appears as a counter-rotating vortex pair, one distal to the anterior MV leaflet and another distal to the posterior leaflet. Such vortex formation may help in efficient MV closure [5], efficient diastolic filling, minimizing kinetic energy loss [4,6] and preventing thrombus formation [7]. An altered (early filling) vortex formation have been shown to develop in patients with diastolic dysfunction and dilated ischemic cardiomyopathy, suggesting a relation between abnormal vortex formation and LV dysfunction [7,8]. On the other hand, in normal subjects, discrepancies arise in literature and little is known about vortex formation during late filling. Experimental studies using computational fluid dynamics (CFD)-based simulations of LV inflow have reported the formation of a vortex ring distal to the MV during late LV filling, [12][13][14][15][16][17]. In contrast, in vivo studies have reported only the formation of a single anterior vortex during late filling (i.e. not a vortex ring because of the absence of a posterior vortex) [6,9,[18][19][20][21] or even the absence of any vortex [18]. While CFD simulation can provide higher temporal and spatial resolution than in vivo data, application of CFD techniques also involve simplifications of the geometry and dynamics of the left ventricle and mitral valve leaflets which might result in inaccurate modelling of the true blood flow.
4D Flow cardiovascular magnetic resonance (CMR) with retrospective cardiac gating can acquire all the three directional velocity components (in-plane and through-plane) of the blood flow relative to the three spatial dimensions and over the whole cardiac cycle, providing a powerful tool for evaluating blood flow patterns during both early and late left ventricular filling in-vivo [6,22,23]. Previous studies have shown the feasibility of using 4D Flow CMR for vortex flow analysis [6,19,22,[24][25][26][27]. These studies mainly focus on vortex formation during early filling inflow but not late filling inflow. While paramount for establishing normal ranges defining LV vortex flow, standardized quantitative characterization of the 3D shape and location of normal vortex flow are currently lacking. Different from visualization-based vortex identification, vortex core detection techniques [28][29][30] base their vortex identification on the underlying physical properties of a vortex instead of only visual assessment, therefore, provide more objective vortex definition. CFD experiments have shown that LV vortex ring originates from the inlet jet through the mitral valve orifice during early LV filling [1,4,10,13], therefore, the shape of the formed vortex is expected to resemble the shape of the originating valvular opening [4,13]. Hence, we hypothesized that similar behavior could be identified in vivo where a more oval opening of the MV during peak late filling results in a more elliptical vortex ring compared to the one originating from a more circular valve opening during peak early filling. Accordingly, the aims of our study were to apply retrospectively-gated 4D Flow CMR and quantitative 3D vortex core analysis to 1) Assess feasibility of in vivo vortex flow analysis during both early and late diastolic filling in normal subjects 2) Establish normative quantitative parameters characterizing 3D LV vortex flow during both early and late ventricular filling in normal subjects.
Study population
Twenty-four healthy volunteers (9 males, mean age 20±10 years; age range 9-44 years), without history of cardiac disease, abnormalities on ECG or echocardiography were included. The study protocol was approved by the institutional review board and written informed consent was given by all subjects or their legal representatives.
4D Flow CMR protocol
All subjects underwent 4D Flow CMR using a 3T digital broadband multi-transmit CMR system (Ingenia, Philips Medical Systems, Best, The Netherlands), with maximal gradient amplitude 45 mT/m and maximal slew rate 200 T/m/s. For signal reception, a 60 cm Torso coil was used in combination with the FlexCoverage Posterior coil in the tabletop, combining a maximum of 32 elements. A 3D time-resolved volume acquisition of the whole heart was performed with velocity encoding in all three directions with a velocity sensitivity (VENC) of 150 cm/s. The acquired volume data were reconstructed in a time-resolved manner (30 cardiac phases per cardiac cycle) into voxels of 2.3×2.3×3-4.2 mm³ (three subjects were scanned with 3 mm slice thickness). Retrospective cardiac gating was performed with Vector ECG triggering. Scan parameters: echo time 3.0 ms, repetition time 9.9 ms, flip angle 10°, field-of-view 400 mm, number of signal averages 1, VENC 150 cm/s. Acceleration was achieved by Echo Planar Imaging with EPI factor 5. Free breathing was allowed and no respiratory motion compensation was used. Commercially available concomitant gradient correction was used for phase offset correction.
3D vortex core identification using the Lambda2-method
In this study, vortex cores in the LV cavity were detected over the diastolic phases using the Lambda2 (λ2)-method [28]. The Lambda2-method is an objective method that identifies 3D vortex cores based on their physical fluid dynamics properties, and is considered the most accepted vortex detection technique [31]. In short, the Lambda2-method uses the fluid's velocity gradient properties to obtain a scalar value, λ2. In a loose sense, this obtained scalar reflects the pressure due to velocity gradients after excluding the effect of the irrotational part of the flow. The vortex cores are then identified as the regions with strongly negative λ2-values. These identified vortex cores can be visualized by use of isosurfaces at the isovalue Tλ2, which is an application-dependent threshold. More technical background on applying the Lambda2-method to 4D Flow CMR, including the choice of the isovalue threshold, has been described earlier [32].
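The following is a minimal sketch (not the authors' in-house Matlab software) of how a λ2 field could be computed from a 3D velocity field sampled on a regular grid: build the velocity-gradient tensor, split it into its symmetric (strain-rate, S) and antisymmetric (rotation, Ω) parts, and take the second eigenvalue of S² + Ω² at every voxel. The voxel spacing and the velocity-component arrays are assumed inputs.

```python
import numpy as np

def lambda2_field(vx, vy, vz, spacing=(1.0, 1.0, 1.0)):
    """Lambda2 scalar field from 3D velocity components on a regular grid.

    vx, vy, vz : arrays of shape (nx, ny, nz), one per velocity component.
    Returns lam2 of the same shape; vortex cores are regions where lam2 < 0.
    """
    grads = [np.gradient(v, *spacing) for v in (vx, vy, vz)]       # dv_i/dx_j
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)   # (..., 3, 3)
    S = 0.5 * (J + np.swapaxes(J, -1, -2))     # strain-rate tensor (symmetric part)
    Om = 0.5 * (J - np.swapaxes(J, -1, -2))    # rotation tensor (antisymmetric part)
    M = S @ S + Om @ Om                        # symmetric 3x3 matrix at every voxel
    eig = np.linalg.eigvalsh(M)                # eigenvalues in ascending order
    return eig[..., 1]                         # second (middle) eigenvalue = lambda2

if __name__ == "__main__":
    # Tiny synthetic example: rigid-body-like swirl around the z-axis.
    x, y, z = np.meshgrid(np.linspace(-1, 1, 16), np.linspace(-1, 1, 16),
                          np.linspace(-1, 1, 16), indexing="ij")
    lam2 = lambda2_field(-y, x, np.zeros_like(x), spacing=(2/15, 2/15, 2/15))
    print("fraction of voxels with lambda2 < 0:", float((lam2 < 0).mean()))
```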
Vortex core analysis workflow
4D Flow CMR data were analyzed with in-house developed software based on Matlab (Version R2012a, Mathworks Inc., Novi, MI). First, the LV endocardial boundaries were manually delineated using MASS research software (Version 2013EXP, Leiden University Medical Center, Leiden, The Netherlands). Subsequently, the Lambda2-method was applied to the 4D Flow CMR data to identify the vortex structures within the LV blood pool. Early (E) filling and atrial (A) filling phases were defined from the flow rate-time graph after transmitral velocity mapping in combination with retrospective valve tracking [33]. For every subject, the vortex ring core (if detected) of peak early filling and peak late filling was used for further quantitative analysis. As described in previous work, the Lambda2 isovalue threshold (Tλ2) was defined as Tλ2 = Kμ (with K a real number and μ the average λ2 of voxels with λ2 < 0), with K chosen as the value providing the most circular vortex ring core having the least attached trailing structures [32]. The parameter K was chosen separately for every filling phase. The shape and location of the peak early (E)- and late (A)-vortex ring cores were further quantitatively analyzed using the parameters explained below. In the remainder, the vortex cores detected at peak early filling and peak late filling will be denoted as E-vortex and A-vortex, respectively.
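A sketch of that thresholding step (illustrative only; K and the λ2 array are assumed inputs, and the connected-component cleanup used to isolate a single ring is not shown) could look like the following.

```python
import numpy as np

def vortex_core_mask(lam2: np.ndarray, K: float) -> np.ndarray:
    """Binary vortex-core mask: threshold lambda2 at T = K * mean(lambda2 over lambda2 < 0)."""
    negative = lam2[lam2 < 0]
    if negative.size == 0:
        return np.zeros_like(lam2, dtype=bool)
    threshold = K * negative.mean()        # T_lambda2 = K * mu, with mu < 0
    return lam2 < threshold                # keep only the strongly negative core voxels
```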
3D quantitative characterization of diastolic vortex ring core
The 3D location and orientation of the vortex ring core were quantified using a standardized 3D local cardiac (cylindrical) coordinate system, abbreviated by CLR. Every vortex ring core center was localized using its circumferential (C), longitudinal (L) and radial (R) coordinates and orientation relative to the LV as defined and illustrated in Figure 1. The shapes of the vortex ring cores were quantified using a dimensionless circularity index (CI) defined as the ratio between the vortex's short (D1) and long (D2) diameters, i.e., CI = D1/D2 (see Figure 2a).
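The sketch below illustrates one way such parameters could be computed from a detected ring (illustrative only; the mitral-valve center, apex, septal landmark direction, basal radius and ring points are assumed inputs, the normalization follows the definitions above and in the Figure 1 caption, and the circularity here is estimated from the principal in-plane extents of the ring points).

```python
import numpy as np

def clr_and_circularity(ring_pts, mv_center, apex, septal_dir, basal_radius):
    """Normalized (C, L, R) coordinates of a vortex-ring center and its
    circularity index CI (short/long diameter) from ring points of shape (N, 3)."""
    axis = apex - mv_center
    axis_len = np.linalg.norm(axis)
    axis = axis / axis_len                            # unit LV long-axis vector

    center = ring_pts.mean(axis=0)
    rel = center - mv_center
    L = float(rel @ axis) / axis_len                  # along long axis, normalized
    radial_vec = rel - (rel @ axis) * axis
    R = float(np.linalg.norm(radial_vec)) / basal_radius   # off-axis distance, normalized

    # Circumferential angle measured from the septal landmark direction.
    e1 = septal_dir - (septal_dir @ axis) * axis
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.cross(axis, e1)
    C = float(np.degrees(np.arctan2(radial_vec @ e2, radial_vec @ e1)))

    # Circularity from the in-plane extent of the ring points (PCA diameters).
    centered = ring_pts - center
    in_plane = centered - np.outer(centered @ axis, axis)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(in_plane.T)))[::-1]
    CI = float(np.sqrt(eigvals[1] / eigvals[0]))      # short/long diameter ratio
    return C, L, R, CI
```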
Intra and interobserver reproducibility
One observer repeated the same measurements after one week to allow assessment of intraobserver reproducibility. Two independent observers repeatedly performed measurements on all subjects to assess interobserver reproducibility of derived parameters. The observers manually defined the Lambda2 threshold (Tλ2) as explained above. Then, the C, L, R coordinates and orientation of the vortex ring cores for both early and late filling were quantified.
Vortex-Mitral flow association
To investigate the relationship between the geometry of the vortex ring core and the inflow jet area through the MV, the area of MV opening was assessed at two levels using retrospective valve tracking [33]. In short, at the same phase as the selected vortex, a plane was positioned at the annulus level and a second plane at one centimeter distal to the annulus (Figure 2b) as an approximation of the tip level of the opened MV leaflets. These planes resulted in two cross-sectional images with through-plane velocity encoding in which the flow through the opened MV was outlined (Figure 2c, d). The outlined regions were then used to calculate the circularity index of the inflow area at the mitral annulus level (CI MV) and at the valve tip level (CI MV_tip), as the ratio between the short and long diameters of the outlined region. The correlation between the vortex circularity index CI vortex of the diastolic vortex ring cores (E- and A-vortex ring cores pooled together) and each of CI MV and CI MV_tip was then evaluated.
Statistical analysis
Statistical analysis was performed using SPSS Statistics software (version 20.0, IBM SPSS, Chicago, Illinois). Quantitative parameters were presented as mean ± standard deviation or median and inter-quartile range (IQR) where appropriate. Differences between E-vortex ring and A-vortex ring parameters were compared using the paired Student t-test. Spearman's correlation test was used to assess the relationship between vortex ring shape and mitral inflow area shape. Inter- and intraobserver reproducibility were determined by the intraclass correlation coefficient for absolute agreement, the absolute and relative unsigned difference between measurements (with paired t-test), and the coefficient of variation, defined as the standard deviation of the difference divided by the mean of both measurements. A p-value <0.05 was considered statistically significant.
Figure 1. Definition of the local cardiac coordinate system (C, L, R) relative to the LV: The LV long-axis is defined as the line from the mid of the mitral valvular opening to the LV apex. The long-axis was calculated separately per filling phase (i.e., one for early filling and another for late filling). The center of the vortex ring was projected on this long-axis. The distance of the projected point to the MV and to the vortex center defined the vortex's longitudinal (L) and radial (R) coordinates, respectively, as illustrated in (a). Both L and R distances were normalized to the long-axis length and to the basal endocardial radius (measured on a reformatted short-axis slice), respectively, to provide dimensionless parameters. The circumferential (C) coordinate is defined as the angle between the septal landmark (the anterior attachment of the RV free wall with the LV) and the vortex center, as illustrated in the cross-sectional view (b). The vortex ring orientation (α) is measured as the angle between the LV long-axis and a fitting plane of the vortex ring, where an orientation of 90° means the vortex ring is perpendicular to the LV long-axis, as shown in (c).
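A minimal sketch of the agreement metrics described above, with `obs1` and `obs2` as hypothetical arrays of repeated measurements on the same subjects, is given below; the intraclass correlation coefficient is omitted because it requires a dedicated variance-components model.

```python
import numpy as np
from scipy import stats

def agreement_metrics(obs1, obs2):
    """Relative unsigned difference (%), paired t-test p-value, and the
    coefficient of variation (%) defined as the standard deviation of the
    difference divided by the mean of both measurements."""
    obs1, obs2 = np.asarray(obs1, float), np.asarray(obs2, float)
    diff = obs1 - obs2
    pair_mean = (obs1 + obs2) / 2.0
    rel_unsigned = 100.0 * np.mean(np.abs(diff) / pair_mean)
    _, p_value = stats.ttest_rel(obs1, obs2)          # paired Student t-test
    cov = 100.0 * np.std(diff, ddof=1) / np.mean(pair_mean)
    return rel_unsigned, p_value, cov
```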
Subject characteristics
Clinical characteristics of the study population are shown in Table 1. Three subjects with an absent A-vortex ring core are described separately.
Characterization of 3D LV vortex ring cores
In all twenty-four subjects, during E-filling, a compact quasi-torus-shaped vortex ring core (Figure 3: (a-c)) started to form distal to the mitral valve leaflets shortly after the onset of E-filling and continued its development during the period of E-filling acceleration, reaching its full development as E-filling approached its peak (Figure 4: f1-f5). During E-filling deceleration and diastasis, the vortex core deformed into a complex shape which tended to align with the LV long-axis (Figure 4: f6-f10) while progressing towards the apex. Only a remaining residual of the vortex ring core, located at the mid-ventricular level, could be observed at the onset of atrial contraction, and this remnant of the E-vortex ring core could no longer be observed at end diastole (Figure 4: f17, f18). In the majority of subjects (twenty-one subjects, 88%), during late diastolic filling, a new, isolated, compact and more asymmetrically shaped vortex ring core was formed at the ventricular basal level, with a more dilated anterior side (i.e., the part close to the aortic outflow tract) and a more compressed posterior side (Figure 3: (d-f)), reaching its complete formation while approaching peak late filling (Figure 4: f15-f17). The A-vortex ring core was persistently present until the end of diastole without major dissipation and was still located at the basal level (Figure 4: f18, f19). For the three remaining subjects (subjects A, B and C in Table 1), no vortex ring core was present during late diastolic filling. Samples of the Lambda2-detected vortex ring cores formed at peak early and peak late diastolic filling are shown in Figure 3 and are depicted together with streamline visualization of the velocity vector field in Figure 5. A time-sequence of the 3D vortex detection during diastole is shown in Figure 4 (Additional file 1 and Additional file 2).
3D Quantification of LV vortex ring core parameters
The quantified CLR parameters are presented in Table 2. The centers of the 3D vortex ring cores during early and late filling were located at the LV basal level, but the rings during A-filling were significantly closer to the mitral valve compared to the rings during E-filling (E-vortex L=0.19±0.04 versus A-vortex L=0.15±0.05; p=0.0001). The centers of the vortex rings during both E- and A-filling were located in the anterior and anterolateral segments (E-vortex: C=89±23°, A-vortex: C=100±23°; p=NS). The vortex center was located closer to the ventricle's long-axis during A-filling compared to E-filling (E-vortex: R=0.27±0.07, A-vortex: R=0.20±0.09, p=0.048). Both E- and A-vortex ring cores were similarly orientated relative to the LV long-axis (E-vortex 71.0±9° versus A-vortex 74±4°; p=NS). E-vortex rings were significantly more circular in shape compared to A-vortex rings (E-vortex: CI=0.79±0.09, A-vortex: CI=0.57±0.06; p<0.001).
Inter and intra-observer variation
Results of inter- and intra-observer analysis for assessment of relative vortex core position and orientation are presented in Tables 3 and 4. Inter-observer analysis revealed an intraclass correlation coefficient of 0.96 or higher (all p<0.001), with mean relative unsigned differences ranging between 1.5% and 7%, which were not statistically significantly different. The coefficient of variation ranged between 1% and 3%. Intra-observer analysis showed an intraclass correlation of 0.97 or higher (all p<0.001), with a mean relative unsigned difference ranging between 0.5% and 3%, which was not statistically significantly different. The coefficient of variation ranged between 1% and 8%.
Vortex-Mitral flow association
The Spearman correlation coefficient between the shapes of the vortex ring (CI vortex ) and the MV inflow jet at the level of the annulus (CI MV ) was R=0.66 (p<0.001). The correlation coefficient between CI vortex with the shape of inflow jet at the tip of the valve leaflets (CI MV_tip ) was higher with R=0.83 (p<0.001) ( Figure 6).
Discussion
To our knowledge this is the first work to provide standardized quantitative characterization and comparison of the 3D LV vortex rings during both early and late diastolic filling in normal subjects. Using retrospectively gated 4D Flow CMR and 3D vortex core analysis with the Lambda2-method, we observed the formation of a separate compact 3D vortex ring in vivo during late diastolic filling, with different characteristics from the vortex ring formed during early filling. Our experiments quantitatively confirmed the close correlation between the shape of the formed vortex ring and the shape of the inflow area through both the mitral annulus and the tip of the opened MV leaflets.
Figure 3 caption (partial): The identified peak late diastolic vortex ring core is shown in (d), (e) and (f). The core of the peak early filling vortex ring appears with a quasi-torus-like shape, more circular and symmetrical compared to the core of the peak late filling vortex ring, which appears more elliptical in shape and asymmetrical, with a dilated anterior side and a compressed posterior side. The Lambda2 isovalue threshold T_λ2 = 3μ was used to define the isosurfaces of vortex ring cores (with μ as the average of λ2 over voxels with λ2 < 0).
LV vortex ring formation and dynamics with emphasis on late diastolic filling
Several studies have demonstrated the presence of rotating flow distal to the MV, corresponding to a compact vortex ring, during early diastolic filling. This vortex formation has been related to the normal shape and function of the LV and its alteration has been suggested to be associated with pathologies of the left ventricle [1,4-6,9,11,18,24,25]. In agreement with previous studies, in all subjects a compact 3D vortex ring core was identified distal to the mitral valve during the early filling phase of diastole [6,9,10,18]. In previous studies, vortex analysis within LV flow has been primarily devoted to the early phase of diastolic filling [1,4-10,15,21,34,35]. Discrepancies exist in the literature when defining or evaluating vortex formation during late diastolic filling, where CFD simulations report vortex ring formation [12-17] and in vivo studies report no vortex ring formation but only a single vortex (rotating flow) distal to the anterior MV leaflet [6,9,19-21,36,37] or no vortex at all [18]. Some of the discrepancies among in vivo studies can be a result of limitations of the employed flow acquisition and/or analysis approach. 4D Flow CMR has intrinsic advantages over other in vivo flow imaging modalities such as Doppler echocardiography or 2D phase contrast CMR, by allowing acquisition of all three directional velocity components over the three spatial dimensions. Moreover, 4D Flow CMR provides the feasibility of retrospective flow acquisition, therefore allowing acquisition of flow over both early and late diastolic filling phases instead of only the early filling phase as with prospective flow acquisition. Previous studies have successfully employed 4D Flow CMR to visualize and study LV vortex flow [6,19,22,24-27,36,37]. In these studies no explicit analysis of vortex ring formation during late diastolic filling has been performed and relatively low temporal resolutions (50-70 ms) were generally used [24,27,36], while a higher temporal resolution of 30 ms was used in this study to help capture flow over the short duration of the late filling (five late filling phases were reconstructed on average).
Figure 4. Time-sequence of the Lambda2-detected 3D LV vortex structures (visualized as isosurfaces in red color) over all acquired diastolic phases of a sample normal subject, with E-filling onset (x), peak (y) and end (z), and A-filling onset (u), peak (v) and end (w). Diastasis is the duration between z and u. Every dot in the cardiac curve corresponds to a time point of the cardiac cycle in which a 4D Flow volume was acquired. Shown are the start of the diastolic phase (f1), the start of the presence of a compact ring-like shaped vortex ring during early (f3) and late (f7) diastolic filling, the most developed vortex ring formed during early filling (f5) and A-filling (f18), the start of vortex stretching or elongation in the direction parallel to the LV long-axis (f10), and the end of late filling while a compact vortex ring is still identifiable (f19). The Lambda2 isovalue threshold T_λ2 = 3μ was used to define the isosurfaces of vortex ring cores (with μ as the average of λ2 over voxels with λ2 < 0). To avoid a cluttered view, only large-scale vortex cores of 1 cm³ or larger are visualized.
Figure 5 caption (partial): The same frames were superimposed with 3D vortex ring cores identified using the Lambda2-method, showing good overlap between the 3D Lambda2-detected vortex cores and the cores of the corresponding 2D streamlines' counter-rotating vortices during both peak early (c) and peak late filling (d). The Lambda2 isovalue threshold T_λ2 = 3μ was used to define the isosurfaces of vortex ring cores (with μ as the average of λ2 over voxels with λ2 < 0).
In the current study, in agreement with CFD findings [12][13][14][15][16][17], in the majority of subjects (twenty-one subjects, 88%), a compact vortex ring core formed distal to the mitral valve during late diastolic filling. This ring formed at the basal level at the time when the remnant of the dissipating E-vortex ring core was located more apically, indicating that the A-vortex ring is a newly formed vortex as a result of the atrial contraction inflow and not just a continuation of the E-vortex. The A-vortex ring core was asymmetrically shaped in the anterior-posterior direction with a dilated anterior side, making most of the A-vortex flow being located close to the left ventricular outflow tract. This supports the postulation of Kilner et al. [6], about an expected role of the rotating flow beneath the mitral valve during the A-filling in aiding the redirection of the late diastolic inflow from the left atrium towards the left ventricular outflow tract, helping in an optimized ejection of blood. Therefore, with the revealed consistent formation of compact late diastolic vortex ring in vivo, extending the analysis of diastolic vortex formation to the late diastolic filling (instead of currently being limited to early filling) might help providing more understanding of the hemodynamics of the coupling between diastole and systole and associated pathologies. This emphasizes the importance of using retrospective cardiac gating when aiming for LV diastolic vortex flow analysis, where late filling phase can be acquired instead of the prospective-gating where late filling phase is generally missing.
The absence of vortex ring formation during late filling in three subjects (Table 1) might be attributed to their age-related high heart rate and the subsequent limited diastasis duration, which might not allow development of the ventricular pressure gradient required for vortex formation [1].
Quantitative characterization of 3D diastolic vortex rings
Previous studies have successfully employed flow visualization techniques to identify LV vortex flow [6,[19][20][21][22][23]26], quantify vortex volume [24] or evaluate early filling vortex formation [35]. However, to our knowledge, there have been no in vivo studies providing quantitative 3D characterization of the location and the shape of vortex flow during both early and late diastolic filling phases.
Defining the true boundary of a vortex is a challenging task, especially in 3D space, as it is highly dependent on the identifier. Most in vivo studies identify a vortex based on visual assessment of the visualized flow [9,11,20,21], which is generally an observer-dependent definition. Instead, vortex cores are generally regarded as a robust and well-localized approximation of a vortex [2,3,28,38] and can provide a more objective physical definition of a vortex. Different methods can be used for vortex core identification [13,28-30]; however, the Lambda2-method is considered the most accepted 3D vortex identification technique [1]. Vortex core analysis has been used before to detect vortices inside the heart, but mainly for visualization purposes [10,13,27,29,32,39]. In this work, we employ the 3D vortex cores identified using the Lambda2-method to derive quantitative parameters to characterize normal vortex ring formation during both peak early and peak late filling. In our experiments, following [32], a Lambda2 isovalue threshold (T_λ2) in the range [1,6]·μ (i.e., K = 1-6, with μ as the average of λ2 over voxels with λ2 < 0) allowed identification of a separate compact vortex ring core (when detected) in all subjects. The strong inter- and intra-observer agreements (Tables 3 and 4) indicate the robustness of the method with respect to Lambda2 threshold selection. The vortex ring core is significantly closer to the mitral valve annulus (longitudinal position) at the late filling peak compared to early filling, which can be attributed to the lower velocity and shorter length of the inflow jet during late filling compared to early filling. The relatively closer position of the vortex ring core to the LV long-axis (radial position) at late filling can be explained using the confirmed correlation between the shapes of the vortex ring core and the mitral valve opening, where a restricted opening of the mitral valve during late filling results in a vortex core center closer to the long-axis compared to full valvular opening at early filling. Since the vortex ring originates from the inlet jet at the distal tip of the mitral valve (MV) leaflets [1,10,13], the vortex ring forms parallel to the inclined MV orifice [40]. Therefore, in normal subjects, the similarly oriented MV orifice of early and late filling (relative to the ventricle's long-axis) results in similarly oriented vortex rings (i.e., similar vortex orientation planes). Consequently, the circumferential location (C), which is calculated using the vortex orientation plane, is similar as well between vortex rings of both early and late filling. The strong correlation of the vortex ring shape with the shape of the inflow area at the tip of the opened MV leaflets confirms the relationship between the mitral valvular opening and the shape of the formed vortex ring, as reported earlier by CFD studies [4,13]. To the best of our knowledge, this is the first in vivo study to quantitatively confirm this correlation.
Table 3 caption: Inter-observer analysis for C, relative L, relative R and orientation of vortex ring cores.
The relatively small variation between normal subjects in derived parameters ( Table 2) indicates good consistency of results. Therefore, the method defines normal quantitative ranges for diastolic vortex rings and might in future help evaluating whether changes in valve morphology or ventricular dilatation alters the location and shape of the formed vortices.
Clinical implications
The suggested LV-normalized vortex parameters might help to provide more insights about the normal vortex formation and provide normative parameters to compare the 3D vortex flow between controls and patients. This could help to understand the hemodynamics of patients with impaired LV relaxation and restrictive filling, where the E/A ratio is abnormal. In addition, the close correlation found between vortex formation and the flow at the tip of the opened mitral valve leaflets suggests that patients with impaired leaflet function, as can be seen in patients with left ventricular dysfunction [41] after mitral valve repair and with mitral valve stenosis, could develop aberrant vortex rings, which possibly reduces efficiency of intra-cardiac flow. Therefore, further study is warranted to investigate the effect of mitral valve surgery on vortex formation during LV filling.
Study limitations
Limitations of this study include a relatively small number of healthy subjects and lack of comparison with patients. However, an objective detection of possible anomalies in the vortex flow of patients should be preceded by finding reliable quantitative measures defining the reference normal vortex flow. The current study was performed in a relatively young population (age range 9 -44 years). Global diastolic function parameters, as the E/A ratio remain relatively stable during the second, third and fourth decade of life [42], which explains why we did not observe age related differences. As diastolic function is known to decrease later in life [43] future studies are required to compute normal values in an elderly population. Limitations of 4D Flow CMR include the relatively long scan times (typically between 8-10 minutes with heart rate 60-80bpm), and the need of averaging the data over several cardiac cycles. This time-averaging would potentially result in smoothing the low scale flow structures and does not, generally, account for flow variations due to heart rate variations. In this study, a relatively low spatial CMR resolution of 2.3×2.3×3-4 mm 3 was used. However, it was our aim to evaluate large scale vortex ring cores which are expected to have volumes significantly larger than the MR voxel size. In three volunteers, a higher resolution of 2.3×2.3×3 mm 3 was used, which did not result in significant different findings from the other subjects. Further methodological and quantitative analysis on the effect of acquisition resolution may be helpful but was beyond the scope of this work. Identified Lambda2-based vortex cores could not be used for volumetric measurements (e.g. vortex volume or size) as applying different Lambda2 isovalue thresholds can result in different volumes for the same vortex core. Therefore, the vortex parameters derived in this study were chosen as not to be dependent on vortex volume. 4D Flow data was acquired using free breathing scans and no motion compensation was applied. Nevertheless, no motion artifacts were visually observed in the velocity data, and since all subjects underwent the same scan protocol, potential inter-and intra-subject effects on the measurements might be assumed to be similar among all subjects.
Conclusion
In summary, this is the first in vivo study using 4D Flow CMR to confirm previous CFD findings of vortex ring formation during late filling and to provide standardized parameters that allow quantitative characterization of vortex flow during both early and late left ventricular filling. The derived quantitative parameters provided consistent measurements within the studied population, and a strong correlation was found between the shape of the formed vortices and the shape of the inflow area at the level of both the mitral annulus and the tip of the opened MV leaflets. This study provides reference parameters defining normal vortex flow, which may allow objective quantitative evaluation of vortex flow in patients with cardiac disease.
Additional files
Additional file 1: Three-dimensional left ventricular diastolic vortex formation and development over the entire diastole in a sample healthy subject. Movie showing 3D left ventricular vortex formation and development, as detected by the Lambda2 method and visualized as isosurfaces in red color, over all 19 acquired diastolic phases in a sample normal subject, with E-filling onset (x), peak (y) and end (z), and A-filling onset (u), peak (v) and end (w). Diastasis is the duration between z and u. Every dot in the cardiac curve corresponds to a time point of the cardiac cycle in which a 4D Flow volume was acquired. To avoid a cluttered view, only large-scale vortex cores of 1 cm³ or larger are visualized. The Lambda2 isovalue threshold T_λ2 = 3μ was used to define the isosurfaces of vortex ring cores (with μ as the average of λ2 over voxels with λ2 < 0).
Additional file 2: 3D Vortex cores superimposed on streamline visualization over the entire diastole in a sample healthy subject.
Movie showing vortex cores of Additional file 1 superimposed on streamline visualization of a segmented LV cross-sectional view showing good overlap between the 3D Lambda2-detected vortex cores and the cores of corresponding 2D streamlines' counter-rotating vortices during both early and late diastolic filling. | 2015-09-23T00:31:53.000Z | 2014-09-27T00:00:00.000 | {
"year": 2014,
"sha1": "ef215e6af35bbe8904d7c044c84c682b82e9acae",
"oa_license": "CCBY",
"oa_url": "https://jcmr-online.biomedcentral.com/track/pdf/10.1186/s12968-014-0078-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f12be667d0861169f7be5ad973a5d79dd996b6da",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231583201 | pes2o/s2orc | v3-fos-license | Interstellar Interferometry: Precise Curvature Measurement from Pulsar Secondary Spectra
The parabolic structure of the secondary or conjugate spectra of pulsars is often the result of isolated one-dimensional (or at least highly anisotropic) lenses in the ISM. The curvature of these features contains information about the velocities of the Earth, ISM, and pulsar along the primary axis of the lens. As a result, measuring variations in the curvature over the course of a year, or the orbital period for pulsars in binaries, can constrain properties of the screen and pulsar. In particular the pulsar distance and orbital inclination for binary systems can be found for multiple screens or systems with prior information on $\sin(i)$. By mapping the conjugate spectra into a space where the main arc and inverted arclets are straight lines, we are able to make use of the full information content from the inverted arclet curvatures, amplitudes, and phases using eigenvectors to uniquely and optimally retrieve phase information. This allows for a higher precision measurement than the standard Hough transform for systems where these features are available. Our technique also directly yields the best fit 1D impulse response function for the interstellar lens given in terms of the Doppler shift, time delay, and magnification of images on the sky as seen from a single observatory. This can be extended for use in holographic imaging of the lens by combining multiple telescopes. We present examples of this new method for both simulated data and actual observations of PSR B0834+06.
INTRODUCTION
Pulsars allowed for the first detection of gravitational radiation, and provide a promising tool for precise gravitational source astrometry (Boyle & Pen 2012). Two current major limitations are the insufficiently accurate distances to pulsars, which would allow for coherent GW measurement; and interstellar propagation effects which degrade the precision of pulsar timing. Coherent combination of the pulsar intrinsic term doubles the detection sensitivity, and improves the astrometric localization by orders of magnitude, likely localizing GW sources to arc minute precision. An improved understanding of the electromagnetic propagation, the positions and magnifications of scattered images, opens up two avenues: interstellar diffraction limited astrometry and descattering of propagation effects. From the observed intensity field, phase retrieval aims to recover the linear impulse-response function that, convolved with an emitted pulse, yields the observed electric field. This technique is also called holography (Walker & Stinebring 2005). The dynamic wavefield, or its Fourier conjugate the conjugate wavefield, encodes information about the magnification and time delay of images currently contributing to the interference pattern from the plasma lens, and enables its utilization as a giant interstellar interferometer. For a more complete discussion of these objects see Sec. 2. Pen et al. (2014) achieved 50 picoarcsecond relative astrometry using the wavefield decomposition. To date, the phase retrieval approaches have been heuristic, working for special cases using lucky initial guesses and solving non-linear equations with unknown convergence properties, potentially leading to local minima instead of global solutions. This paper presents a systematic approach: by mapping onto an eigenvalue problem, the existence, uniqueness, optimality and degeneracy of the solutions become explicit.
When pulsar emission is scattered by structures in the interstellar medium (ISM), the resulting scintillation pattern can be used to provide precise measurements of the ISM and the pulsar. For our purposes, we focus on two measurements of interest: pulsar velocities in binary systems, and the structure of the lensing region.
Since the timescale of scintillation depends on the velocity of our line of sight through the ISM, variations in this timescale can be used to measure velocity changes due to the orbital motion of pulsars in binaries (Lyne & Smith 1982). For systems with large orbital velocities relative to their centre-of-mass motion, these changes can be of order one and can be measured using the correlation timescale of the scintillation pattern at different orbital phases, as per Rickett et al. (2014). In order to study systems with less dramatic variation, more precise techniques are required. The discovery of scintillation arcs by Stinebring et al. (2001) presents such an opportunity. In some cases, these features are believed (Walker et al. 2004), or have been shown (Brisken et al. 2010), to be the result of linear or highly anisotropic groups of images on the sky at a fixed distance, giving a quadratic relation between the time delay and Doppler shift, and our analysis will assume this to be the case. The interference of these images produces a main arc, from interference between lensed images and the line-of-sight image, as well as inverted arclets from the interference of pairs of lensed images.
The curvature of the main arc, as well as that of the arclets, is inversely proportional to the square of the effective velocity of the system. For PSR J0437−4715, Reardon (2018) used a Hough transform to measure the power along arcs in the secondary spectrum, and thereby these curvatures, over several years. The annual modulation of the curvature completely encodes the orientation of the screen as well as the effective distance (defined in Sec. 2). For binary systems, the variation over the binary period will additionally depend on the pulsar distance, binary inclination, and orbital orientation. From measurements of a single screen this information is encoded in a phase and amplitude, and so cannot give all three parameters. Fortunately, prior information on sin(i) is sufficient to restrict the parameters to one of two orbits mirrored about the screen on the sky and at a fixed distance. In cases where a second screen can also be detected, which Putney & Stinebring (2006) observe in multiple systems, modeling the evolution of both screens can resolve the degeneracies and yield both the distance and inclination. Broadly, each screen has five measurable parameters: an amplitude and phase of modulation for both the Earth and pulsar orbits, and a constant from the proper motion and screen velocity. However, each screen introduces only three new unknown parameters: screen distance, orientation and velocity. In Reardon (2018), the nearly parallel screens prevent measurement of the distance. Fortunately, the detection of additional screens has allowed for distance measurement with order ten per cent uncertainty (private communication). Improvements in the curvature measurements for the individual observations should allow for further improvement. For isolated pulsars, similar techniques, using only curvature modulations from the Earth's orbit, can help constrain screen distances and orientations.
In this paper, we present a technique for measuring curvatures with greater precision by using not just the power along the main arc, but the phases and amplitudes of the main arc and inverted arclets. This is achieved by mapping the arcs into a space where they are represented by linear features, as proposed by Sprenger et al. (2021), dubbed the θ-θ transformation and discussed here in Section 3. In Section 4, we present a sample application of this method to some simulated data to show how it can improve precision on curvature measurements.
An additional advantage of the θ-θ transformation is that it can be used to solve the phase retrieval problem for pulsar scintillometry. Though much progress has been made in studying pulsar scintillometry through the dynamic and secondary spectra, the loss of phase information in the formation of the dynamic spectrum from the square modulus of the electric field imposes serious restrictions. The problem of retrieving this phase information in the context of pulsar scintillometry was first discussed by Walker & Stinebring (2005). By reconstructing the phase of the wavefield we gain information about the locations and signal delays of the individual images causing scintillation. Apart from giving information about the lensing structure in the ISM, it is hoped that this information can be used to better understand changes in the measured times of arrival caused by time evolution in the scattering medium between epochs (Walker & Stinebring 2005). In studying seven years of PSR J0613−0200 data, Main et al. (2020) find the apparent strain due to variations in scattering to be h ≈ 10^−15 at 15 nHz, where Aggarwal et al. (2019) had previously reported an unmodelled apparent signal in this pulsar in the NANOGrav 9-year dataset. Though below the current single-pulsar limit of 9.7 × 10^−15, they argue that as PTA upper limits improve these effects may limit precision.
In Section 2, we discuss a simple one-dimensional model of scintillation. Section 3 shows how θ-θ transformations can be used for phase retrieval when only the dynamic spectrum is available, and in Section 5 we demonstrate its effectiveness on simulated data.
In Section 6 we apply these methods to an observation of PSR B0834+06 from Brisken et al. (2010) using Arecibo. These data show clear inverted arclets, which lend themselves well to this style of analysis. The secondary spectrum also includes clear deviations from the one-dimensional model, as seen by the collection of power offset from the main arc near 1 ms. We show how modelling the one-dimensional structure responsible for the main arc and arclets can be used to probe this region using phase retrieval.
THEORY OF THE 1D LENS
Since our method for phase retrieval and curvature measurement is based on the assumption of a one-dimensional screen, we briefly introduce how such a structure leads to the observed spectra and how it relates to the physical properties of interest. For a pulsar located at a distance d_p, we assume some lensing structure in the ISM, localised at a distance d_l, producing a line of images of the pulsar on the sky. Any given image along the line with angular offset θ̄ imparts a geometric delay, quadratic in θ̄, and a Doppler shift, linear in θ̄ at observational wavelength λ, that depend on the pulsar, Earth and ISM velocities v_p, v_⊕ and v_ISM and on the unit vector î from the line of sight to the lensed image on the sky. It follows that for a collection of such images the time delay depends quadratically on the Doppler shift, τ = η f_D², where η is the curvature of the observed arc in the secondary spectrum and is set by the screen geometry and the effective velocity along the lens axis. Since the typical time delays and Doppler shifts in the L (1-2 GHz) and P (230-470 MHz) bands are of order μs and mHz, curvatures are naturally given in μs mHz⁻². However, in keeping with previous works we will quote curvature in s³, which fortunately converts as 1 μs mHz⁻² = 1 s³. The geometry of such a system with three images, including the unlensed image, is shown in Fig. 1.
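The explicit equations referred to in the paragraph above did not survive extraction. As a hedged sketch only, the standard small-angle relations used throughout the scintillation-arc literature take the following form; the exact weighting of the pulsar, Earth and ISM velocities and of the two distances into the effective quantities D_eff and v_eff,∥ depends on the convention adopted and is not reproduced from the original.

```latex
\tau = \frac{D_{\mathrm{eff}}\,\bar{\theta}^{\,2}}{2c}, \qquad
f_{D} = \frac{v_{\mathrm{eff},\parallel}\,\bar{\theta}}{\lambda}
\quad\Longrightarrow\quad
\tau = \eta\, f_{D}^{2}, \qquad
\eta = \frac{\lambda^{2}\, D_{\mathrm{eff}}}{2c\, v_{\mathrm{eff},\parallel}^{2}} .
```

With delays in μs and Doppler shifts in mHz, η is naturally quoted in μs mHz⁻², i.e. s³, consistent with the conversion noted above.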
In addition to these geometric effects, each image also has a complex magnification determined by the local properties of the lensing medium. We will call the magnification as a function of angular position the impulse response, as it shows the response of the lensing medium to a single coherent source.
The interference of these images produces the observed electric field at Earth. The average intensity of this field, usually over several pulses in time, as a function of time and radio frequency is known as the dynamic spectrum, whose Fourier transform is called the conjugate spectrum. Underlying the dynamic spectrum is the complex wavefield, whose square modulus is the dynamic spectrum. For consistency, we call the 2-D Fourier transform of this object the conjugate wavefield. For a one-dimensional lens, the conjugate wavefield will be zero everywhere except along the parabola given by Eq. 5, where it will be equal to the complex magnification of each image. The wavefield can also be related to the scattering time function by a Fourier transform along frequency, giving the scattering time function for each time bin. Since the intensity is the amplitude squared of the wavefield, the conjugate spectrum must be the convolution of the conjugate wavefield with its complex conjugate transpose. For a one-dimensional lens this will result in the familiar arc and inverted arclets. For more complicated lensing structures the resulting secondary spectrum may be quite different. However, if there exists a dominant linear feature on the sky, the spectrum may still be approximated as the convolution of its parabola with the entire conjugate wavefield. An example of this behaviour is seen in the millisecond feature of Fig. 8 and discussed in Section 6.
θ-θ TRANSFORMATION
Since, for our analysis, the parabolic features in the secondary spectrum arise from one-dimensional structures on the sky, there exists a space in which the features are linear, and so easier to work with. Mapping into this space was first proposed by Sprenger et al. (2021) via their θ-θ transformation, which we present here in units of Doppler shift as opposed to sky position. Here f_D and τ are the Doppler shift and delay of points in the conjugate spectrum, η is the curvature of the main arc, and θ₁ and θ₂ are the scaled sky coordinates at which the images would interfere at the given f_D and τ. The scaled θ can be converted into angles on the sky via the wavelength and the effective velocity, and we define a flux-preserving map on the conjugate spectrum accordingly. By mapping the conjugate spectrum instead of the secondary spectrum, we preserve the phase information, which allows for a coherent curvature search as well as the possibility of recovering the wavefield. To see how this is achieved, we consider a collection of images along a line on the sky with angular offsets θ̄ from the line of sight to the pulsar, where the bar denotes the true angular position of the images as opposed to their curvature-dependent position in a θ-θ map. At a fixed frequency, each of these images has both a magnification and a phase rotation that are combined into a complex magnification. Together, these magnifications give the magnification vector. A simple schematic of the geometry is shown in Fig. 1. If we define a grid in the corresponding scaled θ, then transforming into θ-θ space with the correct curvature lets us express the θ-θ spectrum as the outer product of the magnification vector with itself. This vector is then the only eigenvector of this matrix with a nonzero eigenvalue. In order to find the correct curvature and magnification vector, we fit our data by minimizing a χ² in which the model is this outer product, weighted by the noise at each point of the θ-θ spectrum for curvature η. Since the dynamic spectrum is real, the conjugate spectrum, and by extension the θ-θ spectrum, are Hermitian, and under the assumption that the noise level is constant for all points the local minima of the χ² correspond to eigenvectors scaled such that their norm squared equals the corresponding eigenvalues. Using that the local minima all satisfy this eigenvector condition, the global minimum is reached for the largest eigenvalue. Therefore, the best-fit magnification vector at a given curvature corresponds to the largest eigenvalue, and the best-fit curvature is the one whose θ-θ spectrum has the largest eigenvalue. Since our model takes the outer product of the magnification vector, the solution is not unique under phase rotations; this does not affect the curvature fit but is addressed in Section 5 when determining the wavefield. It should be noted that we have assumed that the noise in θ-θ space is white and of constant variance. In general this will not be the case, as the normalization of the θ-θ matrix scales the stationary noise of the conjugate spectrum, and the correlation of points depends on how the spectrum is sampled.
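A minimal numerical sketch of the mapping and the eigenvector step is given below. The sign convention f_D = θ₁ − θ₂ with τ = η(θ₁² − θ₂²), the nearest-index sampling of the conjugate spectrum, and the helper names are our assumptions for illustration; they are not necessarily the exact choices made in the paper.

```python
import numpy as np

def theta_theta(conj_spec, fd_axis, tau_axis, eta, theta_grid):
    """Sample a gridded conjugate spectrum C[tau, fD] onto a theta-theta matrix
    for trial curvature eta, assuming fD = th1 - th2 and tau = eta*(th1**2 - th2**2).
    The fd_axis and tau_axis arrays are assumed sorted in ascending order."""
    th1, th2 = np.meshgrid(theta_grid, theta_grid, indexing="ij")
    fd = th1 - th2
    tau = eta * (th1**2 - th2**2)
    i_fd = np.clip(np.searchsorted(fd_axis, fd), 0, len(fd_axis) - 1)
    i_tau = np.clip(np.searchsorted(tau_axis, tau), 0, len(tau_axis) - 1)
    return conj_spec[i_tau, i_fd]          # treated as Hermitian below

def best_mode(M):
    """Dominant eigenvalue and eigenvector of the theta-theta matrix; the
    eigenvector scaled so its squared norm equals the eigenvalue is the
    best-fit complex magnification vector for this curvature."""
    vals, vecs = np.linalg.eigh(M)         # eigh assumes a Hermitian matrix
    return vals[-1], np.sqrt(vals[-1]) * vecs[:, -1]
```

The model θ-θ spectrum is then the outer product of the returned vector with its own conjugate, and the best-fit curvature is the trial value whose matrix yields the largest eigenvalue.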
CURVATURE FITTING ON SIMULATED DATA
To present a proof of concept of the method, as well as the details of the procedure, we simulate the scintillation pattern of a one-dimensional screen. The simulation generates a Gaussian distribution of image positions along a line. Each of them is treated as a stationary phase point, where the combination of dispersive and geometric delays remains constant over some region on the screen at a reference frequency, with a random magnification and phase. For each time in the simulated dynamic spectrum, the phase evolution for each image over frequency is determined and then combined. As time progresses and the pulsar moves, the relative geometric delays of the images change producing the changing field. The dynamic spectrum is then calculated from the average amplitude squared of the electric field and Gaussian noise is added to each point.
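The kind of simulation described above can be sketched in a few lines. The version below is a deliberately simplified toy (a single fixed curvature across the band, Doppler shifts and magnifications drawn at random rather than from a screen model, and additive Gaussian noise); it is not the authors' simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_im = 20                                              # number of lensed images
mag  = rng.normal(size=n_im) + 1j * rng.normal(size=n_im)   # complex magnifications
fd   = rng.normal(0.0, 5e-3, n_im)                     # Doppler shifts [Hz]
eta  = 1.244                                           # arc curvature [s^3]
tau  = eta * fd**2                                     # delays from tau = eta*fD^2 [s]

t  = np.linspace(0.0, 3600.0, 200)                     # one hour of time samples [s]
nu = np.linspace(318e6, 322e6, 256)                    # observing frequencies [Hz]

# electric field as a sum over images (signs/conventions simplified for the toy)
phase = 2j * np.pi * (fd[:, None, None] * t[None, :, None]
                      + nu[None, None, :] * tau[:, None, None])
E = (mag[:, None, None] * np.exp(phase)).sum(axis=0)

dynspec = np.abs(E)**2                                 # dynamic spectrum (time x freq)
dynspec += rng.normal(0.0, 0.05 * dynspec.mean(), dynspec.shape)   # measurement noise
```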
Our simulation uses the parameters given in Table 1, where the expected curvature at 320 MHz is 1.244 s³. A one-hour, 2.5 MHz chunk of the simulation, 1/16th of the band and 1/4 of the observation time, is shown in Fig. 2.
Since the curvature of the arc evolves in frequency and the relative phases of the images evolve over time, a coherent curvature fit requires that we use only a small region of the dynamic spectrum such that the conjugate wavefield, or equivalently the scattering time function of the lens, remains relatively constant. Using smaller chunk sizes also reduces the resolution of the secondary spectrum. For small deviations from a one dimensional line of images, choosing a coarse enough resolution will cause the images to appear one dimensional and subject to our analysis. For such a chunk, curvature is measured as follows: (i) The mean subtracted dynamic spectrum is zero padded to account for the assumption of periodicity in the FFT and increase the resolution of the conjugate spectrum.
(ii) The conjugate spectrum is generated with an FFT.
(iii) Determine the grid on which θ-θ spectra will be generated. The extent of the grid is determined by the position of the peak of the most outlying arclet of interest, while the resolution is chosen in order to oversample the secondary spectrum.
(iv) Generate the θ-θ spectrum on the fixed grid of θ-θ space for a given curvature.
(v) Perform an eigenvector decomposition and save the largest eigenvalue.
(vi) Repeat steps (iv) and (v) over a range of curvatures.
(vii) Fit a parabola to the peak of eigenvalue vs. curvature. In this work, we fit this parabola using all curvatures within approximately ten per cent of the peak. We find that asymmetries in the eigenvalue-versus-curvature curve may bias the fit if too large a region is used; a minimal sketch of this scan is given below.
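As referenced in step (vii), the scan over trial curvatures can be written as a short loop. In the sketch below, `theta_theta` stands in for the θ-θ construction of Section 3 (for example the version sketched there); the ten per cent window and the helper signature are assumptions for illustration only.

```python
import numpy as np

def fit_curvature(conj_spec, fd_axis, tau_axis, etas, theta_grid, frac=0.10):
    """Largest theta-theta eigenvalue for each trial curvature, followed by a
    parabolic fit near the peak; returns the refined best-fit curvature."""
    etas = np.asarray(etas, float)
    peaks = np.empty(len(etas))
    for k, eta in enumerate(etas):
        M = theta_theta(conj_spec, fd_axis, tau_axis, eta, theta_grid)
        peaks[k] = np.linalg.eigvalsh(M)[-1]           # dominant eigenvalue
    i0 = int(np.argmax(peaks))
    keep = np.abs(etas - etas[i0]) <= frac * etas[i0]  # curvatures near the peak
    a, b, _ = np.polyfit(etas[keep], peaks[keep], 2)   # needs at least 3 points
    return -b / (2.0 * a)                              # vertex of the parabola
```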
The results of this approach on a single chunk can be seen in Fig. 2. Though fitting only requires finding the dominant eigenvalue, we include models for the θ-θ, secondary, and dynamic spectra as a sanity check. These models are created by taking the outer product of the dominant eigenvector for the best-fit curvature transformation with itself, scaled by the eigenvalue. Inverting the transformation gives us the conjugate spectrum, from which the dynamic and secondary spectra are generated in the usual fashion. Since the θ-θ model is built under the assumption that the wavefield, that is to say the locations and magnifications of images, remains constant over the chunk being analyzed, examining the model dynamic spectrum can help determine the appropriate bandwidth and duration for our chunks. When too large a chunk is selected, the model tends to accurately reproduce the brightest scintles and become less accurate further away. To see why, we consider dividing the chunk into smaller regions and expressing it as the sum of these regions zero padded to the original size. The θ-θ matrix of the whole chunk is then the sum of the θ-θ matrices of the smaller regions. Any region with bright scintles will have a larger contribution to this combined matrix and force the result towards its response function.
Figure 2. Curvature measurement on a single chunk of the simulated data. The shown θ-θ spectra are for the best-fit curvature, and the models are generated from its dominant eigenvector. All models use the same colour scale as their respective data. θ-θ spectra have been flipped vertically for readability, but the θ₁ = θ₂ diagonal is truly the main diagonal of the matrix. The red line on the secondary spectra plots shows the best-fit curvature.
If the response function evolves over the chunk, then other sections will be less well recovered. Since each θ-θ curvature fit is for only a portion of the dynamic spectrum, we can combine multiple chunks to further improve precision. From Equation 5, η ∝ λ², and so we can fit the curvatures to some reference frequency. In order to estimate the error on the measured curvatures we take the mean and standard error from the curvatures of all chunks at the same frequency. In the case of our simulated data, we have four measurements at each curvature. Fitting to our reference frequency of 320 MHz (Fig. 3) gives η_320 = 1.2449 ± 0.0007 s³, which differs from our expected value of 1.244 s³ by less than a tenth of a percent. Using a Hough transform method, as per Reardon (2018), applied to the same data yields η_320 = 2.3 ± 0.03 s³. The likely cause of the larger bias here is the asymmetry of the lens. As seen in Fig. 2, there is additional power inside the best-fit arc for arclets at negative f_D, and outside the arc for those with positive f_D. This indicates that there is more power in the images at negative f_D in the conjugate wavefield. Since the inside left of the parabola in the secondary spectrum is due to the interference of these brighter images with each other, it will dominate over features outside the arc on the right. As a result, there is a tendency to pull the arc inward and measure a higher curvature. For cases where the arc is more symmetric or narrower, the Hough transform method has been seen to accurately measure curvatures. Our new coherent curvature fit reduces the curvature estimation error by more than an order of magnitude, while also reducing the bias due to asymmetric arcs, which will aid in future measurements of pulsar physical parameters such as mass and distance using the methods described in Reardon et al. (2020) and van Kerkwijk et al. (2011).
PHASE RETRIEVAL ON SIMULATED DATA
Once we have a measurement of the reference curvature, we can recreate the wavefield phase. Phase retrieval is performed on the same sized chunks of data as for curvature fitting. For each piece of the data, the best-fit curvature for the central frequency from the curvature model is used to generate the θ-θ matrix, for which the response vector is determined using eigenvalue decomposition. The wavefield is then determined by using the inverse θ-θ map on the response vector with θ₂ = 0 to place images on the main parabola. However, since the eigenvectors are not unique under a constant phase rotation, the recovered phases of these chunks cannot be directly combined. In order to solve this problem, we perform the reconstruction on overlapping chunks and rotate the phases of the recovered wavefields to agree within the overlapping regions; we refer to this as the 'mosaic'. For simplicity, we choose our chunks such that they overlap halfway in time and frequency with the adjacent chunks. Starting from the first chunk in time and frequency, we first apply a Hann window to each chunk, since the edges of the recovery are less reliable. The phase correction applied to the n-th chunk is chosen so that its windowed wavefield agrees, over the overlapping region, with the current estimate of the wavefield built from the chunks already processed. The windowing gives a higher weighting to points away from the edges of chunks to reduce edge effects. We then update the estimate by adding the rotated chunk to the appropriate region. Because of the Hann window, the final field is the weighted mean of rotated chunks.
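Because the equation for the phase correction did not survive extraction, the sketch below uses one common estimator, the argument of the overlap inner product between the running mosaic and the new Hann-windowed chunk; this specific choice, and the convention that chunks are stored as full-size arrays that are zero outside their own region, are our assumptions.

```python
import numpy as np

def align_and_add(mosaic, chunk):
    """Rotate the phase of `chunk` to agree with `mosaic` over their overlap,
    then accumulate it. Both are full-size complex arrays; `chunk` is the
    Hann-windowed recovered wavefield of one chunk, zero elsewhere."""
    overlap = (mosaic != 0) & (chunk != 0)
    if np.any(overlap):
        inner = np.vdot(chunk[overlap], mosaic[overlap])   # sum of conj(chunk)*mosaic
        phi = np.angle(inner)
    else:
        phi = 0.0                       # first chunk: overall phase is arbitrary
    mosaic += chunk * np.exp(1j * phi)
    return mosaic
```

Summing Hann-windowed, half-overlapping chunks in this way makes the final field a weighted mean of the rotated chunks, as described above.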
The results of this approach are shown in Fig. 4. The model dynamic spectrum captures the structure of the data nicely, but due to the unknown phase rotation of the first chunk there is a constant shift in the phase model. For the final conjugate wavefield model, we use that the square root of the dynamic spectrum gives another measurement of the wavefield amplitude, and reapply those amplitudes to our recovered field. The noise properties of the recovered amplitudes and phases, as well as their dependence on measurement noise, are left for a future work.
PSR B0834+06
A useful test of our methods on real data, from which we can also investigate interesting science questions, is provided by the 2005 observations of PSR B0834+06 from Brisken et al. (2010) in a 32 MHz band centered at 316.5 MHz for approximately 110 min. For this test, we restrict ourselves to the Arecibo data, though an extension of the θ-θ transformation technique for VLBI is under development. The secondary spectrum from the lowest 4 MHz of the observation is shown in Fig. 5. The presence of clear inverted arclets makes this an excellent candidate for θ-θ mapping, while the island of power near 1 ms allows us to examine how non-linear features impact recovery. Brisken et al. (2010) show that this feature does not lie on the main parabola, and can be mapped to a different linear screen on the sky. Since the eigenvector decomposition of θ-θ matrices assumes a single curvature for the screen, we must discount this feature from our initial model. As most of the power in the main arc is below 512 μs, we remove the island by rebinning the dynamic spectrum by a factor of four in frequency, averaging each consecutive group of four channels. As before, we divide the dynamic spectrum into sections and perform a curvature measurement on each one, with a characteristic result shown in Fig. 6. Using only 0.125 MHz of the band and 10.5 min of the data, we are able to measure a curvature to almost one per cent. For this chunk size, we can make more than 2500 curvature measurements from this dataset. These measurements are treated as independent, as this approach yields a reasonable error estimate for the simulated data. The results of these individual measurements are combined in Fig. 7 by averaging all curvature measurements for chunks at the same frequency and using the standard error; fitting their curvature evolution gives η_320 = 0.5422 ± 0.0003 s³ with a reduced chi-squared value of χ²_red = 0.92 with 255 degrees of freedom.
Using this curvature result, we can now perform the mosaic phase retrieval described in Section 5. Since our rebinning of the dynamic spectrum removed the millisecond feature, we interpolate our recovered field to the original resolution before applying amplitudes from the dynamic spectrum. Effectively, this deconvolves the conjugate spectrum by the main arc and allows us to see features that were not part of the original one-dimensional model. Figure 8 shows this deconvolved spectrum. The narrow main arc, including features above the delay cutoff imposed by our rebinning, suggests a highly one-dimensional structure. Brisken et al. (2010) put the axial ratio of this main arc at > 27 from their VLBI analysis. However, several deviations from this structure, such as the millisecond feature and an additional feature offset from the main arc near −24 mHz or 300 μs, are also visible after using the amplitude measurement from the dynamic spectrum. The millisecond feature is particularly well recovered, with multiple distinct images that can be seen to evolve with frequency in Fig. 9. The brightening of the central images in the lower left panel may indicate that multiple images are merging as frequency increases, though we leave a detailed analysis for future work.
RAMIFICATIONS
In this work, we present a technique for precisely measuring the arc curvature in pulsar secondary spectra via the θ-θ transformation. By making use of the full phase information of the conjugate spectrum, as well as the shape of inverted arclets, we are able to improve on traditional methods by orders of magnitude. Measurement of arc curvatures, or equivalently the scintillation timescale, provides a probe for the transverse velocities of the pulsar and interstellar screen. These, in turn, have been used previously to measure inclinations, including their sense, in the double pulsar (PSR J0737−3039) by Rickett et al. (2014) and in PSR J0437−4715 by Reardon (2018). Other orbital parameters, such as the advance of periastron, are also measurable in this way (Reardon et al. 2019).
Measuring inclinations is expected to be particularly interesting for certain black widow and redback systems which may have exceptionally high masses. van Kerkwijk et al. (2011) used light curve modelling of the companion of PSR B1957+20 to infer a pulsar mass of 2.40 ± 0.12 M⊙. However, difficulties in modelling the companion atmosphere lead to systematic uncertainty in the inclinations measured this way, which allow the mass to be as low as 1.66 M⊙. Due to the low relative orbital velocity of the system, traditional methods of measuring the scintillation timescale or curvature changes over the orbit have not yielded results. A θ-θ style analysis may be able to detect these variations and provide an inclination. Alternatively, by including information about sin(i), the variations in curvature over the Earth and pulsar orbits can be used to measure pulsar distances. If one were to improve distance measurements to less than the wavelength of a gravitational wave, PTAs could make use of the 'pulsar term' in the signal, which can improve signal strength by a factor of two and lead to order of magnitude improvements in localizing sources (Corbin & Cornish 2010). In addition, this method provides an optimal solution for one-dimensional phase retrieval or holography of the interstellar lens. This provides the relative phases, amplitudes and delays of the scattered images. Variations in the scattering delay can be observed on the scale of months to years when measured from the secondary spectrum, and may be underestimated when derived from scintillation bandwidth alone (Main et al. 2020). Directly determining the delay for each image may then be an important tool for removing systematic errors from PTAs. Furthermore, performing phase retrieval on VLBI data allows for imaging of the scattering medium, as seen in Brisken et al. (2010) and Pen et al. (2014). Previously, phase retrieval has been achieved through iteratively adding images to a model conjugate wavefield (Walker & Stinebring 2005), coherently stacking inverted arclets (Pen et al. 2014), and cyclic spectroscopy (Walker et al. 2013). However, these methods have only been successfully applied to a few systems. θ-θ methods will be useful in expanding the number of systems that can be analysed. Though the basic approach of θ-θ is based around the assumption of a one-dimensional collection of images, it can still provide insight in more complicated cases. For systems similar to B0834+06, where the dominant linear screen is accompanied by a second offset feature, deconvolving the secondary spectrum using our one-dimensional model can still isolate images from other structures. For two-dimensional collections of images with high aspect ratios, using smaller chunk sizes can reduce our resolution in the secondary spectrum and may produce an effectively one-dimensional problem. However, for more complicated lensing examples additional techniques will be required.
Figure 8 caption: The panel on the left shows the whole conjugate wavefield, including the millisecond feature, while the panel on the right shows a zoomed-in version at 300 μs where a second group of images can be seen coming off the main arc. Since the dynamic spectrum was rebinned before performing the θ-θ analysis, only the region below 512 μs, marked by a black line on the left, was originally modelled. Interpolation to the original resolution, and applying amplitudes from the square root of the dynamic spectrum, allows us to see nonlinear features and those above 512 μs.
| 2021-01-13T02:16:06.273Z | 2021-01-12T00:00:00.000 | {
"year": 2021,
"sha1": "6facc3ae3f01045adb92f54b75f35ce4b6e2221c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2101.04646",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6facc3ae3f01045adb92f54b75f35ce4b6e2221c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
247938153 | pes2o/s2orc | v3-fos-license | Validation of a quantitative lateral flow immunoassay (LFIA)-based point-of-care (POC) rapid test for SARS-CoV-2 neutralizing antibodies
With the widespread use of coronavirus disease 2019 (COVID-19) vaccines, a rapid and reliable method to detect SARS-CoV-2 neutralizing antibodies (NAbs) is extremely important for monitoring vaccine effectiveness and immunity in the population. The purpose of this study was to evaluate the performance of the RapiRead™ reader and the TestNOW™ COVID-19 NAb rapid point-of-care (POC) test for quantitative measurement of antibodies against the spike protein receptor-binding domain of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in different biological matrices compared to chemiluminescence immunoassay (CLIA) methods. Ninety-four samples were collected and analyzed using a RapiRead™ reader and TestNOW™ COVID-19 NAb kits for detecting neutralizing antibodies, and then using two CLIAs. The data were compared statistically using the Kruskal-Wallis test for more than two groups or the Mann-Whitney test for two groups. Specificity and sensitivity were evaluated using a receiver operating characteristic (ROC) curve. Good correlation was observed between the rapid lateral flow immunoassay (LFIA) test system and both CLIA methods. RapiRead™ reader/TestNOW™ COVID-19 NAb vs. Maglumi: correlation coefficient (r) = 0.728 for all patients; r = 0.841 for vaccinated patients. RapiRead™ reader/TestNOW™ COVID-19 NAb vs. Mindray: r = 0.6394 in all patients; r = 0.8724 in vaccinated patients. The time stability of the POC serological test was also assessed considering two times of reading, 12 and 14 minutes. The data revealed no significant differences. The use of a RapiRead™ reader and TestNOW™ COVID-19 NAb assay is a quantitative, rapid, and valid method for detecting SARS-CoV-2 neutralizing antibodies and could be a useful tool for screening studies of SARS-CoV-2 infection and assessing the efficacy of vaccines in a non-laboratory context.
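For readers who want to reproduce the style of statistical comparison summarized above (two-group comparison, correlation between methods, and ROC evaluation), a minimal sketch is given below; the arrays, group sizes and values are synthetic and purely illustrative, not the study data.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
pos = rng.lognormal(mean=5.0, sigma=0.8, size=64)   # hypothetical NAb titres, positives
neg = rng.lognormal(mean=1.0, sigma=0.8, size=30)   # hypothetical pre-pandemic sera
clia = pos * rng.normal(1.0, 0.15, pos.size)        # hypothetical paired CLIA values

u_stat, p_mw = stats.mannwhitneyu(pos, neg)         # Mann-Whitney test for two groups
r, p_r = stats.pearsonr(pos, clia)                  # correlation between the two methods

labels = np.r_[np.ones(pos.size), np.zeros(neg.size)]
scores = np.r_[pos, neg]
auc = roc_auc_score(labels, scores)                 # area under the ROC curve
```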
Introduction
Coronavirus disease 2019 (COVID-19) represents the largest public health emergency in the last two years. With the spread of COVID-19 vaccines, it has become of central importance for laboratories to assess immunity and protection against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and the use of antibody testing is an essential tool in the vaccination campaign to promote public health [1]. So far, data on the duration of immunity generated by SARS-CoV-2 infection or vaccination are still limited. More information on the response to vaccination could help to evaluate its efficacy and to determine whether booster shots are needed. A large number of different methods and technical approaches have been devised to measure the immune response and antibody kinetics to SARS-CoV-2 infection [2][3][4]. To evaluate the efficacy of COVID-19 vaccines and to monitor the level of protective neutralizing antibodies, it is necessary to develop a diagnostic tool that is easy to use and at the same time is accurate and provides useful information about the duration of immunity.
Lateral flow immunoassay (LFIA)-based point-of-care (POC) serological tests have been developed to detect anti-SARS-CoV-2 antibodies. In contrast to chemiluminescent serum tests, POC serological tests do not require technical personnel or laboratory equipment, are inexpensive, and provide results quickly. Furthermore, the risks associated with sampling and specimen preparation are greatly reduced while retaining high sensitivity and high specificity [5,6].
Commercially available enzyme immunoassays can detect neutralizing antibodies with high diagnostic accuracy, whereas measuring neutralizing antibodies in vitro requires highly laborious assays performed in biosafety facilities and is limited to research institutions [7,8].
POC testing can be performed in a variety of settings, including physicians' offices, emergency rooms, urgent care facilities, school clinics, and pharmacies [9].
However, the benefit of these rapid serological POC LFIA-based tests has not been widely studied. Among the anti-SARS-CoV-2 antibodies detected in binding assays, neutralizing antibodies (NAbs) that block the interaction between SARS-CoV-2 and its human receptor ACE2 (angiotensin-converting enzyme 2) are of particular importance with regard to vaccination. A robust and rapid method for detection of SARS-CoV-2 neutralizing antibodies can be widely employed to investigate SARS-CoV-2 infection and assess the efficacy of vaccines.
The RapiRead reader is currently the world's smallest LFIA reader system for measuring spike protein receptor binding domain (S-RBD) antibody levels, and it shows good correlation with the World Health Organization International Standard (WHO-IS) for anti-SARS-CoV-2, with results given in binding antibody units (BAU).
The aim of this study was to evaluate the performance of the RapiRead™ reader and the TestNOW™ COVID-19 NAb test, using different biological matrices, and to compare the data obtained using these assays with those obtained using chemiluminescence immunoassay (CLIA) methods for SARS-CoV-2 S-RBD antibody detection.
Patients
Ninety-four samples were analyzed in this study. Serum samples were collected from 25 patients with SARS-CoV-2 infection confirmed by RT-PCR and from 39 vaccinated health workers from Tor Vergata University Hospital of Rome who had received the second inoculation of the Pfizer vaccine at least 21 days earlier. For each individual, peripheral blood, EDTA plasma, and serum samples were collected. Thirty presumed-negative serum samples, collected before the SARS-CoV-2 outbreak and stored at −80 °C, were used as negative controls.
The study was conducted in accordance with the guidelines of the local ethics committee (approval number: R.S.44.20) and the Helsinki Declaration, as revised in 2013.
TestNOW®-COVID-19 NAb
TestNOW ® -COVID-19 NAb (Affimedix Inc., CA, USA) uses the principle of immunochromatography for detection of NAbs. When the sample migrates through the membrane, the conjugate, consisting of colored RBD (the target) and colloidal gold, forms a complex with specific NAbs against SARS-CoV-2, if present in the sample. This complex migrates further along the membrane to the "T" (test) zone, where mouse anti-human IgG antibodies are immobilized on the nitrocellulose membrane on the cassette. There, the conjugate is "captured" by anti-human IgG antibodies bonded to the membrane, leading to the formation of a colored band, indicating a positive test result. The intensity of the colored band in the test line area depends on the concentration of NAbs present in the sample. A built-in control line (C) will always appear in the test window when the test has performed properly, regardless of the presence or absence of NAbs against SARS-CoV-2 in the specimen.
RapiRead™ reader
The RapiRead reader (Affimedix Inc., CA, USA) is utilized for reading the intensity of the colored band. For quantitative diagnostics, the intensities of the test lines are compared to a calibration standard and converted to an analyte concentration value. The instrument measures reflective optical density by taking multiple images through an LED (light-emitting diode) camera and recording reflectance of the test strip surface. The reader uploads the calibration file wirelessly using an RFID (radio frequency identification) card and can operate in stand-alone mode without a computer or external power source. The cutoff of TestNOW ® is ~ 30 BAU/mL, depending on the specific limit of detection (LOD) of the particular production lot. According to the manufacturer, values < 30 BAU/mL are considered negative; values of 30-250 BAU/mL indicate low protection; values of 250-500 BAU/mL indicate medium protection; and values > 500 BAU/mL indicate high protection.
Mindray SARS-CoV-2 S-RBD IgG
Mindray SARS-CoV-2 S-RBD IgG (Mindray S-RBD IgG) is a two-step CLIA for quantitative determination of SARS-CoV-2 S-RBD IgG in human serum or plasma, performed on the fully automated Mindray CL 1200i analytical system (Mindray Bio-Medical Electronic Co Ltd, Shenzhen, China). According to the manufacturer, the cutoff value is 12.16 BAU/mL, and the linear range is 3.65-1216 BAU/mL. Samples with values over 1216 BAU/mL were diluted 1:10 before measurement, allowing extension of the dynamic range of analysis to 12,160 BAU/mL.
Maglumi SARS-CoV-2 S-RBD IgG
Maglumi SARS-CoV-2 S-RBD IgG (Snibe S-RBD IgG) is an indirect CLIA for in vitro quantitative determination of IgG antibodies to SARS-CoV-2 S-RBD, performed using a fully automated Maglumi 800 analytical system (Snibe Diagnostic, Shenzhen, China). According to the manufacturer, the cutoff value is 4.33 BAU/mL, and the linear range is 0.78-433 BAU/mL. Samples with values over 433 BAU/mL were diluted and measured at 1:10 or 1:20 (if necessary), allowing extension of the dynamic range of analysis to 8660 BAU/mL.
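For readings above the linear range, the dilute-and-remeasure procedure described for both CLIA platforms amounts to multiplying the diluted measurement by the dilution factor. A minimal sketch, with hypothetical helper names and the Maglumi limits quoted above used as defaults:

```python
def corrected_concentration(measured_bau: float, dilution_factor: int = 1,
                            linear_max: float = 433.0) -> float:
    """Report a dilution-corrected concentration in BAU/mL.

    measured_bau   : value read on the (possibly diluted) sample
    dilution_factor: 1 for neat samples, 10 or 20 for diluted ones
    linear_max     : upper end of the assay's linear range
                     (433 BAU/mL for Maglumi, 1216 BAU/mL for Mindray)
    """
    if measured_bau > linear_max:
        raise ValueError("Reading above the linear range; re-test at a higher dilution.")
    return measured_bau * dilution_factor

# A sample diluted 1:10 that reads 380 BAU/mL is reported as 3800 BAU/mL,
# extending the effective dynamic range as described in the text.
print(corrected_concentration(380.0, dilution_factor=10))
```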
Statistical analysis
Descriptive analyses were performed, with measures of central tendency and dispersion for continuous variables and frequency distributions for qualitative variables. In the case of normally distributed data, values were represented by the mean ± standard deviation, and ANOVA with the Bonferroni post hoc test was used to determine the significance of differences between more than two groups; if only two groups were present, Student's t-test was used. In the case of non-normally distributed data, values were represented as the median and percentiles, and the variables were compared using the Kruskal-Wallis test for more than two groups or the Mann-Whitney test for two groups. The Shapiro-Wilk test was used to assess the normality of the data, with a 95% confidence interval (CI).
Receiver operating characteristic (ROC) curve analysis was used to determine the specificity and sensitivity relative to the cutoff suggested by the manufacturer.
The statistical significance level established for all tests performed was p < 0.05.
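The statistical workflow described above (normality check, non-parametric group comparison, ROC analysis) can be reproduced with standard Python libraries. The following is a generic sketch with synthetic placeholder arrays rather than the study data; the function calls are standard scipy and scikit-learn APIs.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_curve, auc

# Placeholder antibody titres (BAU/mL) for three matrices from the same subjects.
blood  = np.array([45.0, 120.0, 310.0, 800.0, 1500.0])
serum  = np.array([50.0, 115.0, 305.0, 820.0, 1480.0])
plasma = np.array([48.0, 118.0, 300.0, 810.0, 1490.0])

# Shapiro-Wilk normality check (significance level 0.05).
print(stats.shapiro(serum))

# Kruskal-Wallis for more than two groups; Mann-Whitney U for two groups.
print(stats.kruskal(blood, serum, plasma))
print(stats.mannwhitneyu(blood, serum))

# ROC analysis against known negative/positive status (1 = post-vaccination/infection).
labels = np.array([0, 0, 1, 1, 1])
fpr, tpr, thresholds = roc_curve(labels, serum)
print("AUC =", auc(fpr, tpr))
```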
Results
In our study, RapiRead™ reader/TestNOW™ COVID-19 NAb data were compared with those obtained by two CLIA methods for SARS-CoV-2 S-RBD antibody detection using different biological matrices (peripheral blood, EDTA plasma samples, and serum samples).
The scatter plot in Figure 1A shows good correlation between the Affimedix rapid test and both CLIA methods, Snibe and Mindray. Different biological matrices were compared, and finger-prick blood, serum, and plasma samples showed no significant difference. Data were sorted from the lowest to the highest value, from 0 to 2500 BAU/mL. Figure 1B shows an enlargement of the first 20 samples, showing a good alignment up to about 500 BAU/mL.
Since good correlation was observed in the overall set of samples, it was evaluated whether linearity was maintained in the serum matrix between the RapiRead™/TestNOW™ system and the other two platforms, evaluated individually. Figure 2 shows that there was a strong correlation between the RapiRead™ reader/TestNow™ and CLIA methods when testing serum samples. The data were evaluated separately for all of the patients (n = 94) and just the vaccinated patients (n = 39). A significant correlation was observed between RapiRead™ reader/TestNow™ and Snibe Maglumi, with a correlation coefficient (r) of 0.728 for all of the patients and 0.841 for the vaccinated patients ( Fig. 2A and B).
Also, the relationship between TestNow™ and Mindray 1200i showed a significant correlation, with an r-value of 0.6394 for all of the patients and 0.8724 for the vaccinated patients ( Fig. 2C and D). Furthermore, the time stability of the POC serological test reader procedure was assessed. The manufacturer recommends reading the test results within 15 min. In this study, we considered two times of reading: 12 and 14 min. As shown in Figure 3, the correlation between two different reading times (12 and 14 min) with different matrices from the same individual (finger-prick blood, serum, and plasma) showed no significant differences, with correlation coefficients of 0.9994, 0.9996, and 0.9997 (p < 0.0001), respectively ( Fig. 3A-C). Figure 3D illustrates two samples with readings taken at time points after 14 min. The two samples with readings up to 42 min have coefficients of variation of 2.1% and 2.3% respectively, confirming the stability of the reading beyond the times recommended by the manufacturer.
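The method-comparison statistics reported here (correlation coefficients between platforms and coefficients of variation between reading times) can be computed as in the sketch below. The values are synthetic placeholders, and the choice of Spearman rank correlation is an assumption, since the correlation type is not stated in the text.

```python
import numpy as np
from scipy import stats

# Placeholder paired titres (BAU/mL) from the POC reader and a CLIA platform.
poc  = np.array([40.0, 150.0, 420.0, 900.0, 2100.0])
clia = np.array([35.0, 170.0, 390.0, 950.0, 1900.0])

r, p = stats.spearmanr(poc, clia)   # rank correlation, robust to non-normal data
print(f"r = {r:.3f}, p = {p:.4f}")

# Coefficient of variation between repeated readings of the same cassette.
readings = np.array([812.0, 798.0, 805.0])   # e.g. readings at 12, 14 and later minutes
cv = 100.0 * readings.std(ddof=1) / readings.mean()
print(f"CV = {cv:.1f}%")
```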
Lastly, Figure 4 shows a receiver operating characteristic (ROC) analysis used to assess specificity and sensitivity with respect to the cutoff suggested by the manufacturer. Data obtained using serum samples, including the 30 pre-pandemic control samples, were evaluated. The ROC curve showed that the sensitivity and specificity were both 100%, with an area under the curve (AUC) value of 1.
Discussion
With the spread of SARS-CoV-2, rapid serological tests have been widely applied for detection and quantitation of antibodies [7]. Hundreds of point-of-care tests (POCTs) have been developed and are commercially available. Among the immunoassays, LFIAs are the fastest and most convenient tests, usually requiring only 15 min to complete. They can be performed by a professional, either in a laboratory or at a remote site, and can therefore complement existing NAb tests. In this study, we evaluated TestNOW™ COVID-19 NAb to assess its detection capability and overall performance. The tool proved to be a quick and easy way to detect neutralizing antibodies that block the interaction between the SARS-CoV-2 virus and ACE2, showing good correlation with the chemiluminescence tests across the different sample types from the same individuals: finger-prick blood, serum, and plasma. An additional value of this new device is a direct readout in BAU/mL, the international standard unit following NIBSC (National Institute for Biological Standards and Control, UK) [10] or WHO-IS guidelines [11], which allows immediate comparison with other methods.
According to the manufacturer, measurements should be taken after an incubation time of 15 min, but we observed that acceptable results could be achieved over a longer range of times. The data did not show any significant difference between readings taken at 12 or 14 min. This wider range would improve the tool performance, guarantee the stability of results, and simplify the workflow.
Finally, ROC analysis confirmed the manufacturer's cutoff, with optimal sensitivity and specificity of 100%. Unfortunately, this study was limited by a small sample size, and the cutoff values should be confirmed with a larger cohort of patients in future studies.
The results demonstrate that TestNOW™ COVID-19 NAb is a quantitative, valid, rapid, and simple method to detect SARS-CoV-2 NAb levels that could be employed widely for screening studies of SARS-CoV-2 infection and for assessing the efficacy of vaccines. The ability to insert the lateral flow cartridge into the instrument and obtain a quantitative readout can be used to complement its use as a stand-alone assay for measuring antibody levels in non-laboratory settings such as workplaces, hospitals, residential facilities, schools, or other locations where a rapid result is needed. | 2022-04-05T06:22:54.636Z | 2022-04-02T00:00:00.000 | {
"year": 2022,
"sha1": "b423b69c96251918b4d5346191f8e81e0b3f2629",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00705-022-05422-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c9181a16247fbcf1b512595aa953c0b39f849399",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
203836244 | pes2o/s2orc | v3-fos-license | Probing fragmentation and velocity sub-structure in the massive NGC 6334 filament with ALMA
Herschel surveys of Galactic clouds support a paradigm for low-mass star formation in which dense filaments play a crucial role. The detailed fragmentation properties of star-forming filaments remain poorly understood, however, and the validity of the filament paradigm in the high-mass regime is still unclear. To investigate the density/velocity structure of the filament in the high-mass star-forming region NGC6334, we conducted ALMA observations in the 3mm continuum and the N2H+(1-0) line at ~3arcsec resolution. The filament was detected in both tracers. We identified 26 cores at 3mm and 5 velocity-coherent fiber-like features in N2H+ within the filament. The typical length of, and velocity difference between, the fiber-like features of the NGC6334 filament are reminiscent of the properties for the fibers of the low-mass star-forming filament B211/B213. Only 2 or 3 of the 5 velocity-coherent features are well aligned with the filament and may represent genuine, fiber sub-structures. The core mass distribution has a peak at ~10Msun. They can be divided into 7 groups of cores, closely associated with ArTeMiS clumps. The projected separation between cores and the projected spacing between clumps are roughly consistent with the effective Jeans length in the filament and a physical scale of about 4 times the filament width, respectively, suggesting a bimodal filament fragmentation process. Despite being one order of magnitude denser and more massive than the B211/B213 filament, the NGC6334 filament has a similar density/velocity structure. The difference is that the cores in NGC6334 appear to be an order of magnitude denser and more massive than the cores in Taurus. This suggests that dense filaments may evolve and fragment in a similar manner in low- and high-mass star-forming regions, and that the filament paradigm may hold in the intermediate-mass (if not high-mass) star formation regime.
Introduction
Herschel imaging observations of galactic molecular clouds reveal an omnipresence of filamentary structures and suggest that filaments dominate the mass budget of the dense molecular gas where stars form (André et al. 2010; Molinari et al. 2010; Hill et al. 2011; Schisano et al. 2014; Könyves et al. 2015). At least in the nearby clouds of the Gould Belt, detailed studies of the radial column density profiles have found a common inner filament width of ∼0.1 pc, with a dispersion of a factor ≲2, when averaged over the filament crests (Arzoumanian et al. 2011; Palmeirim et al. 2013; Koch & Rosolowsky 2015). Furthermore, most of the prestellar cores identified with Herschel are found to be embedded within such filamentary structures, showing that dense molecular filaments are the main sites of solar-type star formation (e.g., Könyves et al. 2015, 2019; Marsh et al. 2016). Overall, the Herschel findings in nearby clouds support a filament paradigm for solar-type star formation in two main steps (cf. André et al. 2014, 2017; Inutsuka et al. 2015): first, multiple large-scale compressions of interstellar material in supersonic flows generate a pervasive web of ∼0.1-pc wide filaments in the cold interstellar medium (ISM); second, the densest filaments fragment into prestellar cores by gravitational instability near or above the critical mass per unit length of nearly isothermal gas cylinders, M_line,crit = 2c_s²/G ∼ 16 M⊙ pc⁻¹, where c_s ∼ 0.2 km s⁻¹ is the isothermal sound speed for molecular gas at T ∼ 10 K. (The final data used in the paper (FITS) are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/632/A83.)
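As a quick numerical check of the critical line mass quoted above, the short Python snippet below evaluates M_line,crit = 2c_s²/G for c_s = 0.2 km s⁻¹ and converts it to solar masses per parsec; the constants are standard values, and the small offset from the quoted ∼16 M⊙ pc⁻¹ simply reflects the adopted temperature and mean molecular weight.

```python
# Numerical check of the critical line mass of an isothermal cylinder,
# M_line,crit = 2 c_s^2 / G, for c_s = 0.2 km/s (molecular gas at ~10 K).
G      = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
c_s    = 0.2e3       # isothermal sound speed [m/s]
M_SUN  = 1.989e30    # solar mass [kg]
PARSEC = 3.086e16    # parsec [m]

m_line_crit = 2.0 * c_s**2 / G               # [kg/m]
print(m_line_crit * PARSEC / M_SUN)          # ~19 Msun/pc, of order the quoted ~16 Msun/pc
```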
Since molecular filaments are also known to be present in other galaxies (cf. Fukui et al. 2015), this paradigm may potentially have implications for star formation on galaxy-wide scales. The star formation efficiency in dense molecular gas is indeed observed to be remarkably uniform over a wide range of scales from pc-scale filaments and clumps to entire galactic disks (Gao & Solomon 2004; Lada et al. 2010, 2012; Shimajiri et al. 2017; Zhang et al. 2019), with possible deviations in extreme environments, such as the central molecular zone (Longmore et al. 2013). Assuming that all star-forming filaments have similar inner widths, as seems to be the case in nearby clouds, it is argued that the microphysics of filament fragmentation into prestellar cores may ultimately be responsible for this quasi-universal star formation efficiency (Shimajiri et al. 2017).
The validity and details of the two-step filament paradigm remain controversial, however, especially beyond the Gould Belt and in high-mass star-forming clouds. In particular, an alternative scenario is proposed based on the notion of global hierarchical cloud collapse (e.g. Vázquez-Semadeni et al. 2019), which is especially attractive in the case of high-mass star formation to account for the structure of strongly self-gravitating hub-filament systems, where a massive cluster-forming hub is observed at the center of a converging network of filaments (Myers 2009;Peretto et al. 2013). In this alternative scenario, most if not all filaments would be generated by gravitational effects, as opposed to large-scale compression, and would represent accretion flows onto dense hubs.
The evolution and detailed fragmentation manner of star-forming filaments are also poorly understood. In particular, the mere existence of massive, ∼0.1-pc wide filaments with masses per unit length M_line that exceed the critical line mass of an isothermal filament, M_line,crit, by one to two orders of magnitude is a paradox. Indeed, such filaments may be expected to undergo rapid radial contraction into spindles before any fragmentation into prestellar cores (cf. Inutsuka & Miyama 1997). A possible solution for this paradox is that massive filaments accrete background cloud material while evolving (cf. Palmeirim et al. 2013; Shimajiri et al. 2019), and that this accretion process drives magneto-hydrodynamic (MHD) waves, generating sub-structure within dense filaments, and leading to a dynamical equilibrium (Hennebelle & André 2013), where M_line approaches the virial mass per unit length M_line,vir = 2σ_1D²/G (Fiege & Pudritz 2000), where σ_1D is the one-dimensional velocity dispersion. The detection through molecular line observations of velocity-coherent fiber-like sub-structure within several nearby supercritical filaments (Hacar et al. 2013, 2018) may possibly be the manifestation of such a process. The physical origin of observed fibers is nevertheless still under debate (Tafalla & Hacar 2015; Smith et al. 2014; Clarke et al. 2017, 2018), and it is not yet clear whether dense molecular filaments in massive star-forming regions have similar characteristics to those observed in nearby clouds. Our recent APEX/ArTéMiS 350 µm study of the massive star-forming complex NGC 6334 showed that the main filament of the complex has an observed line mass of ∼1000 M⊙ pc⁻¹, consistent within uncertainties with the estimated virial mass per unit length of ∼500 M⊙ pc⁻¹, and an inner width of ∼0.15 ± 0.05 pc all along its length, within ∼50% of the typical inner filament width observed with Herschel in nearby clouds. NGC 6334 is a very active star-forming region at a distance d ∼ 1.7 kpc, which contains 150 OB stars and more than 2000 young stellar objects (Persi & Tapia 2008; Russeil et al. 2013; Willis et al. 2013; Tigé et al. 2017), for a total gas mass of ∼2.2 × 10⁵ M⊙. Here, thanks to the high angular resolution and sensitivity of Atacama Large Millimeter Array (ALMA) data at 3 mm, we investigate the density and velocity sub-structure of the massive NGC 6334 filament and compare the results with those obtained for nearby, lower-mass star-forming filaments.
The paper is organized as follows. In Sect. 2, we describe our ALMA observations of NGC 6334 in both the N 2 H + (1-0), HC 3 N(10-9), HC 5 N(36-35), CH 3 CCH(6 0 -5 0 ), H 2 CS(3 1,2 -2 1,1 ) lines and the 3.1 mm continuum. In Sect. 3, we show the spatial distributions of the detected molecular lines and continuum emission. We also analyze the velocity features observed in N 2 H + (1-0) emission and extract dense structures such as compact cores and fiber-like components. In Sect. 4, we discuss the evidence of a bimodal fragmentation pattern in the NGC 6334 filament, emphasize the presence of both unusually massive cores and fiber-like velocity-coherent components in the filament, and comment on the possible origin of these multiple velocity components. Our conclusions are summarized in Sect. 5.
ALMA observations of the NGC 6334 filament
We carried out ALMA Cycle 3 observations in Band 3 toward NGC 6334 with both the 12 m antennas (C36-2 configuration) and the Atacama Compact Array (ACA) 7 m antennas, as part of project 2015.1.01404.S (PI: Ph. André). We imaged the main filament in the NGC 6334 region using a 17-pointing mosaic with the 12 m array and a 8-pointing mosaic with the ACA array. The N 2 H + (1-0) line was observed in narrow-band mode at a frequency resolution of 61.035 kHz, corresponding to ∼0.2 km s −1 . The HC 5 N (J = 36-35) line was included in the same narrowband, high spectral resolution window. The 3.1 mm continuum emission was observed using three wide bands, each covering a bandwidth of 1875.0 MHz. The HNC (1-0), HC 3 N (10-9), CH 3 CCH (6 0 -5 0 ), and H 2 CS (3 1,2 -2 1,1 ) lines were included in these wide bands and observed at a frequency resolution of 31.250 MHz, corresponding to ∼100 km s −1 . The 12 m-array observations were carried out on 23, 24, 26 January 2016 with 42, 46, and 37 antennas, respectively, and projected baseline lengths ranging from 11.7 to 326.8 kλ. The ACA observations were carried out between 16 March 2016 and 8 July 2016 with 7-9 antennas and projected baseline lengths ranging from 4.5 to 48.2 kλ. In the calibration process, we made additional flagging of a few antennas that had too low gain or showed large amplitude dispersion in time. The bandpass calibration was achieved by observing the quasar J1617-5848 with the 12 m array and the three quasars J1924-2914, J1517-2422, and J1427-4206 with the ACA array. The complex gain calibration was carried out using the quasar J1713-3418 with the 12 m array and the quasar J1717-3342 with ACA. Absolute flux calibration was achieved by observing two solar system objects (Callisto and Titan) with the 12 m array and planets (Mars and Neptune) with ACA. Calibration and data reduction were performed with the Common Astronomy Software Application (CASA) package (version 4.5.3 for 12 m data calibration and imaging, version 4.6.0 for 7 m data calibration). For imaging, we adopted the Briggs weighting scheme with a robust parameter of 0.5, as a good compromise between maximizing angular resolution and sensitivity to extended structures. The resulting synthesized beam sizes (∼3 , corresponding to 0.025 pc) and rms noise levels for each line and the continuum are summarized in Table 1.
With a minimum projected baseline length of 4.5 kλ, our ALMA+ACA observations are estimated to be sensitive to angular scales up to ∼37″ (corresponding to ∼0.3 pc) at the 10% level (Wilner & Welch 1994). For comparison, we expect the transverse size of any filament sub-structures to be smaller than the ∼0.15 pc inner width of the NGC 6334 filament. Likewise, with a minimum projected baseline of 11.7 kλ, our 12 m-only observations are sensitive to angular scales up to ∼14″ (corresponding to ∼0.1 pc) at the 10% level. For comparison, the scales of individual dense cores (typically ∼0.02-0.1 pc; cf. Könyves et al. 2015) fall within this range.
Results and analysis
In this section, we show the results of our ALMA 3.1 mm continuum and N 2 H + (1-0) line observations, and then extract compact 3.1 mm continuum sources and fiber-like velocity-coherent structures from the ALMA data.
3.1 mm continuum emission
Our ALMA 3.1 mm continuum map of the NGC 6334 filament region is shown in panel a of Fig. 1 (color scale and contours). It is also overlaid as contours on a Spitzer 8 µm emission map in panel d of Fig. 1. Hereafter, we call the map resulting from the combination of APEX/ArTéMiS 350 µm and Herschel/SPIRE 350 µm (Russeil et al. 2013) data the ArTéMiS 350 µm map. The counterpart of the dense filament seen in the ArTéMiS 350 µm map can be recognized in the northern part of the ALMA 3.1 mm continuum map. In the southern part of the field, the filament is not clearly detected in the ALMA 3.1 mm continuum map, but a shell-like structure can be seen. Conversely, the shell-like structure is not seen in the ArTéMiS 350 µm map. A compact HII region associated with a 4.9 GHz radio continuum source labeled source D in Rodriguez et al. (1982) lies close to the contours of this shell-like structure. The shell-like structure in the ALMA 3.1 mm continuum map also coincides with bright, extended mid-infrared emission detected at 8 µm with Spitzer as shown in Fig. 1d. We conclude that the 3.1 mm continuum emission in the shell-like structure is most likely dominated by free-free emission from the compact HII region around the luminous embedded star associated with source D.
Molecular lines
3.2.1. Spatial distribution of the N 2 H + (1-0) emission
Figure 1b shows the integrated intensity map of the N 2 H + (1-0) line emission around the NGC 6334 main filament. It can be seen that the prominent dusty filament in the ArTéMiS 350 µm map (Fig. 1c) is very well traced by the ALMA N 2 H + (1-0) data. In addition, a few N 2 H + blobs can be recognized outside the main filament.
Spatial distributions of other molecular line tracers
The ALMA maps obtained in all other molecular line tracers, except HNC(1-0), also show the NGC 6334 filament. The map obtained in HNC(1-0) differs from the other maps in that it shows a rather scattered distribution of discrete blobs. Due to a lower effective excitation density (cf. Shirley 2015), the emission from the HNC(1-0) transition may be more extended than the emission in the other dense gas tracers observed here and may be resolved out due to interferometric filtering effects, even with ACA. The maps obtained in CH 3 CCH(6 0 -5 0 ), H 2 CS(3 1,2 -2 1,1 ), HC 3 N(10-9), and HC 5 N(36-35) emission appear to trace the same filamentary structure as detected in N 2 H + (1-0). The HC 5 N(36-35) line has a high upper state energy of 80.504 K, implying that some of the gas in the main filament has a high temperature and/or density. Indeed, André et al. (2016) estimated the average column density and average volume density of the entire NGC 6334 filament to be 1-2 × 10²³ cm⁻² and 2.2 × 10⁵ cm⁻³, respectively. This exceeds the (column) density values observed by Palmeirim et al. (2013) for the low-mass star-forming filament B211/B213 in Taurus (∼1.4 × 10²² cm⁻² and 4.5 × 10⁴ cm⁻³) by an order of magnitude.
Observed line profiles
The N 2 H + (1-0) and HC 5 N(36-35) line observations were obtained in narrow-band mode with a velocity resolution of 0.2 km s −1 , which allows us to investigate the velocity structure of the NGC 6334 filament.
Here, we discuss the velocity profiles obtained in N 2 H + (1-0) and HC 5 N(36-35). Figure 2 shows the N 2 H + (1-0) and HC 5 N(36-35) spectra averaged over the pixels where significant emission was detected above the 5σ level. Two peaks, at V_sys = −2.6 and V_sys = −1.0 km s⁻¹, can be recognized in the HC 5 N(36-35) spectrum. We set the velocity scale of the N 2 H + (1-0) spectrum using the rest frequency of the isolated component of the N 2 H + (v = 0, J = 1-0, F1 = 0-1, F = 1-2, 93.176265 GHz) hyperfine structure (HFS) multiplet as a reference. Hereafter, we call this isolated N 2 H + component "HFS 1" and the other components "HFS 2-7" (see Fig. 2). The peak velocity of HFS 1 (approximately −2.6 km s⁻¹) is consistent with the peak velocity of the HC 5 N(36-35) emission. In addition, the N 2 H + HFS 1 line profile exhibits an emission wing at highly blueshifted velocities (up to −12 km s⁻¹, see Fig. 2). This blueshifted emission does not appear to be associated with the NGC 6334 filament itself since it is mainly detected outside the main filamentary structure.
[Caption of Fig. 1: In panel a, the magenta filled circle indicates the position of a radio continuum compact HII region (source D in Rodriguez et al. 1982), and green open circles indicate the positions of the compact 3.1 mm continuum sources identified with getsources (see Sect. 3.3.1). In panel a, the solid white contour marks the footprint of the main filament, defined as the intersection of the area within ±30″ of the filament crest and the interior of the 5 Jy beam⁻¹ contour in the ArTéMiS 350 µm map. In panels a and d, contours of 3.1 mm continuum emission are overlaid at levels of 2, 4, 6, 8, 10, 15, 20, 25, 30, 35, and 40σ, where 1σ = 0.2 mJy/ALMA-beam. In panel b, the N 2 H + intensity was integrated over the velocity range from −11.4 km s⁻¹ to +16.8 km s⁻¹, including all hyperfine components. In panel c, contours of 350 µm dust continuum emission are shown at levels of 10, 12, 14, 16, 18, and 20 Jy/8″-beam, and dashed ellipses indicate the positions of the ArTéMiS dense clumps identified with GAUSSCLUMPS (see Sect. 4.1). In all panels, the crest of the NGC 6334 main filament, as traced with the DisPerSE algorithm in the ArTéMiS 350 µm map, is shown as a solid curve.]
3.3. Extraction of compact continuum sources and "fiber-like" velocity structure from the data
To extract compact continuum sources from the ALMA/ACA 3.1 mm continuum map and fiber-like velocity-coherent features from the ALMA/ACA N 2 H + (1-0) data cube, we made use of the getsources, getfilaments, and getimages algorithms (Men'shchikov et al. 2012; Men'shchikov 2013, 2017, and in prep.).
3.3.1. Compact source extraction from the ALMA 3.1 mm continuum data with getsources
To identify compact sources in the ALMA 3.1 mm continuum map, we applied the getsources algorithm (e.g., Men'shchikov et al. 2012). getsources is a multi-scale source extraction algorithm primarily developed for the exploitation of multi-wavelength far-infrared and submillimeter continuum data resulting from Herschel surveys of Galactic star-forming regions (see Könyves et al. 2015), but it can also be used with single-band continuum data and spectral line data. Source extraction with this algorithm has only one free parameter, namely the maximum size of the structures to be extracted from the images. Here, we adopted a maximum size of 15″ (or ∼0.12 pc), which is comparable to the transverse full width at half maximum (FWHM) size of the filament as measured with ArTéMiS. After running getsources on the 12 m-only data and applying post-extraction selection criteria (see Appendix C), we identified a total of 40 candidate compact 3.1 mm continuum sources. As shown in Fig. 1a, 28 of these 40 sources are located within the main filament, defined as the intersection of the area within ±30″ (or ±0.25 pc) of the filament crest and the area enclosed within the 5 Jy beam⁻¹ contour in the ArTéMiS 350 µm map (see also Fig. 3 and Table 2). Two of these 28 sources (IDs 15, 18 in Table 2) are probably affected by contamination from free-free emission, as mentioned in Sect. 3.1. The positions and sizes of all 40 compact continuum sources are summarized in Table 2, along with their basic properties.
To investigate whether the compact continuum sources identified above are associated with N 2 H + emission, we also applied getsources to each N 2 H + velocity channel map (see Appendix C for details). We found that 23 of the 40 compact continuum sources are associated with N 2 H + emission in at least two consecutive channels. In addition, five of the 40 compact sources (IDs 3, 7, 13, 33, 35 in Table 2) are associated with N 2 H + (1-0) line emission in only one channel, but these single-channel associations did not pass our post-extraction selection criteria (see Appendix C). Although contamination of the 3.1 mm continuum data by free-free emission is an issue, compact sources robustly detected in both 3.1 mm continuum and N 2 H + emission are unlikely to be affected.
We estimated the mass (M_tot) of each compact continuum source, under the assumption that all of the 3.1 mm continuum emission arises from dust and that the emission is optically thin.
The mass was obtained from the integrated 3.1 mm flux density S_tot(3.1 mm), derived from two-dimensional Gaussian fitting in the 12 m+7 m continuum image (we used the 12 m+7 m data to estimate the integrated flux densities and masses of the sources detected in the 12 m-only continuum map, in order to avoid missing-flux problems due to interferometric filtering of large scales as much as possible; for four weak unresolved sources, undetected above the 3σ level in the 12 m+7 m map, the 12 m-only data were used instead, cf. Table 2), using the formula M_tot = S_tot(3.1 mm) d² / [κ_3.1mm B_3.1mm(T_d)], where d, κ_3.1mm, and B_3.1mm(T_d) are the distance to the target, the dust opacity (per unit mass of gas + dust), and the Planck function at dust temperature T_d, respectively. We adopted the same dust opacity law as the Herschel Gould Belt survey team, namely κ_λ = 0.1 (λ/300 µm)^(−β) cm² g⁻¹ (cf. Hildebrand 1983; Roy et al. 2014) with β = 2 and here λ = 3.1 mm. If β = 1.5 instead of β = 2, the core masses would be a factor of ∼3 lower than the values listed in Table 2. For most sources, we adopted T_dust = 20 K, which corresponds to the median dust temperature derived from Herschel data along the crest of the filament (Tigé et al. 2017). For source ID 1, we adopted a higher temperature value (T_dust = 50 K), as this object coincides with a bright Spitzer 8 µm source and is most likely an internally-heated, relatively massive protostellar core. The 5σ mass sensitivity of the ALMA 12 m continuum data for compact sources is ∼2.0 M⊙, corresponding to S_tot(3.1 mm) = 0.7 mJy, assuming β = 2 and T_dust = 20 K. The average gas density (≡ n_H2) of each source was then derived from M_tot assuming spheroidal cores with deconvolved FWHM sizes R_major and R_minor along the major and minor axes of the source (cf. Könyves et al. 2015). The median mass of the 26 compact continuum sources embedded in the main filament is 9.6 (+3.0/−1.9) M⊙ (lower quartile: 7.7 M⊙, upper quartile: 12.6 M⊙) and their median volume-averaged density 1.6 × 10⁷ cm⁻³ (lower quartile: 1.0 × 10⁷ cm⁻³, upper quartile: 2.2 × 10⁷ cm⁻³). (For all 40 compact continuum sources, the median mass is 9.4 (+3.4/−1.9) M⊙, with lower quartile 7.5 M⊙ and upper quartile 12.8 M⊙, and the median volume-averaged density is 1.4 × 10⁷ cm⁻³, with lower quartile 0.8 × 10⁷ cm⁻³ and upper quartile 2.0 × 10⁷ cm⁻³.) The 26 continuum sources embedded in the main filament are compact (with an estimated typical outer radius of ∼5000 au) and have a very high volume-averaged density (∼10⁷ cm⁻³), suggesting that they are on their way to form stars. Moreover, their spatial distribution closely follows that of the ArTéMiS 350 µm emission clumps (see Fig. 3 and Sect. 4.1). In the following, we therefore regard these 26 compact 3.1 mm continuum sources as dense cores.
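A minimal numerical sketch of this flux-to-mass conversion is given below (plain Python, standard physical constants; the helper names are illustrative). With the β = 2 opacity law, T_dust = 20 K, and d = 1.7 kpc quoted above, an integrated flux density of 0.7 mJy indeed yields ≈ 2 M⊙, matching the stated 5σ mass sensitivity.

```python
import math

# Physical constants (SI)
h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8
M_SUN, PC = 1.989e30, 3.086e16

def planck_nu(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / math.expm1(h * nu / (k_B * T))

def core_mass(S_mJy, d_pc=1700.0, lam_mm=3.1, T_dust=20.0, beta=2.0):
    """Gas+dust mass from an integrated mm flux density, assuming optically
    thin emission and the kappa = 0.1 (lambda/300 um)^-beta cm^2/g opacity law."""
    nu = c / (lam_mm * 1e-3)                        # observing frequency [Hz]
    kappa = 0.1 * (lam_mm * 1e3 / 300.0)**(-beta)   # opacity [cm^2/g]
    kappa_SI = kappa * 0.1                          # -> m^2/kg
    S_SI = S_mJy * 1e-3 * 1e-26                     # mJy -> W m^-2 Hz^-1
    d = d_pc * PC
    return S_SI * d**2 / (kappa_SI * planck_nu(nu, T_dust)) / M_SUN

print(core_mass(0.7))   # ~2 Msun, the quoted 5-sigma sensitivity
```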
Extraction of fiber-like structures from the N 2 H + data cube with getfilaments
Removal of unrelated velocity components. In the NGC 6334 region, we found evidence of the presence of several velocity components along the line of sight (see Sect. 3.2.3). To avoid contamination of the N 2 H + (1-0) HFS 1 line emission from the main velocity component at V_sys = −2.6 km s⁻¹ by the HFS 2-7 emission from other velocity components, we removed these unrelated velocity components from the data cube before the extraction described below. Extraction of velocity-coherent fiber-like features with getfilaments. We extracted filamentary structures from the ALMA N 2 H + data cube, after subtraction of compact N 2 H + emission, using the getfilaments algorithm (e.g., Men'shchikov 2013). Each N 2 H + velocity channel map was decomposed into a set of spatially-filtered single-scale images from small (3″) to large (16″) scales. At each scale, the algorithm separated filamentary structures and compact sources, and extracted significant filamentary structures in the source-subtracted image component. All extracted filamentary structures were merged into a single image per velocity channel map. We selected only filamentary structures for which >90% of the pixels along the crest were detected above the 3σ level in each channel. As the velocity resolution of our ALMA N 2 H + data (0.2 km s⁻¹, that is, less than the isothermal sound speed in the cloud) should be sufficient to resolve the velocity width of the filament, we also imposed that a selected filamentary structure be detected in at least two consecutive velocity channels. By comparing velocity-channel maps, we then associated filamentary structures with matching spatial distributions among channels (see Appendix D). Our procedure is similar to that used by Hacar et al. (2013) to trace velocity-coherent structures in data cubes, in the sense that both methods work in position-position-velocity (PPV) space. In this way, we identified five distinct fiber-like structures, labeled F-1 to F-5, whose crests are displayed in Figs. 4 and 5. Examples of individual N 2 H + (HFS 1) spectra across the two main fiber-like structures F-1 and F-2 are shown in Fig. 6. The spatial distribution of the structure labeled F-2 partly overlaps with the distribution of F-1 at −2.4 km s⁻¹, but the northern portion of F-2 lies slightly to the west of F-1, while the southern part of F-2 lies to the east of F-1 (see, e.g., Fig. 4b). Moreover, at positions where F-1 and F-2 overlap in the plane of the sky, the N 2 H + spectra clearly exhibit distinct velocity components (see, e.g., positions (c) and (d) in Fig. 6). Therefore, F-1 and F-2 seem to be distinct velocity-coherent features. The five fiber-like structures are likely associated with the dust continuum filament seen in the ArTéMiS 350 µm map. Similar sub-filamentary structures have been found in the low-mass star-forming filament B211/B213 in Taurus (Hacar et al. 2013) and have been called fibers in the literature (cf. Tafalla & Hacar 2015).
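The channel-by-channel association step described above (keeping only structures detected in at least two consecutive velocity channels and linking those whose crests overlap spatially) can be illustrated with a generic sketch. This is not the getfilaments code itself, just a toy linking of per-channel crest pixel sets in PPV space; all names and the overlap threshold are hypothetical.

```python
def link_channels(crests_per_channel, min_overlap=5, min_channels=2):
    """Group per-channel filament crests into velocity-coherent features.

    crests_per_channel : one list per velocity channel, each containing the
                         crest pixel sets {(x, y), ...} of the structures
                         detected in that channel.
    A structure in channel i extends a feature from channel i-1 if their crest
    pixels overlap by at least `min_overlap` pixels (roughly one beam).
    Only chains spanning at least `min_channels` consecutive channels are kept.
    """
    features = []          # each feature is a list of (channel, crest_pixels)
    active = []            # features that were extended in the previous channel
    for ch, crests in enumerate(crests_per_channel):
        new_active, used = [], set()
        for crest in crests:
            match = None
            for idx, feat in enumerate(active):
                if idx not in used and len(feat[-1][1] & crest) >= min_overlap:
                    match, used_idx = feat, idx
                    break
            if match is None:
                match = []
                features.append(match)
            else:
                used.add(used_idx)
            match.append((ch, crest))
            new_active.append(match)
        active = new_active
    return [f for f in features if len(f) >= min_channels]
```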
At this point, we refer to the five features F-1 to F-5 as fiber-like structures. (In Sect. 4.3 below, we argue that only two of them, F-1 and F-2, may be genuine fibers, that is, velocity-coherent sub-structures of the main filament itself.) As only a small number (≤5) of velocity-coherent features appeared to be present in our N 2 H + data cube (see Table 3), and we had no experience with the FIVE method developed by Hacar et al. (2013), we preferred to follow a procedure based on channel-by-channel getfilaments extractions and careful visual inspection of the data cube (see Appendix D).
Notes to Table 2: (1) ID numbers of 3.1 mm compact sources associated with N 2 H + emission in two or more contiguous channels are labeled in bold. (2) Results of the 2D Gaussian fitting on the 12 m+7 m map; for IDs 21, 29, 32, 35, the 12 m map was used since the 3.1 mm continuum emission is not detected above the 3σ level in the 12 m+7 m map. (3) Deconvolved FWHM source diameters along the major and minor axes. (4) We adopted T_dust = 20 K for all sources, except for source ID 2, for which we assumed T_dust = 50 K (see Sect. 3.3.1). (5) Volume-averaged gas density derived assuming spheroidal cores (see Sect. 3.3.1). (6) Marginal N 2 H + detection, since it is detected in only one channel for this source. (7) Deconvolved source sizes are not obtained since the structure is not resolved; the peak flux was used to estimate the mass.
Their typical length is 0.5 ± 0.4 pc (see Table 3, which also gives their velocity ranges). A portion of the dusty filament seen in the ArTéMiS 350 µm continuum map was not traced by getfilaments in the N 2 H + data, especially in the southern part of the field in Fig. 1 although the filament can be recognized by eye in the N 2 H + (1-0) integrated intensity map (Fig. 1b). The reason why this southern portion was not traced in the N 2 H + velocity channel maps with getfilaments is that the N 2 H + emission becomes very clumpy in this area and was subtracted out as a collection of point-like sources by getsources before identification of filamentary structures.
F-1 is the longest and F-2 the second longest of the five extracted velocity features. Both of these fiber-like structures are roughly parallel to the dust continuum filament seen in the ArTéMiS 350 µm map. In contrast, the crest of F-5 is roughly perpendicular to the dust filament. In the northern part of the N 2 H + map (north of −35°47′10″ in Figs. 4 and 7a), the spatial distributions of F-1, F-2, and F-4 partly overlap in the plane of the sky. The three features F-1, F-2, F-4 are nevertheless separated from each other in velocity space (see Table 3). The crests of F-1, F-2, F-4 are also slightly shifted from one another, by typically one ALMA beam (∼3″, see Fig. 7a). In the northern part, they are distributed in the sequence F-1, F-2, F-4 from east to west (see Fig. 7a). In contrast, the projected LSR velocities are in the order F-4 (∼−3.3 km s⁻¹), F-1 (∼−2.8 km s⁻¹), F-2 (∼−1.7 km s⁻¹). Assuming F-1, F-2, F-4 are part of the same filament, this different ordering between the spatial and velocity distributions cannot be explained by a simple transverse velocity gradient across the parent filament due to either accretion onto and/or rotation of the filament. In the southern part of the field (south of −35°47′10″ in Figs. 4 and 7b), F-4 is not detected and F-1, F-2 are distributed in the order F-2, F-1 from east to west (see Fig. 7b). These opposite spatial configurations for F-1 and F-2 between the northern and the southern parts of the dust filament are suggestive of an intertwined, double helix-like (DNA-like) pattern (see Fig. 3b).
[Figure caption fragment: In panel b, dark blue squares mark 3.1 mm compact sources associated with N 2 H + emission and embedded in F-1, light blue squares 3.1 mm sources associated with N 2 H + emission and embedded in F-2, yellow squares 3.1 mm sources embedded in F-1, F-2, or F-3 and marginally detected in only one N 2 H + channel, black squares 3.1 mm sources associated with N 2 H + emission in the main filament but not embedded in any N 2 H + velocity-coherent structure, and white squares 3.1 mm sources in the main filament not associated with any N 2 H + emission. The red triangles indicate 3.1 mm compact sources possibly contaminated by free-free emission.]
We also extracted filamentary structures from the ALMA HC 3 N, HC 5 N, CH 3 CCH, and H 2 CS integrated intensity maps using getfilaments. We detected one filamentary structure each in HC 3 N, HC 5 N, and CH 3 CCH, and two filamentary structures in H 2 CS. These filamentary structures roughly correspond to the F-1 and/or F-2 "fiber-like" features identified in N 2 H +. Due to the lower effective sensitivity of the H 2 CS(3 1,2 -2 1,1 ) data, the NGC 6334 filament is broken up into two segments in H 2 CS; the H 2 CS filamentary structures coincide with the northern and southern parts of the filamentary structures traced in other lines. [Figure caption fragment: The red filled circles in the central panel mark the positions of the four spectra; colored horizontal segments indicate the velocity ranges of the various fiber-like structures (see Table 3).] Owing to the lower line intensities and/or lower spectral resolution of our data in HC 3 N, HC 5 N, CH 3 CCH, and H 2 CS, we could not carry out as detailed a PPV analysis in these lines as in N 2 H +.
Discussion
Our ALMA study has revealed, for the first time, the presence of density and velocity sub-structure in the NGC 6334 filament. We discuss the two types of sub-structure in turn in the following.
Bimodal fragmentation in the NGC 6334 filament
While it was not clear whether dense cores were embedded in the NGC 6334 filament based only on the ArTéMiS 350 µm data, our ALMA 3.1 mm continuum map, with an angular resolution of ∼2.3″, has allowed us to identify 26 candidate dense cores within the filament (cf. Sect. 3.3.1). These ALMA cores appear to be clustered in several groups closely associated with ArTéMiS clumps (see Fig. 3). Figure 8 shows a dendrogram tree obtained when applying a nearest neighbor separation (NNS) analysis to the population of ALMA dense cores (with the scipy function cluster.hierarchy.linkage and the 'single' linkage method). Here, we adopted two values of the NNS grouping threshold, 0.1 and 0.15 pc, consistent with both the typical inner width of the filament and the typical FWHM size of the ArTéMiS clumps (see Table 4). Based on the NNS analysis with a grouping threshold of 0.1 pc, the 26 dense cores can be divided into seven groups of cores (groups 1-7). Using a grouping threshold of 0.15 pc instead of 0.1 pc, groups 2 and 3 join group 1, and groups 4, 5, 6, 7 merge into the same group (see Fig. 8).
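The NNS grouping described here can be reproduced with scipy's hierarchical clustering tools. The sketch below uses made-up core coordinates (in pc) rather than the actual Table 2 positions, and simply cuts the single-linkage tree at the 0.1 pc threshold quoted in the text.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical projected core positions (x, y) in pc; not the real Table 2 values.
cores = np.array([
    [0.00, 0.01], [0.03, 0.02], [0.05, 0.00],   # a first tight group
    [0.45, 0.02], [0.48, 0.01],                 # a second group ~0.45 pc away
    [0.90, 0.03],                               # an isolated core
])

# Single-linkage hierarchical clustering on the pairwise separations,
# then cut the tree at a 0.1 pc nearest-neighbour grouping threshold.
Z = linkage(cores, method="single", metric="euclidean")
groups = fcluster(Z, t=0.1, criterion="distance")
print(groups)   # e.g. [1 1 1 2 2 3]: two multi-core groups and one isolated core
```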
Remarkably, the seven groups of ALMA compact cores roughly correspond to dense clumps visible as closed contours in the ArTéMiS 350 µm map of the NGC 6334 filament at 8″ resolution (see Figs. 1c and E.1). For the purpose of this paper, we used GAUSSCLUMPS (Stutzki & Guesten 1990) to characterize the properties of the ArTéMiS clumps. We applied GAUSSCLUMPS with a detection threshold of 1 Jy/8″-beam, corresponding to 5σ (where 1σ = 0.2 Jy/8″-beam). A total of seven clumps were identified in this way within the footprint of the main filament, defined as the intersection of the area within ±30″ (or ±0.25 pc) of the filament crest and the interior of the 5 Jy/8″-beam ArTéMiS contour (see the solid white contour in Fig. 8); their positions, sizes, and estimated masses are given in Table 4. These seven ArTéMiS clumps are also identified with the getsources (Men'shchikov et al. 2012; Men'shchikov 2013) and REINHOLD (Berry et al. 2007) source extraction algorithms, as described in Appendix F. They correspond to six groups of ALMA cores (groups 1, 2, 3, 5, 6, 7). More precisely, if we consider a group of ALMA dense cores to be associated with an ArTéMiS clump when all of the cores in the group lie within the closed contours of the clump (Table 4), then we find that each ArTéMiS clump consists of at least two ALMA cores (Fig. 3a).
[Caption of Fig. 8: In each panel, the solid white contour marks the footprint of the main filament, defined as the intersection of the area within ±30″ of the filament crest and the interior of the 5 Jy beam⁻¹ contour in the ArTéMiS 350 µm map, within which randomly-placed sources were distributed. The dashed white contour marks the area within ±6″ (or 0.05 pc) of the filament crest, defining the inner filament footprint also discussed in the text. Bottom row: NNS analysis results for the observed cores (panel b) and the randomly-placed sources (panels d and f). Colored circles mark core positions in panels a, c, e, with colors corresponding to the groups defined by the NNS dendrograms shown in panels b, d, f.]
To test whether the observed clustering of ALMA cores and close association with ArTéMiS clumps may also be present in the case of randomly distributed cores, we inserted 26 sources at random positions within the footprint of the main filament (see the solid white contour in Fig. 8) using the Python module random and then applied the same NNS analysis. A total of 100 realizations of such random source distributions were constructed. Two examples of resulting NNS dendrogram trees are displayed in Fig. 8, where they are compared to the NNS tree obtained for the real ALMA cores. It can be seen that randomly distributed sources tend to be less clustered than the observed cores, that is, a higher number of randomly-placed sources are isolated (not grouped by the NNS analysis) compared to the observations. In contrast to the observed cores, there is also no clear association between the groups of randomly-placed sources and the ArTéMiS clumps. The clustering of the observed ALMA cores within the ArTéMiS clumps is highly significant, at the >5σ level according to binomial statistics: 16 out of 26 cores (>60%) lie within the FWHM ellipses of the clumps, while only 3 ± 2 (∼13%) of 26 randomly-placed objects would be expected within the clumps. Furthermore, the mean separation between observed cores (0.04 ± 0.01 pc) is smaller than the mean separation between randomly-placed sources found over 100 realizations (0.06 ± 0.01 pc). It is also apparent in the top row of Fig. 8 that the observed cores lie significantly closer to the filament crest than randomly-placed objects within the filament footprint (solid white contour). Quantitatively, the distribution of offsets between the observed cores and the filament crest has a median value of only 0.02 pc (lower quartile: 0.01 pc, upper quartile: 0.03 pc), while the median offset between randomly-placed sources and the crest is 0.07 pc (lower quartile: 0.03 pc, upper quartile: 0.13 pc). In other words, the spatial distribution of observed cores is almost one-dimensional (1D) along the filament crest. To further test whether significant 1D grouping of the cores exists along the crest, we repeated the same experiment with randomly-placed objects as described above but using a narrower footprint, defined as the area within ±6″ (or 0.05 pc) of the filament crest (dashed white contour in Fig. 8). The transverse width of this inner filament footprint is ∼0.1 pc, which is comparable to the dispersion of observed core positions about the crest. In this case, the mean separation between randomly-placed sources becomes identical to the mean separation between observed cores (0.04 ± 0.01 pc). The observed ALMA cores nevertheless remain more closely associated with the ArTéMiS clumps than randomly-distributed objects along the filament crest: only 8 ± 3 of 26 randomly-placed sources would be expected within the clumps, while 16 are observed, a difference which is significant at the ∼3σ level according to binomial statistics. We conclude that, even in 1D, there is marginal evidence of non-random core grouping along the filament axis.
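The binomial significance quoted above can be checked directly: under the null hypothesis that cores fall at random in the filament footprint, the probability of a single core landing inside a clump FWHM ellipse is roughly 13%, so observing 16 or more of 26 cores inside clumps is extremely unlikely by chance. A quick check with scipy, using only the numbers given in the text:

```python
from scipy.stats import binomtest, norm

# 16 of 26 cores fall inside clump FWHM ellipses; random placement would give ~13%.
res = binomtest(k=16, n=26, p=0.13, alternative="greater")
print(res.pvalue)            # ~1e-8: far beyond a chance coincidence
print(norm.isf(res.pvalue))  # equivalent one-sided Gaussian significance, >5 sigma
```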
The projected nearest-neighbor separation between the ArTéMiS clumps ranges from 0.2 to 0.3 pc, and the typical projected separation between ALMA dense cores embedded within a given ArTéMiS clump is 0.04 ± 0.01 pc. These observational findings are interesting to compare with theoretical expectations. When the line mass of a cylindrical gas filament is close to the critical mass per unit length (thermal case) or the virial mass per unit length (nonthermal case, more appropriate here; see end of Sect. 1), the filament is expected to fragment into overdensities with a typical separation of about four times the filament width, according to self-similar solutions which describe the collapse of an isothermal filament under the effect of self-gravity (e.g. Inutsuka & Miyama 1992, 1997). With a typical inner width of 0.15 ± 0.05 pc from the ArTéMiS results, the NGC 6334 filament can therefore be expected to fragment with a characteristic separation of ∼0.6 ± 0.2 pc. This separation scale is roughly consistent with the projected separation of 0.2-0.3 pc observed between ArTéMiS clumps, assuming a plausible inclination angle of ∼30° between the filament axis and the line of sight. On the other hand, the effective Jeans length λ_J,eff or Bonnor-Ebert diameter D_BE,eff within the filament and its clumps may be estimated as (cf. Bonnor 1956) λ_J,eff ≈ D_BE,eff ≈ c_s,eff (π / (G ρ_clump))^(1/2), with an effective sound speed c_s,eff² = σ²_N2H+ + k_B T_k (1/m − 1/m_N2H+), where G, ρ_clump, σ_N2H+, T_k, m, and m_N2H+ are the gravitational constant, the density of each ArTéMiS clump (see Table 4), the velocity dispersion measured in N 2 H +, the gas kinetic temperature, the mean molecular mass, and the mass of the N 2 H + molecule, respectively. Assuming a gas kinetic temperature of 20 K, which corresponds to the median dust temperature derived from Herschel data along the crest of the filament (Tigé et al. 2017), and using the velocity dispersion measured in N 2 H + for each ArTéMiS clump (σ_N2H+ = δV_FWHM / √(8 ln 2), where δV_FWHM is the FWHM linewidth), the effective Jeans length in the clumps of the filament is estimated to range from ∼0.04 to ∼0.3 pc (median: 0.08 pc). This is roughly consistent with the typical projected separation between ALMA dense cores (0.04 ± 0.01 pc), assuming random projection effects within the ArTéMiS clumps. These two characteristic separation scales are suggestive of two distinct fragmentation modes within the NGC 6334 filament: (i) a cylindrical fragmentation mode into clumps or groups of cores with a separation of ∼4 times the filament width, and (ii) a spherical Jeans-like fragmentation mode into compact cores with a separation on the order of the effective Jeans length. Similar bimodal fragmentation patterns were first reported by Takahashi et al. (2013) in Orion OMC3 and Kainulainen et al. (2013) in the infrared dark cloud (IRDC) G11.11-0.12 (see also Teixeira et al. 2016 and Kainulainen et al. 2017). Recent theoretical works on filament fragmentation have tried to account for these two fragmentation modes based on perturbations to standard cylinder fragmentation models (cf. Clarke et al. 2016; Gritschneder et al. 2017; Lee et al. 2017).
[Caption of Fig. 9: Mass distribution of the 26 ALMA cores identified in the NGC 6334 main filament (magenta) compared to a scaled version of the prestellar CMF observed in the Aquila cloud (blue, from Könyves et al. 2015). In both NGC 6334 and Aquila, cores were extracted from the data using the same algorithm, getsources. For easier comparison, the Aquila CMF was re-normalized to have a peak value comparable to that of the NGC 6334 CMF. All error bars correspond to √N counting statistics. The magenta vertical dashed line indicates the estimated 90% completeness level (2.6 M⊙) of our ALMA 3.2 mm census of dense cores in the NGC 6334 filament. The blue vertical dashed line marks the 90% completeness level of the Aquila CMF (0.2 M⊙). The blue shaded area reflects uncertainties associated with the uncertain classification of starless cores as bound or unbound objects.]
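As a rough numerical illustration of the effective Jeans length estimate discussed above, the sketch below evaluates the reconstructed expression λ_J,eff ≈ c_s,eff √(π/(G ρ)); the clump density and N2H+ dispersion used are hypothetical placeholder values, not the Table 4 entries.

```python
import math

G, k_B, m_H = 6.674e-11, 1.381e-23, 1.673e-27
M_N2H = 29.0 * 1.661e-27       # mass of the N2H+ molecule [kg]
MU    = 2.33                   # mean molecular weight per free particle
PC    = 3.086e16

def jeans_length_pc(n_H2_cm3, T_k, sigma_n2h_kms):
    """Effective Jeans length (pc) for a clump of H2 number density n_H2,
    kinetic temperature T_k and observed N2H+ velocity dispersion."""
    rho = MU * m_H * n_H2_cm3 * 1e6                    # mass density [kg m^-3]
    # Effective sound speed: observed dispersion corrected from the N2H+
    # thermal width to that of the mean gas particle.
    c_eff2 = (sigma_n2h_kms * 1e3)**2 + k_B * T_k * (1.0 / (MU * m_H) - 1.0 / M_N2H)
    return math.sqrt(c_eff2) * math.sqrt(math.pi / (G * rho)) / PC

# Placeholder clump values (illustrative only): gives ~0.04 pc, the low end
# of the 0.04-0.3 pc range quoted in the text.
print(jeans_length_pc(n_H2_cm3=1e6, T_k=20.0, sigma_n2h_kms=0.3))
```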
Unusually massive cores in the NGC 6334 filament
Remarkably, the median mass of the ALMA cores (9.6 (+3.0/−1.9) M⊙) is an order of magnitude higher than the peak of the prestellar core mass function (CMF) at ∼0.6 M⊙ as measured with Herschel in nearby clouds (e.g. Könyves et al. 2015). This is illustrated in Fig. 9, which compares a rough estimate of the CMF in the NGC 6334 filament based on the present ALMA study to the CMF derived from Herschel data in the Aquila cloud. Although the two data sets differ somewhat in nature and wavelength (e.g., ALMA 3.1 mm vs. Herschel 160-500 µm), cores were extracted from dust continuum maps using the same algorithm (getsources) in both cases. Moreover, the ALMA 3.1 mm continuum data used here are sensitive to the typical size scales of dense cores identified with Herschel in Gould Belt clouds (∼0.02 to ∼0.1 pc; see Sect. 2 and Fig. 7 of Könyves et al. 2015). The two CMFs shown in Fig. 9 are thus directly comparable. The difference in typical core mass between NGC 6334 and Aquila largely exceeds possible uncertainties in the dust emissivity at 3.1 mm. For instance, if the dust emissivity index β = 1.5 (instead of β = 2 as assumed in Table 2 and Fig. 9), the median core mass in NGC 6334 would still be a factor of ∼5 higher than the CMF peak in Aquila. The median mass of observed ALMA cores essentially corresponds to the peak of the NGC 6334 CMF, and lies more than a factor of three above the estimated 90% completeness level (2.6 M⊙) of the present core census. While the NGC 6334 CMF shown in Fig. 9 is admittedly quite uncertain due to, for instance, low-number statistics and uncertainties in the 3.1 mm dust emissivity, we stress that it represents one of the first estimates of the CMF generated by a single, massive filament (see Takahashi et al. 2013; Zhang et al. 2015; Ohashi et al. 2016 for examples of CMFs in somewhat less massive filaments or IRDCs with M_line < 500 M⊙ pc⁻¹).
The median core mass derived from the ALMA data is roughly consistent with the effective critical Bonnor-Ebert mass M_BE,eff in the clumps of the filament, which can be expressed as (Bonnor 1956):

M_BE,eff ≈ 1.18 σ_tot³ / (G^{3/2} ρ_clump^{1/2}).

Our results also suggest that more massive cores may form in denser/more massive filaments and are consistent with a picture in which the global prestellar CMF, and possibly the stellar initial mass function (IMF) itself, originate from the superposition of the CMFs generated by individual filaments with a whole spectrum of masses per unit length.
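For orientation, the following Python sketch evaluates this critical Bonnor-Ebert mass; the clump velocity dispersion and density used below are made-up illustrative values, not the paper's Table 4 entries, and the snippet only demonstrates the order of magnitude.

    G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs constants
    Msun = 1.989e33                                 # solar mass [g]

    def bonnor_ebert_mass(sigma_tot_kms, n_H2_cm3):
        """Critical Bonnor-Ebert mass ~ 1.18 sigma^3 / (G^1.5 rho^0.5), in Msun."""
        sigma = sigma_tot_kms * 1e5                 # km/s -> cm/s
        rho = n_H2_cm3 * 2.8 * m_H                  # g/cm^3 (mu_H2 = 2.8)
        return 1.18 * sigma**3 / (G**1.5 * rho**0.5) / Msun

    # Hypothetical clump: sigma_tot ~ 0.55 km/s, n_H2 ~ 1e5 cm^-3 -> ~8 Msun
    print(f"M_BE,eff ~ {bonnor_ebert_mass(0.55, 1e5):.1f} Msun")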
Likelihood of the NGC 6334 filament being a system of two velocity-coherent fibers
As described in Sect. 3.3, our ALMA observations show that the ∼0.15-pc-wide filament detected in the ArTéMiS 350 µm dust continuum map is sub-structured into five fiber-like N2H+(1-0) components with different velocities. The typical length of these fiber-like sub-structures is 0.5 ± 0.4 pc and the typical velocity difference between them is 0.8 km s⁻¹ (standard deviation of the five N2H+ velocity components). The N2H+ velocity-coherent sub-structures may be broadly categorized into two groups. The first group consists of the N2H+ fiber-like sub-structures F-1 and F-2, which are also detected in other molecular line tracers of dense gas such as CH3CCH, H2CS, HC3N, and HC5N, and which harbor compact dense cores detected in 3.1 mm continuum emission. The second group (F-3, F-4, and F-5) is made up of velocity-coherent sub-structures detected only in N2H+ which do not seem to contain dense cores. There are at least two possible explanations for these two groups. First, the physical excitation conditions in the two groups of sub-structures may differ. Second, the low velocity resolution of the current CH3CCH, H2CS, and HC3N line data (see Table 1) may prevent the identification of fiber-like features in these tracers. To test the former explanation, observations in several transitions of each species would be required to estimate the excitation temperature, column density, and chemical abundance of the various molecules. To investigate the latter effect, ALMA observations of the same tracers at higher spectral resolution would be required. The presence of velocity-coherent fiber-like sub-structures in molecular filaments was first reported by Hacar et al. (2013) in the case of the low-mass star-forming B211/B213 filament in Taurus. In this filament, Hacar et al. (2013) used their friends-in-velocity (FIVE) algorithm to identify at least 20 velocity-coherent components in N2H+ and C18O (see Table 3 in Hacar et al. 2013), which were subsequently called fibers. Since then, similar velocity-coherent components have also been detected in N2H+ in the IRDC G035.39-00.33 (Henshaw et al. 2014), the NGC 1333 protocluster (Hacar et al. 2017), the IRDC G034.43+00.24 (Barnes et al. 2018), and the Orion A integral-shaped filament (Hacar et al. 2018). The fiber-like sub-structures identified in NGC 1333 and Orion A may, however, differ in nature from those observed in the Taurus B211/B213 filament and in the present target NGC 6334 (see also Clarke et al. 2017). The velocity-coherent sub-structures observed in the NGC 1333 and Orion A cases are indeed well separated in the plane of the sky, while those in the Taurus and NGC 6334 filaments overlap in the sky and can mostly be distinguished in PPV space.
The typical length (0.6 ± 0.5 pc) and velocity difference between components (0.7 km s −1 ) reported by Hacar et al. (2013) for the fiber-like sub-structures of the (low-mass) B211/B213 filament in Taurus are remarkably similar to the properties estimated here for the velocity-coherent sub-structures of the (high-mass) NGC 6334 filament. We further note that Hacar et al. (2013) divided up their B211/B213 fibers into two groups, fertile and sterile, depending on whether they contained dense cores or not. Most of the 35 velocity-coherent sub-structures identified in B211/B213 were sterile and detected mostly in C 18 O(1-0), while only 7 fiber-like sub-structures were fertile and also detected in N 2 H + (1-0). This is reminiscent of the situation found here for the NGC 6334 filament, where the two sub-structures detected in multiple dense gas tracers (F-1 and F-2) are the only two fertile fibers harboring ALMA dense cores. We argue below that these two categories of fiber-like structures differ in physical nature and possibly origin.
Possible origin(s) of the fiber-like sub-structures
Three scenarios for the formation of velocity-coherent fiberlike sub-structures have been proposed in the literature. One is the "fray and fragment" scenario proposed by Tafalla & Hacar (2015) and supported by Clarke et al. (2017). In this scenario, a main filament forms first by collision of two supersonic turbulent gas flows. Then, the main filament fragments into an intertwined system of velocity-coherent sub-structures, due to a combination of residual turbulent motions and self-gravity. In this picture, the sub-structures are formed by fragmentation of a single filament, and the velocity-coherent sub-structures are expected to be roughly aligned with the main filament (cf. Smith et al. 2014). The second scenario is the "fray and gather" scenario, in which turbulent compression first generates short, velocity-coherent filamentary structures within the parent cloud, which are then gathered by large-scale collapse of the cloud, as proposed by Smith et al. (2014). In a third, alternative picture, a dense starforming filament forms within a shell-like molecular gas layer as a result of large-scale anisotropic compression associated with, for instance, expanding bubble(s) or cloud-cloud collision (Chen & Ostriker 2014;Inutsuka et al. 2015;Inoue et al. 2018), and subsequently grows in mass by accreting ambient gas from the surrounding shell-like structure due to its own gravitational potential (cf. Palmeirim et al. 2013;Shimajiri et al. 2019). This accretion process supplies gravitational energy to the dense filament, which is then converted into turbulent kinetic energy in the form of MHD waves (Hennebelle & André 2013), explaining the increase in velocity dispersion with column density observed for thermally supercritical filaments (Arzoumanian et al. 2013). A quasi-stationary state is reached as a result of a dynamical equilibrium between accretion-driven MHD turbulence and the dissipation of this MHD turbulence owing to ion-neutral friction, possibly accounting for the roughly constant filament width of ∼0.1 pc (Hennebelle & André 2013). In this "compress, accrete, and fragment" scenario, sterile fiber-like structures would correspond to portions of the accretion flow onto the central filament (see also Clarke et al. 2018), while fertile fiber-like structures would be the direct imprint of accretion-driven MHD waves within the main filament system.
While further observational constraints will be needed to fully discriminate between the above three pictures, the available constraints tend to support a scenario intermediate between the fray and fragment and the fray and gather pictures, perhaps more similar to the compress, accrete, and fragment picture that we propose here. Indeed, the NGC 6334 region may be affected by cloud-cloud collision (see also Appendix E), and some of the N2H+ velocity-coherent sub-structures identified here (e.g., F-5) are not aligned with, but roughly perpendicular to, the main filament traced at 350 µm by ArTéMiS and in N2H+ by the two main sub-structures (F-1, F-2 in Table 3). Furthermore, Shimajiri et al. (2019) recently reported kinematic evidence that the B211/B213 filament in Taurus may have formed inside a shell-like structure resulting from large-scale compression.
Conclusions
To study the detailed density and velocity structure of the massive filament in the NGC 6334 complex, we carried out ALMA observations at ∼3″ resolution in the 3.1 mm continuum and the N2H+(1-0), HC5N(36-35), HNC(1-0), HC3N(10-9), CH3CCH(6-5), and H2CS(3-2) lines. Our main results may be summarized as follows:
- Both the 3.1 mm continuum emission and the N2H+(1-0), HC5N(36-35), HC3N(10-9), CH3CCH(6-5), H2CS(3-2) lines detected with ALMA trace the dusty filament imaged earlier at 350 µm at lower (8″) resolution with the ArTéMiS camera on APEX.
- We identified a total of 40 compact dense cores in the ALMA 3.1 mm continuum map, 26 of them being embedded in the NGC 6334 filament. The majority (21/26 or 80%) of these dense cores are also detected in N2H+(1-0) emission. The median core mass is 10 +3/−2 M⊙ (lower quartile: 8 M⊙, upper quartile: 13 M⊙), compared to a 5σ mass sensitivity of 2 M⊙.
- The CMF derived from the sample of ALMA cores in the NGC 6334 filament (Fig. 9) presents a peak at the median core mass of ∼10 M⊙, which lies an order of magnitude higher than the peak of the prestellar CMF measured with Herschel in nearby clouds.
- The projected separation between ALMA dense cores is 0.03-0.1 pc, which is roughly consistent with the effective Jeans length within the filament. The ALMA cores can be grouped into seven groups, approximately corresponding to dense clumps seen in the ArTéMiS 350 µm continuum map. The projected separation between these groups is 0.2-0.3 pc, which roughly agrees with the characteristic spacing of four times the filament width expected from the linear fragmentation theory of nearly isothermal gas cylinders. These two distinct fragmentation scales are suggestive of two fragmentation modes: a cylindrical mode corresponding to groups of cores, and a spherical, Jeans-like mode corresponding to cores within groups.
- We also identified five fiber-like, velocity-coherent sub-structures within the filament by applying the getfilaments algorithm to the ALMA N2H+(1-0) data cube. The typical length of these fiber-like structures is 0.5 pc and the projected velocity difference between them is ∼0.8 km s⁻¹. Only two or three of these five velocity-coherent features are well aligned with the NGC 6334 filament and may represent genuine, intertwined fiber sub-structures. The other two detected velocity-coherent features may rather trace accretion flows onto the main filament.
- With the important exception of the typical core mass (which is here an order of magnitude higher), the fragmentation properties and velocity structure of the massive (≳500 M⊙ pc⁻¹) filament in NGC 6334 are remarkably similar to the properties observed by Hacar et al. (2013) and Tafalla & Hacar (2015) for the low-mass (∼50 M⊙ pc⁻¹) B211/B213 filament in the Taurus cloud.
- As both regions appear to be affected by large-scale compressive flows, we suggest that the density and velocity sub-structure observed in the NGC 6334 and Taurus filaments may have originated through a similar mechanism, which we dub "compress, accrete, and fragment".
Appendix C: Post-extraction selection of getsources results
In Sect. 3.3.1, we applied the getsources algorithm to extract compact sources from the ALMA 3.1 mm continuum map and N 2 H + data cube. Here, we describe the additional criteria used to select robust compact sources from the getsources extraction results.
The getsources code identified a total of 49 compact sources in the 3.1 mm continuum map. First, we removed all sources lying closer than 3 ALMA beams to the edge of the map. With this criterion, 3 of the 49 initial sources were removed. We also removed 6 sources identified by getsources with a peak intensity below 5σ. As a result of these two selection steps, only 40 robust continuum sources remained.
Then, in order to investigate the potential association of these compact 3.1 mm continuum sources with compact N2H+ emission sources, we also applied the getsources algorithm to each velocity channel map in the N2H+ data cube. In this case, we removed extracted sources with a peak intensity below 3σ in individual channel maps. We selected only compact N2H+ sources detected in at least two contiguous velocity channels (where the channel width corresponds to 0.2 km s⁻¹) and lying within one ALMA beam (∼3″ or ∼0.02 pc) of a 3.1 mm continuum source. As a result, we found that 21 (or 81%) of the 26 continuum cores are associated with compact N2H+ emission.
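The selection logic just described can be summarized in a few lines of code. The Python sketch below is a schematic re-implementation with made-up data structures; only the thresholds quoted in the text are used (3-beam edge margin, 5σ continuum cut, 3σ per-channel cut, at least two contiguous channels, 1-beam matching radius). It is not the actual getsources post-processing script.

    import numpy as np

    BEAM_ARCSEC = 3.0          # ALMA synthesized beam (from the text)
    SIGMA_CONT = 0.2           # hypothetical 1-sigma continuum noise [mJy/beam]

    def select_continuum_sources(sources, map_edges, sigma=SIGMA_CONT):
        """Keep sources >3 beams from the map edge and with peak >= 5 sigma.
        Each source is a dict with 'x', 'y' [arcsec] and 'peak' [mJy/beam]."""
        kept = []
        for s in sources:
            edge_dist = min(s['x'] - map_edges['xmin'], map_edges['xmax'] - s['x'],
                            s['y'] - map_edges['ymin'], map_edges['ymax'] - s['y'])
            if edge_dist > 3 * BEAM_ARCSEC and s['peak'] >= 5 * sigma:
                kept.append(s)
        return kept

    def associate_with_n2hp(core, channel_detections, dv=0.2):
        """True if an N2H+ source (already cut at 3 sigma) lies within 1 beam of the
        core in at least two contiguous velocity channels (channel width dv)."""
        hits = sorted(v for v, dets in channel_detections.items()
                      if any(np.hypot(d['x'] - core['x'], d['y'] - core['y']) < BEAM_ARCSEC
                             for d in dets))
        # two detections are "contiguous" if their channel velocities are adjacent
        return any(abs(v2 - v1) <= dv * 1.5 for v1, v2 in zip(hits, hits[1:]))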
Appendix D: Post-extraction selection of getfilaments results
In Sect. 3.3.2, we summarized the additional criteria used to select robust filamentary (sub-)structures from the getfilaments results. Here, we provide more details.
Our first criterion was to impose that >90% of the pixels on the crest of a robust filamentary structure are detected above the 3σ level. In Fig. D.1a, for instance, >90% of the pixels on the crest of the filamentary structure are detected above 3σ. Thus, this filamentary structure would pass the first criterion. In Fig. D.1b, <90% of the pixels on the crest of the filamentary structure are detected above 3σ. Thus, this structure would be regarded as a fake filamentary structure and would be removed from the detection list.
Then, we connected filamentary structures detected in distinct velocity channels by comparing their spatial distributions in adjacent velocity-channel maps: whenever two structures detected in adjacent velocity-channel maps overlapped over >60% of the pixels, they were connected together.
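A minimal sketch of this channel-linking step is given below (Python with hypothetical boolean masks). Two structures detected in adjacent channels are merged into one velocity-coherent feature when more than 60% of their pixels overlap; here the fraction is taken with respect to the smaller structure, which is one possible reading of the criterion.

    import numpy as np

    def overlap_fraction(mask_a, mask_b):
        """Fraction of the smaller structure's pixels shared with the other one."""
        inter = np.logical_and(mask_a, mask_b).sum()
        return inter / min(mask_a.sum(), mask_b.sum())

    def link_channels(channel_masks, threshold=0.6):
        """channel_masks: list (per velocity channel) of lists of 2-D boolean masks.
        Returns a list of groups, each group a list of (channel, index) pairs."""
        groups, active = [], []            # active: (group_id, mask from previous channel)
        for ch, masks in enumerate(channel_masks):
            new_active = []
            for i, m in enumerate(masks):
                for gid, prev in active:
                    if overlap_fraction(prev, m) > threshold:
                        groups[gid].append((ch, i))
                        new_active.append((gid, m))
                        break
                else:                      # no overlap with any active group: new feature
                    groups.append([(ch, i)])
                    new_active.append((len(groups) - 1, m))
            active = new_active
        return groups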
In practice, our step-by-step procedure to associate velocity-coherent features seen in adjacent velocity channels, from the most blueshifted to the most redshifted channel, can be described as follows. Figures D.2 and D.3 show the velocity channel maps of the compact-source-subtracted N2H+ emission cube that were used for the identification of filamentary structures with getfilaments, together with the crests of these structures in each velocity channel (after post-extraction selection).
In the channel map at V_LSR = −3.6 km s⁻¹, one filamentary structure is detected. This filamentary structure is labeled F-3. In the channel maps at V_LSR = −3.4 and −3.2 km s⁻¹, three filamentary structures are detected in both channels. The spatial distributions of the detected filamentary structures are consistent in both channels. The filamentary structure whose distribution coincides with that of F-3 at −3.6 km s⁻¹ is identified as F-3. The other two filamentary structures are labeled F-4 and F-1.
In the channel map at V LSR = −3.0 km s −1 , three discrete filaments are detected. The spatial distribution of one part of the northernmost filamentary structure coincides with the distribution of F-3 at −3.6 to −3.2 km s −1 . Thus, this filamentary structure is identified as being F-3. The spatial distribution of another one overlaps that of F-1 at −3.4 km s −1 . The spatial distribution of the third structure coincides with the distribution of one of the filamentary structures detected in the next velocity channel (−2.8 km s −1 ). Thus, these features are identified as being the same filamentary structure (F-3).
In the channel maps for V LSR = −2.8 and −2.6 km s −1 , two filamentary structures corresponding to F-3 and F-1 are detected. At −2.6 km s −1 , the filamentary structure shown as a black curve was removed as it is only detected in this channel.
In the channel map at V LSR = −2.4 km s −1 , three filamentary structures are detected. One of them was removed since it is only detected in this channel. The other two discrete structures are identified as F-1 since their distributions overlap with that of F-1 at −2.6 km s −1 .
In the channel map at V_LSR = −2.2 km s⁻¹, four components can be identified. Two discrete structures correspond to F-1 since their distributions overlap with that of F-1 at −2.4 km s⁻¹. (In Table 3, the identified fiber-like features are listed in decreasing order of length.) The filamentary structure labeled F-3 is roughly perpendicular to the other filamentary structures. The spatial distribution of the structure labeled F-2 partly overlaps with the distribution of F-1 at −2.4 km s⁻¹. Thus, there is a possibility that F-1 and F-2 are actually part of the same physical structure. However, the northern portion of F-2 lies slightly to the west of F-1, while the southern part of F-2 lies to the east of F-1. Moreover, at positions where F-1 and F-2 overlap in the plane of the sky, the N2H+ spectra clearly exhibit distinct velocity components (see, e.g., positions (c) and (d) in Fig. 6). Therefore, F-1 and F-2 seem to be distinct velocity-coherent features.
In the channel maps at V_LSR = −2.0 to −1.6 km s⁻¹, two filamentary structures (F-5 and F-2) can be identified. F-2 lies slightly to the west of F-1 in the northern part (Dec > −35:47:30), while the southern part of F-2 lies to the east of F-1 (Dec < −35:47:30). At −1.8 km s⁻¹, one structure was removed since it is detected in only one channel.

[Figure caption fragment: contour levels at 4, 6, 8, 10, 15, 20, 25, 30, 35, and 40σ, where 1σ = 0.2 mJy/ALMA-beam; the magenta filled circle marks the position of radio continuum source D (Rodriguez et al. 1982), associated with a compact HII region.]

Most of the highly blueshifted N2H+ blobs are distributed on the outskirts of this shell-like structure. While the northern part of the NGC 6334 filament is seen in absorption in the Spitzer 8 µm map, such absorption is not seen in the southern part (around the shell-like structure). This suggests that the southern part of the NGC 6334 filament lies behind the exciting star (source D). If the N2H+ blobs were associated with the shell-like structure, their velocities would be expected to be redshifted compared to the surrounding gas material. This is not consistent with the fact that the N2H+ blobs are blueshifted.
We conclude that the highly blueshifted N 2 H + blobs are more likely related to the cloud-cloud collision scenario proposed by Fukui et al. (2018).
Appendix F: Comparison between GAUSSCLUMPS, getsources, and REINHOLD extractions of clumps in the ArTéMiS map
As described in Sect. 4.1, we applied GAUSSCLUMPS to identify seven clumps in the ArTéMiS 350 µm map. To investigate the robustness of this identification, we also applied the getsources algorithm, already used to identify compact sources in the ALMA 3.1 mm continuum map (see Sect. 3.3.1), as well as the REINHOLD algorithm, available as a Python package (pycupid; Berry et al. 2007). REINHOLD marks the edges of emission clumps that have shell or ring shapes. Then, all pixels within each ring or shell are assigned to a single clump. Here, we adopted the following REINHOLD parameters: RMS = 0.08 Jy beam⁻¹, FLATSLOPE = 1σ, FWHMBEAM = beam size, MAXBAD = 0.05, MINLEN = beam size/pixel size, THRESH = 10, and MINPIX = beam size. Seven clumps were identified in the main filament. Figure F.1 compares the distributions of the clumps identified with GAUSSCLUMPS, getsources, and REINHOLD. The getsources algorithm identified seven sources within the main filament. Five of these seven getsources objects coincide with a clump found with GAUSSCLUMPS. REINHOLD also identified seven clumps within the main filament. One clump identified with GAUSSCLUMPS is not identified with REINHOLD, but is identified with getsources. All clumps identified with GAUSSCLUMPS are also identified with getsources and/or REINHOLD. Thus, we conclude that the seven clumps identified with GAUSSCLUMPS are robust. | 2019-10-07T06:24:36.000Z | 2019-10-07T00:00:00.000 | {
"year": 2019,
"sha1": "75fdde8312ab572093342c089b91b3c985a1b72a",
"oa_license": "CCBY",
"oa_url": "https://www.aanda.org/articles/aa/pdf/2019/12/aa35689-19.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "80544cd7b690d67c90a6790755eaa5eeb80e73a4",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
246544561 | pes2o/s2orc | v3-fos-license | Determination of Anti-Mycoplasma Capricolum Subsp. Capripneumoniae Antibodies For The Sero-Epidemiology of Contagious Caprine Pleuropneumonia
Contagious caprine pleuropneumonia (CCPP) is a fatal disease of goats and causes huge economic losses due to high morbidity and mortality. CCPP is listed as a notifiable animal disease by the OIE. The causative agent of CCPP is Mycoplasma capricolum subsp. capripneumoniae (Mccp). The present study aimed to investigate the seroprevalence of CCPP in northern Pakistan. The study areas were divided into three zones: northern zone, central zone and tribal zone. A total of 1300 serum samples were collected during November 2017 to April 2019 from goats of different age and sex and were analysed by a monoclonal antibody based cELISA. The analyses revealed 227 (17.5%) samples positive for anti-Mccp antibodies. The zone-wise distribution of CCPP in goats was significantly different (P < 0.05), with positive sera for Mccp in 23% of animals from the northern zone, followed by 15% and 13% of animals from the tribal and central zones, respectively. The analysis of the data showed a non-significant difference in seroprevalence between bucks and does, with anti-Mccp sera from 16.6% of bucks and 18.3% of does. Moreover, among the different age groups, the prevalence of disease in adult goats (20%) was significantly (P < 0.05) higher than in kids (10.8%). It is evident from the present study that CCPP caused by Mccp is prevalent in Pakistan and that both sexes of animals are equally susceptible to Mccp infection. Furthermore, the disease is more prevalent in the northern zone.
Introduction
Goat farming in developing countries like Pakistan faces several problems, including poor husbandry practices, extreme environmental stresses, deficiency of good-quality feed stuff, and various infectious, parasitic and other diseases. Among the infectious bacterial diseases, the mycoplasmal diseases are well known all over the world for their pathogenicity and the high economic losses directly affecting farmers (Regassa et al., 2010). The prevalence of caprine mycoplasmosis has been documented in different regions of the globe; however, the occurrence of these diseases is more frequent in Asia and Africa, and they are considered a major constraint to the goat industry in terms of high morbidity and mortality (Tigga et al.). Mycoplasma species such as Mycoplasma putrefaciens are involved in inducing manifold pathological conditions in small ruminants, like mastitis, arthritis, keratoconjunctivitis, pneumonia and septicaemia, usually termed 'MAKePS' (Thiaucourt and Bolske, 1996).
In Pakistan, Mmc was first detected among the Mm cluster by applying various tests to clinical samples from infected goats suspected of CCPP (Khan et al., 1989). In district Pishin of Balochistan, goat flocks were investigated for CCPP on the basis of clinical examination and of biochemical, growth inhibition and PCR tests on the collected samples (nasal swabs, lungs, liver, and intestinal tissues); two mycoplasma species, i.e. Mcc and Mycoplasma putrefaciens (MP), were found prevalent in the affected goats (Awan et al., 2008). The prevalence of respiratory mycoplasmal infection in different regions of Pakistan was investigated by latex agglutination test (LAT) for Mccp, growth inhibition test (GIT) for Mmc and PCR for the Mm cluster: forty-eight samples were positive for Mmc on GIT, none of the serum samples was positive for Mccp, whereas thirty-five samples were confirmed as Mm cluster by the molecular technique (Shahzad et al., 2012). Similarly, Mmc was also found in samples collected from goats showing similar signs of CCPP. This project was designed to investigate the serological prevalence of CCPP in the Khyber Pakhtunkhwa Province and northern areas of Pakistan for effective therapeutic interventions and a control strategy to prevent CCPP outbreaks in the goat population.
Study Area
This study was performed across Khyber Pakhtunkhwa and Gilgit-Baltistan, Pakistan. The selected districts, along with the tribal districts, were divided into three zones, namely the northern zone, the central zone and the tribal zone. The northern zone includes Gilgit-Baltistan, Chitral, Swat, Buner, and Hazara. The central zone includes the Charsadda, Mardan, Swabi, Peshawar, and Nowshera districts. The tribal zone includes the tribal districts of Khyber, Bajour, and Mohmand. The climate of the northern zone is extremely cold in winter, with heavy rainfall and snowfall, and pleasant in summer. The tribal region, comprising the formerly federally administered areas (now part of Khyber Pakhtunkhwa Province), borders Afghanistan and has an extreme climate in both summer and winter.
The climate of the central zone is hot and humid (Fig. 1).
Sample size
A total of 1300 serum samples were collected from goats during November 2017 to April 2019. All the samples were collected on the basis of clinical signs, i.e. mucopurulent nasal discharge, productive cough, deep abdominal respiration (Fig. 2), pyrexia (40-41 °C), a history of respiratory infection, and no vaccination record against CCPP. The samples were collected from animals of different age and sex groups. The number of samples collected from each zone is presented in Table 1. The laboratory work and analysis of the results were performed in the pathology laboratory, College of Veterinary Sciences, Faculty of Animal Husbandry and Veterinary Sciences, The University of Agriculture Peshawar.
Harvesting of serum
Blood samples were collected from the jugular vein of each animal under aseptic conditions. The blood was transferred to a gel-containing tube and centrifuged for 5 minutes at 5000 rpm. The serum was transferred to a sterile Eppendorf tube in an aseptic environment and stored at -20 °C until further processing.
IDEXX CCPP Antibody Test
The serum was subjected to a cELISA test for the detection of Mccp-specific antibodies. The procedure was performed following the manufacturer's instructions (IDEXX). The optical density (OD) values of the samples and controls were measured and recorded at a wavelength of 450 nm using the ELISA plate reader.
2.4.1 Serum sample absorbance (OD value) analysis
The absorbance values of the controls and samples were calculated with the formulas specified in the manufacturer's instructions.
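As an illustration only, the Python sketch below computes a generic competitive-ELISA percentage-inhibition value from raw OD readings and applies a 50% cut-off. Both the formula and the cut-off are common conventions for cELISA kits and are assumptions here; they are not the exact expressions or thresholds of the IDEXX CCPP kit.

    def percentage_inhibition(od_sample, od_neg_control):
        """Generic cELISA percentage inhibition (assumed formula, not the kit's)."""
        return 100.0 * (od_neg_control - od_sample) / od_neg_control

    def classify(od_sample, od_neg_control, cutoff=50.0):
        """Samples at or above the assumed cut-off are scored positive."""
        pi = percentage_inhibition(od_sample, od_neg_control)
        return "positive" if pi >= cutoff else "negative"

    # Example with made-up OD450 readings
    print(classify(od_sample=0.35, od_neg_control=1.20))   # -> positive (PI ~ 70.8%)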
Statistical Analysis
All the data obtained were compiled in a Microsoft Excel spreadsheet and analysed with the Chi-square test. The confidence level was taken as 95%, and a p value of less than 0.05 was considered significant in all analyses.
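A minimal example of such an analysis is sketched below with scipy. The positive/negative counts per zone are hypothetical placeholders standing in for the Table 2 data, since only the percentages and the overall total are quoted in the text.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: zones (northern, central, tribal); columns: (positive, negative).
    # Counts are illustrative only, chosen to match the quoted percentages roughly.
    observed = np.array([
        [115, 385],   # northern zone, ~23% positive
        [ 52, 348],   # central zone,  ~13% positive
        [ 60, 340],   # tribal zone,   ~15% positive
    ])

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
    print("significant at 0.05" if p < 0.05 else "not significant at 0.05")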
Results
A total of 1300 samples were collected in the northern, central, and tribal zones from goats with natural respiratory infection that were considered suspect for CCPP. Screening of the samples with the monoclonal antibody based cELISA test revealed 227 (17.5%) samples positive for CCPP.
Prevalence of CCPP in different zones
The region-wise prevalence of the disease in goats was recorded as 23% in the northern zone, followed by 15% in the tribal zone and 13% in the central zone. Analysis of the data with the Chi-square test revealed a significant difference (P < 0.05) in the prevalence of disease among the different zones of the studied areas (Table 2).
Gender based prevalence of CCPP in different zones
Out of the total of 1300 samples, 650 serum samples were collected from bucks and 650 from does. Processing of the samples by cELISA revealed 108 (16.6%) male and 119 (18.3%) female goats positive for anti-Mccp antibodies. Analysis of the data by the Chi-square test revealed a non-significant difference (P > 0.05) in the prevalence of disease between the sexes (Table 3).
Age wise prevalence of CCPP in goats
Out of 900 samples collected from adult goats above 180 days of age, anti-Mccp antibodies were detected by cELISA in 184 (20%) samples, while in young animals (age: day 1 up to 6 months) 43 (10.8%) samples showed reactive antibodies against the Mccp pathogen. The difference in prevalence between the two age groups was significant (P < 0.05) (Table 4).
District wise prevalence of CCPP in studied area
In the northern zone, the highest prevalence of the disease was found in animals from Gilgit-Baltistan (27.4%), followed by animals from Chitral, Hazara, Buner and Swat (24.4%, 23%, 19.7%, and 16.8%, respectively) (Table 5).
The central zone comprises five districts (Charsadda, Mardan, Swabi, Peshawar, Nowshera). The prevalence of Mccp recorded in the present study was highest in Charsadda (17.5%), followed by Mardan, Swabi, Nowshera and Peshawar (15%, 13.7%, 11.2% and 7.5%, respectively). The lowest occurrence of the disease was recorded in district Peshawar (Table 6). In the tribal zone, the maximum frequency of occurrence of CCPP was recorded in tribal district Khyber (18.6%) and tribal district Bajour (14%), while the lowest occurrence of the disease was noted in tribal district Mohmand (11%) (Table 7). The highest frequency of the disease in the present study was recorded in the northern region of Pakistan. This could be due to pastoral practices, the extremely cold environmental conditions, an improper management system for large flocks (up to 2000 animals in a single flock), and the movement of animals in the winter season (end of October to start of April) for grazing. These factors may lead to immunosuppression and make the goats susceptible to the proliferation of pathogenic Mycoplasma and subsequent infection. It has been reported that flock size, the management and production system of the small ruminants, and carrier animals in the vicinity play a key role in the extent of the disease in small ruminants (Sherif et al., 2012). | 2022-02-05T16:28:14.930Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "e545c70307d2057328aba857ff31fbc70622313c",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1162481/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7e4afc5b6d2d9d49b4e950e565f2c4649be5840f",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": []
} |
48352605 | pes2o/s2orc | v3-fos-license | A Novel Multiple-Bits Collision Attack Based on Double Detection with Error-Tolerant Mechanism
Introduction
Although modern cryptographic algorithms have been proven to be mathematically secure, this does not mean that their physical implementations are safe enough, since an attacker can obtain physical information from side channels. Side-channel attack (SCA) was proposed almost 20 years ago, first put forward in 1996 by Kocher [1], and has become a powerful cryptanalysis technique. Power consumption analyses are widely used in SCA, exploiting the relation between the power consumption or electromagnetic signal of the executing device and the processed data in order to recover the key value. Since Differential Power Analysis (DPA), whose distinguisher is the difference of the mean traces, was proposed in 1997 [2], various distinguishers have been designed and improved to enhance attack ability and efficiency, for example, the Pearson correlation coefficient as a distinguisher for Correlation Power Analysis (CPA) [3], mutual information for Mutual Information Analysis (MIA) [4], and maximum likelihood for Template Attack (TA) [5,6] and Template Based DPA [7]. However, the necessity of estimating and establishing a leakage model has been a serious restriction for SCA, one that collision attack can avoid. Collision attack was first proposed to analyze hash algorithms [8] and has become a branch of mathematical cryptanalysis, but it only reveals relations between inputs and outputs without exploiting internal information as SCA does.
As a combination of SCA and collision attack, side-channel collision attack can exploit the information of internal leakage without a large number of power traces and without knowledge of the leakage model. Side-channel collision attack showed strong attack ability when first presented against the Data Encryption Standard (DES) by Schramm et al. [9], and it was successfully applied to AES [10] later. Then all kinds of improved versions [11-17] of side-channel collision attack sprang up, and most of these methods show high sensitivity to errors: the recovered key is totally wrong even when an error occurs in only 1 bit under high noise levels, leading to low efficiency. Bogdanov presented some voting detection methods that seem more practical [14], but they need too many traces in a profiling phase, and encrypting the same plaintexts repeatedly to decrease the influence of noise may not be realistic. In 2010, Moradi proposed a correlation-enhanced method [15] that improves the probability of detecting collisions, but it may need many averaged power traces to perform an attack and is sensitive to errors. In 2011, Bogdanov proposed an attack strategy [17] that uses the results of DPA to test chains separately. This method can improve the success probability to some extent, but it cannot check mistakes in collision detection, which strongly impact the attack results. Then Gérard et al. combined Low Density Parity Check (LDPC) decoding with the correlation-enhanced and Euclidean Distance detection methods in 2012 [16], which can be a globally efficient attack strategy in noisy settings. Two side-channel collision attack procedures based on bitwise collision detection were proposed by Ren et al. [18] in 2015 and by Wang et al. [19] in 2017, respectively, which may perform poorly in terms of detection success rate under high-level noise. In summary, the efficiency of collision detection and the lack of an error-tolerant and check mechanism are the two main issues of existing side-channel collision attacks.
Our Contribution. In this paper, we propose a novel multiple-bits collision attack framework. In particular, double distance voting detection (DDVD) and an error-tolerant and check mechanism are presented to ensure high accuracy. In addition, we compare our collision detection method, called DDVD, with the Euclidean Distance and the correlation-enhanced collision methods under different intensities of noise, which indicates that our detection technique has a better performance in noisy circumstances. Furthermore, the 4-bit collision attack is proven to be optimal in theory and experiments. Practical attack experiments are performed successfully on a hardware implementation of AES on an FPGA board.
The remainder of this paper is organized as follows. In Section 2, for a better understanding, we introduce some notations of our method as well as the basic linear collision attack, and then review the binary and ternary voting detection methods, the correlation-enhanced collision attack, and the LDPC decoding method in collision attack. In Section 3, a novel framework of multiple-bits collision attack is presented, and we take the 4-bit model as an example to explain the attack procedure. In Section 4, we propose an improved version with an error-tolerant and check mechanism. In Section 5, we compare our collision detection method with other widely used detection techniques under different intensities of noise and analyze our model; the experiments as well as the comparisons are also shown. Finally, we give the conclusion in Section 6.
Preliminaries
In order to understand the strategy easily, AES is chosen as the target block cipher for the attack method. As for the hardware implementation used in this paper, it operates each of the 16 S-boxes used for the SubBytes operation sequentially, one by one. The proposed statements and techniques can also be applied to other symmetric cryptographic algorithms.
Notations.
For a better description of the proposed method, we define some notations as follows. First, we use the letters k and p for the 16-byte plaintext and the first-round subkey, with subscripts indicating a particular byte: p = (p_1, p_2, ..., p_16) and k = (k_1, k_2, ..., k_16). Then we use the superscript letters m and l for the 4 most significant bits and the 4 least significant bits, respectively, meaning that k_i^m ≜ k_i[7:4], p_i^m ≜ p_i[7:4], k_i^l ≜ k_i[3:0], and p_i^l ≜ p_i[3:0]. Next, the attacker is able to choose the value of the plaintext while the key value stays the same; a numerical superscript states that the 4 most significant bits or the 4 least significant bits are equal to that value in decimal format. Each trace acquired for the first-round encryption contains 16 subtraces due to the 16 sequential S-boxes, with a subscript indicating a particular S-box; each subtrace contains a number of points, which are denoted by a second subscript. Furthermore, we use t^f to denote the power trace corresponding to a plaintext whose 4 most (or least) significant bits in all 16 bytes equal f in decimal format, namely {p_i^m = f}_{i=1}^{16} (or {p_i^l = f}_{i=1}^{16}). However, if the superscript is a specific number, it shows that the plaintext byte equals this value or that the power trace corresponds to the plaintext byte with this value. For example, p_1^{128} means that the first byte of the plaintext equals 128 in decimal format, and t_1^{128} denotes the power trace of the first S-box operation with the corresponding plaintext byte being 128. Meanwhile, we use t(n) and p(n) for the nth acquisition of power traces and plaintexts, respectively.
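To make the nibble notation concrete, the following short Python sketch (illustrative only) extracts the 4 most and 4 least significant bits of a plaintext byte, matching the p_i^m and p_i^l notation above.

    def split_nibbles(byte):
        """Return (4 MSBs, 4 LSBs) of an 8-bit value, as in p_i^m and p_i^l."""
        assert 0 <= byte <= 0xFF
        return byte >> 4, byte & 0x0F

    p1 = 128                      # example plaintext byte, p_1 = 0b1000_0000
    print(split_nibbles(p1))      # -> (8, 0): p_1^m = 8, p_1^l = 0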
Linear Collision Attack.
The internal collision was first presented for attacking DES [9]. It is based on the fact that if a collision on a key-dependent function can be detected, the attacker can acquire some relations between the different inputs.

Linear collision is based on the internal collision. When it is applied to AES, if a collision between two S-box operations of the first round is detected (e.g., the collision between the ith and the jth S-boxes in Figure 1), it is obvious that (4) holds:

S(p_i ⊕ k_i) = S(p_j ⊕ k_j).    (4)

Since the AES S-box is a bijection, one can then obtain a linear equation describing the relation between the plaintexts and the first-round subkey:

k_i ⊕ k_j = p_i ⊕ p_j.    (5)

If the attacker can find all possible relations among the 16 key bytes by detecting collisions between S-boxes, then he will obtain an equation set about the key bytes of the first round containing 15 linear equations:

Δk_{1,2} = k_1 ⊕ k_2,  Δk_{2,3} = k_2 ⊕ k_3,  ...,  Δk_{15,16} = k_15 ⊕ k_16,    (6)

where each Δk value is obtained from a detected collision. Note that all the equations in the set are linked, and there is only one free variable. Thus, this equation set has only 2^8 possible solutions, which means that we just need to enumerate all 256 possible candidates of one key byte to recover the whole key value. However, in a noisy experimental setting, the collision detection method may detect wrong collisions, which lead to incorrect equations in (6). Since all equations in (6) are chained together, even if an error of only one bit occurs in any equation, the key recovery fails. Thus, in this paper, we propose a detection method called DDVD to ensure a high detection success rate.
Voting Detection Methods.
In [14], Bogdanov proposed voting detection comprising a binary voting test and a ternary voting test. Both are based on the Euclidean Distance. For the binary voting test, if an attacker can acquire many power traces for the same plaintexts, he calculates the Euclidean Distance between trace pairs corresponding to different plaintexts. The attacker then counts the trace pairs whose distance is less than a predetermined threshold. When this count exceeds a predetermined voting value, a collision is declared. The basic strategy of the ternary voting test is the same as that of the binary voting test, but instead of calculating the Euclidean Distance directly, this method calculates the distance between each of the power traces obtained for certain plaintexts and a set of reference power traces prepared during a profiling phase, without knowing the related encryption values.

Correlation-Enhanced Collision Attack. The correlation-enhanced collision attack was one of the last major advanced detection techniques, proposed by Moradi et al. in 2010 [15]. This method compares the correlation coefficient between two sets of power traces corresponding to two different S-boxes rather than detecting a collision between two single power traces.
As can be seen in Algorithm 1, we take the detection between S-box 1 and S-box 2 as an example. In the online stage, the attacker obtains N power traces corresponding to N plaintexts. In the offline stage, the attacker cuts each power trace into 16 sections based on the operation of the 16 S-boxes and takes the sections for S-box 1 and S-box 2. Then the subtraces of S-box 1 are divided into 256 groups according to the plaintext byte value a, and the attacker computes the averaged power trace of each group, {T_1^a}_{a=0}^{255}; the same is done for S-box 2. Next, for each value of Δ ∈ GF(2^8), the attacker rearranges {T_2^a} based on the value of a ⊕ Δ and calculates the correlation coefficient ρ(Δ) between {T_1^a}_{a=0}^{255} and {T_2^{a⊕Δ}}_{a=0}^{255}. If Δ = k_1 ⊕ k_2, the correlation ρ(Δ) reaches a maximum value; otherwise, it has a low value.
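A compact numpy rendition of this detection step is sketched below; array names and shapes are illustrative, and correlating the concatenated mean traces is a simplification of the original per-point correlation. For each candidate Δ it correlates the 256 mean traces of S-box 1 with the Δ-permuted mean traces of S-box 2 and returns the best-scoring Δ.

    import numpy as np

    def correlation_enhanced_delta(traces1, traces2, pt1, pt2):
        """traces1, traces2: (N, P) subtraces of S-box 1 / 2; pt1, pt2: (N,) plaintext bytes.
        Assumes every byte value 0..255 occurs at least once in pt1 and pt2.
        Returns the candidate Delta = k1 XOR k2 with the highest correlation."""
        P = traces1.shape[1]
        mean1 = np.zeros((256, P)); mean2 = np.zeros((256, P))
        for a in range(256):                       # average traces per plaintext byte value
            mean1[a] = traces1[pt1 == a].mean(axis=0)
            mean2[a] = traces2[pt2 == a].mean(axis=0)
        scores = np.empty(256)
        for delta in range(256):                   # correlate mean1[a] with mean2[a ^ delta]
            x = mean1.ravel()
            y = mean2[np.arange(256) ^ delta].ravel()
            scores[delta] = np.corrcoef(x, y)[0, 1]
        return int(np.argmax(scores)), scores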
LDPC Decoding Problem in Collision Attack

In [16], Gérard et al. proposed a unified and optimized collision attack method. The proposed method rewrote the linear collision attack as an LDPC decoding problem, according to the linear relationship

Δk_{i,j} ⊕ Δk_{j,l} ⊕ Δk_{i,l} = 0.    (7)

The vector ΔK = (Δk_{1,2}, Δk_{1,3}, ..., Δk_{15,16}) can be seen as an LDPC codeword whose dimension is 15 and whose length is 120. Finding the right key value then amounts to decoding this LDPC code. Furthermore, in order to make the attack perform better in noisy settings, the actual a posteriori probability of each code symbol is used for LDPC decoding, which is called soft decision decoding. Unlike soft decision decoding, hard decision decoding uses the bit values themselves for decoding. Compared to soft decision decoding, hard decision decoding performs worse in a noisy setting, but its computational complexity is lower.

In this paper, for the error-tolerant and check mechanism, we choose the top three possible values for each Δk_{i_1,i_2} as candidates and find the likeliest value based on (7). This may be seen as a kind of hard decision decoding procedure. However, thanks to DDVD, the detection success rate remains at a high level even in some noisy settings.
A Novel Framework of Multiple-Bits Collision Attack
In this section, a framework of multiple-bits side-channel collision attack is presented. As can be seen in Figure 2, plaintexts need to be chosen based on the multiple-bits (n-bit) model, and then the power traces are prepared. Double distance voting detection is the important part of the framework, ensuring a high success probability along with high efficiency. The principal part of the framework is a loop: each iteration stands for an attack in which we obtain only n bits of the byte relations between all key bytes. After several iterations, the whole byte value of Δ between all key bytes can be acquired and used to recover the key value. Since the 4-bit collision attack leads to the highest efficiency, which will be proven and verified in Section 5, we take the 4-bit model as an example to explain our attack method. According to our attack framework, for the 4-bit model, 2 iterations are enough to recover the key value, one for the four most significant bits and the other for the four least significant bits. In the rest of this paper, we only describe the strategy for the four most significant bits of each key byte. The remaining four least significant bits can be found using the same technique. The statements below can also be applied to other multiple-bits models: for example, some parameters of the 4-bit model are 15 or 16, which should be interpreted as 2^4 − 1 or 2^4, and thus for an n-bit model the corresponding parameters are 2^n − 1 or 2^n. Like most other attack strategies, our method first acquires some traces and proceeds with some preprocessing, as seen in Steps 1, 2, and 3. Double distance voting detection is the core of our method, combining enhanced Euclidean Distance detection and voting detection, which ensures our success rate. Finally, based on the main idea in Section 2.2, once we find all possible relations among the 16 key bytes, a brute-force search finds the right key value quickly, since we only need to enumerate 256 key values. Some details are presented in the following sections.
Choose Plaintexts.
According to our attack strategy, we assume that the attacker is able to choose the plaintext. The 4-bit side-channel collision attack model aims to detect the collision of 4 bits between 2 different S-boxes, and in this situation the other 4 bits act as noise for the detection. It is therefore important for the efficiency of our method to determine how to choose the values of {p_i^m}_{i=1}^{16}, with the other nibbles {p_i^l}_{i=1}^{16} being random. As for preprocessing the power traces, the detailed procedures are stated in Algorithm 4. For each of the trace sets, we average all N power traces in the set into a single averaged trace. Each of the averaged power traces is composed of 16 subtraces corresponding to the 16 sequential S-box operations and can be cut into these 16 subtraces. Thus, we obtain 16 averaged power traces, each containing 16 subtraces.
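The plaintext-selection and averaging steps can be sketched as follows (Python, illustrative; the capture function is a hypothetical stand-in for the oscilloscope interface). For the 4-MSB iteration, the high nibble of every byte is fixed to the same value f ∈ {0, ..., 15} while the low nibbles are drawn at random, and the N traces of each set are averaged.

    import os
    import numpy as np

    def make_plaintext_msb(f):
        """16-byte plaintext whose high nibble is f in every byte; low nibbles random."""
        return bytes([(f << 4) | (b & 0x0F) for b in os.urandom(16)])

    def build_averaged_traces(capture_trace, N):
        """capture_trace(plaintext) -> 1-D power trace (hypothetical acquisition hook).
        Returns a dict f -> averaged trace over N acquisitions."""
        averaged = {}
        for f in range(16):                               # one set per high-nibble value
            traces = [capture_trace(make_plaintext_msb(f)) for _ in range(N)]
            averaged[f] = np.mean(traces, axis=0)
        return averaged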
Double Distance Voting Detection. Double distance voting detection is the core of our attack technique, ensuring a high success rate and stability. As seen in Algorithm 5, DDVD is composed of enhanced Euclidean Distance detection and voting detection. For a better understanding of our DDVD technique, a diagrammatic sketch is shown in Figure 3. Taking S-boxes 1 and 2 as an example, there are 16 averaged subtraces for each S-box, one per value of the 4 most significant bits. The input of Algorithm 5 is 2 sets of subtraces, and its output is the 4 most significant bits of Δk_{i_1,i_2}, i.e., Δk^m_{i_1,i_2}.
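The sketch below (Python/numpy, hypothetical data layout) captures the spirit of DDVD as described here: each of the 16 subtraces of one S-box acts as a decision making unit that votes for the Δ giving the smallest Euclidean distance to the other S-box's subtraces, and the majority vote is returned. It is a schematic reconstruction, not the exact Algorithm 5.

    import numpy as np
    from collections import Counter

    def ddvd_4bit(sub1, sub2):
        """sub1, sub2: arrays of shape (16, P) -- averaged subtraces of two S-boxes,
        indexed by the 4-MSB value f of the corresponding plaintext bytes.
        Returns the majority-voted candidate for the 4 MSBs of k_i1 XOR k_i2."""
        votes = []
        for f in range(16):                                 # one decision making unit per f
            dists = np.linalg.norm(sub2 - sub1[f], axis=1)  # Euclidean distance to each g
            g = int(np.argmin(dists))                       # closest subtrace of S-box 2
            votes.append(f ^ g)                             # colliding nibbles imply f ^ g = delta
        return Counter(votes).most_common(1)[0][0]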
Improved Framework
In this section, we propose an improved framework of the multiple-bits side-channel collision attack, where we modify our double distance voting detection and insert the error-tolerant and check mechanism. As shown in Figure 4, in this new framework the modified double distance detection works together with the error-tolerant and check mechanism, which leads to a remarkable improvement in the success rate as well as the attack efficiency. We still take the 4-bit model as an example to describe the improved attack framework. The procedure is shown in Algorithm 6. As in Section 3, we only consider the four most significant bits; the four least significant bits are handled in almost the same way. The ChoosePlaintexts, AcquireTrace, and PreprocessTrace algorithms are all the same. In the rest of this section, we only explain the modified double distance voting detection and the new error-tolerant and check mechanism.
Modified Double Distance Voting Detection.
The modified detection method is shown in Algorithm 7. Just like the original one, the input of the DDVD is still 2 sets of subtraces corresponding to 2 different S-boxes, but the output changes from a single value to a 1×3 matrix containing 3 candidate values of Δk_{i_1,i_2}. The Euclidean Distance between each subtrace of one S-box and the set of subtraces of the other S-box is also calculated first. Then, instead of choosing the value with the maximum number of votes as the result, we keep the three values whose vote counts are in the top three as candidates for each Δk_{i,i+1}, where i ranges from 1 to 15.
Error-Tolerant and Check Mechanism.
The error-tolerant and check mechanism is presented in Algorithm 8. The three candidate values provide the error tolerance. The main idea of the error detection and tolerance is based on (6), which provides a way to find errors occurring in collision detections.
In order to recover the key value correctly, the 15 delta values (Δk_{1,2}, Δk_{2,3}, ..., Δk_{15,16}) should all be right. Thus, for each candidate of Δk_{i,i+1}, all existing relations in (7) should be checked. If there exists a candidate of Δk_{i,i+1} that can pass the check, it becomes the final result for Δk_{i,i+1}; otherwise this attack is considered to have failed and should start from the beginning again.
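A simplified rendition of this check is sketched below (Python, illustrative). For each adjacent pair it tries its three candidates and keeps an assignment that is consistent, via the XOR relation of (7), with detected cross relations Δk_{i,i+2}; the cross dictionary is a hypothetical extra input, since the excerpt does not specify exactly which relations Algorithm 8 uses.

    def select_candidates(cand, cross):
        """cand: dict (i, i+1) -> up to 3 candidate deltas from the modified DDVD.
        cross: dict (i, i+2) -> a detected delta between bytes i and i+2 (assumed input).
        Returns one assignment {(i, i+1): delta} consistent with
        Delta(i,i+1) ^ Delta(i+1,i+2) == Delta(i,i+2) for all i, or None on failure."""
        def extend(i, prefix):
            if i == 16:                      # pairs (1,2) ... (15,16) all chosen
                return prefix
            for d in cand[(i, i + 1)]:
                # prune with relation (7) on the triple (i-1, i, i+1)
                if i > 1 and prefix[(i - 1, i)] ^ d != cross[(i - 1, i + 1)]:
                    continue
                result = extend(i + 1, {**prefix, (i, i + 1): d})
                if result is not None:
                    return result
            return None
        return extend(1, {})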
Comparison of Detection Success Rate under Noise. The collision detection technique plays an important role in side-channel collision attack. In this section, we compare the double distance voting detection (DDVD) proposed in this paper with two other widely used detection techniques, namely correlation-enhanced detection [15] and traditional Euclidean Distance detection with dimension reduction [17].
[Algorithm 8 (Error-tolerant) listing fragment: Input: 3 candidates for each Δk.]

If the collision detection generates incorrect results, DPA makes no sense for recovering the right key value. Therefore, the success rate of the Euclidean Distance detection used in that method is the key part.
The power traces are obtained from an AES hardware design implemented on a SAKURA-G board. Each trace is averaged over four power traces with the same input. The noise in the traces usually comes both from electronic noise, mainly containing power supply noise, clock generator noise, conducted emissions, and radiated emissions, and from algorithmic noise, which is the power consumption of other uncorrelated operations. Since SAKURA-G is a dedicated board that is far from noisy, we add Gaussian noise of different intensities to the averaged traces to model the noise, which can be used for an initial analysis of the efficiency of different detection techniques [14]. SNR (signal-to-noise ratio) is used to indicate the intensity of the noise and is defined as

SNR = 10 · log10(Var(signal) / Var(noise)) [dB].

The comparison result is shown in Figure 5. The detection technique proposed in this paper is marked DDVD, and the correlation-enhanced method in [15] and the Euclidean Distance method in [17] are denoted CE and ED, respectively. The value of SNR ranges from 0 dB to 30 dB. Each technique is run 1000 times to calculate the success rate. As shown in Figure 5, our detection technique performs better under noise.
How Many Bits Are Best for Multiple-Bits Model.
In this paper, we propose a multiple-bits collision attack model, and all the statements are based on the 4-bit model, which can be expanded to other n-bits (n ranging from 1 to 8) models.However, one question we should figure out is how many bits are best for the multiple-bits model of our attack method.
What matters in our attack strategy is the necessary number of power traces to reach a given success rate, which reflects the attack efficiency.Therefore, we will analyze the necessary number of power traces for different model both in theory and in experiment.
In Section 5.1, the performance of our detection method in noise environment is presented; thus, for a simple analysis in this section, we assume that the noise for an n-bits model only comes from the operation of other 8-n bits and all 2 decision making units in Figure 3 are independent.So, for the n-bits model, when preparing the power traces (Sections 3.2 and 3.3), we need h power traces of each byte to obtain the averaged traces.When we compute the Euclidean Distance between two single averaged traces whose n-bits can cause the collision (e.g., the probability that these two averaged traces have the least Euclidean Distance is where letter C is denoted as combinatorial number, n equals the number of bits in the model, and h equals the number of traces for calculating the average.Due to the fact that 8 n-bits of one byte are random, there are a total of ℎ 2 8− × ℎ 2 8− kinds of choices to determine these two averaged power traces.Since there shall be only one corresponding plaintext of one byte which can cause a collision with each plaintext of another byte, there are a total of ℎ 2 8− × ℎ 2 8− −ℎ kinds of choices that include no collision plaintext pair.
The probability of successful detection for the method proposed in Section 3 and the improved method presented in Section 4 is calculated separately as follows: where Pr is equal to (9) and n is the number of bits in the model.From the illustration of Figure 3, there are 2 decision making units for the n-bits model.According to the rules of voting detection that the value that occurs the maximum times is chosen as the final result, if more than half of the decision making units generate the wrong answer, the voting detection shall fail.Result of all 2 decision making units can be seen as the binomial distribution, so the probability that more than half of the units generate wrong result is Analysis for the improved method in Section 4 is similar, and if more than threequarters of the decision making units generate the wrong answer, the voting detection shall fail.
As for the necessary number of power traces, it can be calculated as

N_total = ⌈8/n⌉ × 2^n × h,    (12)

where ⌈x⌉ stands for the smallest integer that is not less than x. According to our attack strategy, for each n-bit model we should obtain 2^n averaged power traces, each averaged from h original power traces, and the method should be operated ⌈8/n⌉ times. According to (9), (10), (11), and (12), we estimate the necessary number of power traces to reach a 90% success rate for the basic attack method in Section 3 and the improved version in Section 4, shown in Tables 1 and 2, respectively. The 1-bit model has only two possible results, so the improved method makes no sense for it. It is obvious that, in theory, the 4-bit model collision attack needs the least number of traces and has the highest efficiency, which will be verified later in experiments. Furthermore, practical experiments have been carried out to find how many bits are best for the proposed multiple-bits model. Figure 6 shows the necessary number of power traces to reach a 90% success rate for the original approach presented in Section 3 (denoted MBDD), while Figure 7 shows the necessary number of power traces for the improved version proposed in Section 4 (denoted MBDD-ET). As can be seen in Figures 6 and 7, it is verified that the 4-bit model (in black) is the best choice when operating the proposed attack strategy.
Experiments and Results
. The attack method and the improved method with error-tolerant mechanism have been performed successfully in practice against a hardware design with an 8-bit data path of AES, where 16 S-boxes are sequentially operated in every round operation.The target AES is implemented on a Xilinx SPARTAN-6 FPGA of a SAKURA-G circuit board.An Agilent MSO-X 9104A oscilloscope is employed to collect original power traces.In our case, each power trace obtained contains about 32365 points.
For a better understanding, the accumulation over all points of the squared difference between two subtraces is the value of the Euclidean Distance. We take Figure 8(a) as an example to describe its meaning. The distance curves to the other candidate subtraces are marked by grey curves. If the black curve is lower than all the grey curves, the decision making unit generates the right candidate. An initial and rough conclusion can be drawn: in situations like Figure 8(a), where the black curve is close to zero, the two traces corresponding to a collision have the lowest distance, meaning that the corresponding decision making unit generates the right candidate; but in some exceptional situations, such as Figure 8(p), where the black curve is higher than some grey curves, the collision cannot be identified by the minimum Euclidean Distance and the unit generates a wrong candidate. Therefore, voting detection is used to determine the final value of Δk_{1,2}. As shown in Figure 9, (1011)_2 occurs the maximum number of times, and voting detection chooses it as the final result.
Comparison.
In this section, we compare our improved attack version, denoted MBDD, with the correlation-enhanced collision attack [15], the bitwise collision attack [19], and the LDPC method with Euclidean Distance detection [16], denoted CECA, BCA, and LDPC, respectively. Comparisons are done from three aspects: the relation between success rate and necessary number of traces, the relation between success rate and online time, and the relation between offline time and online time. Each compared method was performed 1000 times to calculate an actual success rate.
In this section, t_V denotes the total time that the oscilloscope spends on capturing and averaging one power trace in real time, and t_S denotes the time that the oscilloscope spends on acquiring and saving one trace. Taking the Agilent MSO-X 9104A oscilloscope that we use for acquiring power traces as an example, t_S is about 50 times t_V. The number of power traces used to obtain one averaged power trace in the oscilloscope is denoted as q, and the number of saved averaged power traces is n. Therefore, the online time T_online can be written as T_online = n × (q × t_V + t_S). We fix q = 6 for this experiment. Figure 10 presents the relations between success rate and number of traces. As can be seen from Figure 10, LDPC has a better performance: to reach a given high success rate, LDPC needs fewer traces. However, in Figure 11, the success rate is shown as a function of the total online time rather than the number of original power traces. As mentioned above, we can decrease the online time because the time an oscilloscope spends on averaging one trace is much less than the time spent on saving one. It is obvious in Figure 11 that the performance of MBDD with the error-tolerant mechanism improves under this setting. Since the 4-bit model of MBDD can find all 120 relations among the 16 key bytes with 32 averaged power traces, the fact that the oscilloscope spends less time on averaging traces helps MBDD to reach a higher success rate within the same online time. Meanwhile, the LDPC method does not benefit from averaging traces as remarkably as MBDD. The reason may be that the collision detection method of LDPC needs more averaged traces to detect all collisions occurring among the 16 key bytes, even if the traces are far from noisy. However, the results of Figures 10 and 11 show that LDPC is more tolerant to noise, because its performance in a noisy setting is almost the same as in a less noisy setting.
Finally, we show the relation between offline time and online time for LDPC and MBDD. The offline time, which reflects the computational complexity, was estimated in MATLAB. As is shown in Figure 12, LDPC is more costly in terms of computation time than MBDD. However, the increased time overhead is slight. For LDPC, the offline time decreases as the online time increases, which indicates that the number of iterations for LDPC decoding decreases. For MBDD, the offline time increases as the online time increases, and it quickly converges to a certain value.
From these comparisons, it can be confirmed that LDPC with soft decision decoding has less trace overhead but more computation time overhead than MBDD, which can be seen as a kind of hard decision decoding procedure.In addition, the necessary number of traces for our method is 90% less than CECA and 96% less than BCA.
Conclusion
In this paper, we proposed a basic multiple-bits side-channel collision attack framework based on double distance voting detection. Then an improved version with a modified double detection as well as an error-tolerant mechanism was presented. The 4-bit model is proven to be the optimal choice for the novel attack strategy in both theory and practice. Practical attack experiments were performed successfully on a hardware implementation of AES on a SAKURA-G circuit board with a Xilinx SPARTAN-6. Results show that our detection method performs steadily in noisy environments. We compared our methods with other attacking methods; our method needs less computation time but more traces than the LDPC method, and to reach a 90% success rate, the necessary number of traces for our method is 90% less than CECA and 96% less than BCA. The novel framework proposed in this paper can be utilized in other symmetric cryptographic algorithms.
Figure 1: Collisions between two S boxes.
Figure 3: Flow of double distance voting detection.
Figure 5: Comparison of detection success rate.
Figure 6: Necessary number of traces for MBDD to reach 90% success rate.
Figure 8(a) shows the squared difference between the two subtraces for each point ranging from 1 to 32365 and each candidate value of Δ 1,2 ranging from 0 to 15. The black curve represents the square of the difference computed with the correct candidate 0⊕Δ 1,2 .
Figure 10: Relations between success rate and number of traces.
Figure 11: Relations between success rate and online time.
Figure 12: Relations between offline time and online time.
Due to the fact that a value belonging to GF(2^4) ranges from (0000) 2 to (1111) 2 , 16N plaintexts should be obtained. 3.3. Acquire Traces and Preprocess Traces. We can obtain 16N power traces of the first round operation corresponding to the 16N plaintexts. The obtained power traces can be divided into 16 sets according to the value of the chosen nibbles (one per key byte, all set equal). Since this common value ranges from 0 to 15, as described in Section 3.2, and each value corresponds to N random values for the remaining nibbles, the number of sets is 16 and each set contains N power traces.
Table 1: The necessary number of traces for the original attacks to reach 90% success rate.
Table 2: The necessary number of traces for the improved attacks to reach 90% success rate. | 2018-06-30T00:51:45.497Z | 2018-06-05T00:00:00.000 | {
"year": 2018,
"sha1": "8dd4f88b90fac6345c3e619354b49a4148edf8dd",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/scn/2018/2483619.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8dd4f88b90fac6345c3e619354b49a4148edf8dd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
54658736 | pes2o/s2orc | v3-fos-license | Numerical Study of Thermal Performance of a Capillary Evaporator in a Loop Heat Pipe with Liquid-Saturated Wick
Heat transfer of a capillary evaporator in a loop heat pipe was analyzed through 3D numerical simulations to study the effects of the thermal conductivity of the wick, the contact area between the casing and the wick, and the subcooling in the compensation chamber (CC) on the thermal performance of the evaporator. A pore network model with a distribution of pore radii was used to simulate liquid flow in the porous structure of the wick. To obtain high accuracy, fine meshes were used at the boundaries among the casing, the wick, and the grooves. Distributions of temperature, pressure, and mass flow rate were compared for polytetrafluoroethylene (PTFE) and stainless steel wicks. The thermal conductivity of the wick and the contact area between the casing and the wick significantly impacted the thermal performance of the evaporator, namely the evaporator heat-transfer coefficient and the heat leak to the CC. The 3D analysis provided highly accurate values for the heat leak; in some cases, the heat leaks of PTFE and stainless steel wicks showed little difference. In general, the heat flux is concentrated at the boundaries between the casing, the wick, and the grooves; therefore, thermal performance can be optimized by increasing the length of the boundary.
Introduction
Loop heat pipes (LHPs) and capillary pumped loops (CPLs) have attracted attention as advanced thermal control devices of spacecrafts, electronic devices, etc. The LHP and CPL require no external power to drive working fluids because of the capillary pressure generated in a porous structure in the evaporator, and they have high heat-transfer capabilities because heat transport is associated with phase changes between liquid and vapor (Figure 1(a)). Operating principles and characteristics are explained in detail elsewhere [1] [2].
The thermal performance of these devices is significantly affected by the design of the evaporator, and several studies of heat and mass transfer in capillary evaporators have been reported [3]- [13]. Fluid in the wick takes one of two principal forms: at low heat loads, the wick is completely filled with liquid, whereas at high heat loads, the wick contains fluid in a two-phase liquid-vapor state. Since it is complicated to propose an optimum geometry of the evaporator for the two widely different conditions, this paper focuses on the first condition. Numerical studies on heat transfer with saturated wicks have been performed [3]- [9]. In [5], a quasi-3D numerical model was developed on the basis of Darcy's law and energy conservation in the evaporator, including liquid in the compensation chamber (CC), the wick, the groove, and the casing; the model was used to obtain distributions of temperature, pressure, and velocity. In [6], a 3D simulation using the lattice Boltzmann method was presented, and details of flow and heat transfer in the porous structure were obtained.
A high-performance evaporator requires high-efficiency heat exchange between the casing of the heating surface and the vapor transferred to the condenser. In addition, heat leaks to the CC must be small because the saturated state in the CC controls the state of the LHP. Therefore, evaporator designs for thermal performance can be discussed in terms of the trade-off between the heat-transfer coefficient and heat leaks to the CC. This approach has been addressed in [14]. In our previous study, polytetrafluoroethylene (PTFE) wicks, with a bulk thermal conductivity of 0.25 W/m K, were applied in an attempt to decrease heat leaks to the CC [8] [15] [16]. Moreover, there is a requirement also for the evaporator to transfer the maximum heat load. To increase the heat-transfer amount reaching the capillary limit, the reduction of the pressure loss by the fluid flow should be considered. However, this point is not included in this paper.
In this study, a 3D numerical model of an LHP evaporator with the wick fully saturated with liquid is developed. The purpose was to find a guide for a design that attains geometric optimization in terms of the capillary evaporator's thermal performance. Focus has been provided on how the thermal performance of the evaporator is affected by the wick's thermal conductivity, the contact area between the casing and the wick, and the subcooling in the CC. Note that, in this study, situations wherein the wick contains fluid in two-phase vapor-liquid states are not considered; those situations have been addressed elsewhere [10]- [12]. In Section 2, the governing equations and the porous characteristics of the wick are described. In Section 3, our numerical procedure is discussed. Finally, in Section 4, computational results are presented. Comparison of PTFE with stainless steel as a representative material of metal wick is discussed specially.
Numerical Model
This section presents our numerical model for heat and mass transfer in the evaporator. Figure 1(b) shows the computational domain, which corresponds to the area surrounded by the red dashed line in Figure 1(a). Thewick contains grooves, as observed in Figure 1(b). The pore network model (PNM) in [9] [11] [12] was applied here to determine how liquid flow in the wick is affected by the pore radius distribution in the porous media. Figure 1(c) shows the PNM. The PNM divides the porous structure into two parts-pores and throats. In our case, the pores were spherical and the throats were cylindrical. The radii of the pores and throats together with the lengths of the throats characterize the porous structure of the wick, which includes pore radius, porosity, and permeability. Using this simple model, the PNM can include more pores and calculate over a larger porous media than the lattice Boltzmann method. Energy conservation in other components of the LHP was excluded, allowing us to easily determine heat-transfer only in the evaporator.
The main assumptions used in the model were as follows: 1) grooves were filled with saturated vapor at constant temperature and pressure; 2) the bulk solid and fluid in the wick were in local thermal equilibrium; 3) the fluid was incompressible; 4) fluid properties were temperature dependent; 5) the liquid-vapor interface had no thickness; 6) the process was at steady state; and 7) the effects of gravity and thermal radiation were negligible.
Governing Equation
In the PNM, the mass flow rate ṁ_ij in the throat between pores i and j is expressed as ṁ_ij = (g_ij/ν_l)(P_i − P_j), where g_ij is the flow resistance of the throat, ν_l is the kinematic viscosity of the liquid, and P is the pressure in a pore. Assuming Poiseuille flow, the flow resistance is given by g_ij = π r_th^4/(8 l_th), where r_th and l_th are the radius and length of the throat, respectively. By considering mass conservation at pore i, Σ_j ṁ_ij = 0, a system of linear equations is obtained. Energy conservation in the wick, including conduction and convection, can be written as ρ_l c_p u·∇T = ∇·(k_eff_l ∇T), and heat conduction in the evaporator casing is given by ∇·(k_case ∇T) = 0. In Equations (4) and (5), c_p is the isobaric specific heat of the liquid, T is temperature, k_case is the thermal conductivity of the casing, and k_eff_l is the effective thermal conductivity of the wick filled with liquid. The latter was obtained by volume averaging the thermal conductivities of the bulk solid, k_bulk, and the liquid, k_l, each appropriately weighted with the porosity ε: k_eff_l = ε k_l + (1 − ε) k_bulk. Liquid evaporates at the interface between the wick and the grooves, which is defined as the interface. The interface is shown in Figure 2. The mass flow rate generated by evaporation, ṁ_n, is expressed by ṁ_n = h_i A (T − T_gr)/H_fg, where n is the direction normal to the interface, h_i is the interfacial heat-transfer coefficient calculated by using the equation in [17], T_gr is the temperature of the grooves, A is the cross-sectional area, and H_fg is the latent heat of the working fluid. The temperature in the grooves was assumed to be constant at the saturation temperature. The saturation temperature was calculated from the pressure in the grooves and the P-T saturation curve of the working fluid. The pressure loss from the grooves to the CC, ΔP_lines, was calculated from the total mass flow rate in the wick, assuming Poiseuille flow in a circular pipe, as presented in [18]. The pressure in the grooves was obtained from P_gr = P_cc + ΔP_lines (9), where P_cc is the pressure in the CC. The temperature and pressure in the CC were not calculated in this study. The assumptions of constant temperature and saturation pressure were imposed to allow simple calculations of only the heat transfer in the evaporator. The other boundary conditions were those commonly used in previous studies: 1) uniform heat flux at the heating surface y = Ly (Neumann boundary condition); 2) thermal connection by a contact thermal resistance of 10,000 W/m² K, as used in [19], at the casing-wick contact surface y = Ly_w; 3) a convective boundary at the bottom of the wick y = 0 with a heat-transfer coefficient of 100 W/m² K, as used in a previous study [13] (Robin boundary condition); 4) constant pressure P_cc at the bottom of the wick y = 0 (Dirichlet condition); 5) impervious walls at z = 0 and y = Ly_w; 6) adiabatic conditions at z = 0 and at the casing-groove interface; and 7) periodic boundary conditions at x = 0 and x = Lx.
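To make the pore-network part of the model concrete, the sketch below assembles the throat flow resistances and solves the mass-conservation system for the pore pressures. It is a minimal illustration in Python, not the authors' code: the function names, the data layout, and the sign convention for the evaporation sink are assumptions.

```python
import numpy as np

def throat_conductance(r_th, l_th):
    # Poiseuille flow resistance of a cylindrical throat: g = pi * r^4 / (8 * l).
    return np.pi * r_th**4 / (8.0 * l_th)

def solve_pore_pressures(n_pores, throats, nu_l, fixed_p, evap_sink):
    """throats: iterable of (i, j, g_ij); fixed_p: {pore: pressure} for pores held
    at the CC pressure; evap_sink: {pore: m_dot} for interface pores losing liquid
    by evaporation. Mass conservation at every pore gives a linear system."""
    A = np.zeros((n_pores, n_pores))
    b = np.zeros(n_pores)
    for i, j, g in throats:
        for a, c in ((i, j), (j, i)):
            A[a, a] += g / nu_l
            A[a, c] -= g / nu_l
    for i, m_dot in evap_sink.items():
        b[i] -= m_dot            # liquid leaving the network at the interface
    for i, p in fixed_p.items():
        A[i, :] = 0.0
        A[i, i] = 1.0
        b[i] = p
    return np.linalg.solve(A, b)
```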
Representation of Porous Structure
In the PNM, characteristics of the porous structure can be established by setting dimensions for the voids (the radius of each pore r_pore, the radius of each throat r_th, and the length of each throat l_th; see Figure 1(c)), which control the porosity, permeability, and pore radius distribution, respectively. In this study, a polytetrafluoroethylene (PTFE) wick [16], for which the porosity and permeability are 0.34 and 2.0 × 10^-14 m², respectively, was modeled. The throat radius distribution was determined by using a log-normal probability density function to fit the pore radius distribution, which was measured by mercury porosimetry [20]. A mercury porosimeter can measure a pore radius distribution when the capillary pressure balances. In the PNM, the capillary pressure is expressed in terms of the throat's radius. Therefore, in the PNM, the measured pore radius distribution corresponds to the radius distribution of the throat. Figure 3 compares measured and calculated distributions of pore radii. In the figure, the y-axis contains the pore volume normalized by its maximum. The distribution of throat radii agrees well with the measured distribution of pore radii. To determine the permeability of the constructed wick, the permeability was calculated as introduced in [12]. The computed permeability agreed with the measured permeability within 2%. All the main porous characteristics were in good agreement with measured values. The representation of porous characteristics is presented in [9]. Once the porous structure was constructed, it was used in all calculations.
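A small sketch of how such a throat-radius distribution could be generated and compared against porosimetry data is given below; the log-normal parameters shown are placeholders, not the fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_throat_radii(n_throats, median_radius, sigma):
    # Log-normal throat radii; median_radius and sigma would be chosen so the
    # resulting histogram matches the mercury-porosimetry curve.
    return rng.lognormal(mean=np.log(median_radius), sigma=sigma, size=n_throats)

radii = sample_throat_radii(10_000, median_radius=1.0e-6, sigma=0.5)  # placeholder values
hist, edges = np.histogram(radii, bins=50)
normalized = hist / hist.max()   # sampled radius histogram, normalized by its maximum
```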
Numerical Procedure
The method used to solve the above equations is shown by the flow chart in Figure 4. First, standard finite volume methods were used to solve the combination of Equations (4) and (5) with no mass flow rate. From those results, the mass flow rate at the interface was calculated, as imposed by the boundary condition in Equation (3). After the pressure field was obtained from Equation (4), mass flow rates in the wick and the LHP, and the pressure drop from the grooves to the CC were calculated. Finally, the updated pressure drop and mass flow rate in the wick were compared with values from previous iterations to judge convergence of the calculation. If convergence was not yet attained, the calculation returned to Equations (4) and (5) with the updated mass flow rate. The above calculation loop was iterated until convergence was obtained.
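The outer iteration can be summarized by the following skeleton (Python; the three callables stand in for the discretized energy, evaporation, and pressure solvers and are placeholders rather than the authors' implementation).

```python
def iterate_evaporator(solve_energy, interface_mass_flow, solve_pressure,
                       tol=1e-6, max_iter=200):
    m_dot, dp = 0.0, 0.0
    for _ in range(max_iter):
        T = solve_energy(m_dot)               # energy equations with current mass flow
        m_new = interface_mass_flow(T)        # evaporation rate from the interface condition
        P, dp_new = solve_pressure(m_new)     # pore pressures and groove-to-CC pressure drop
        if abs(m_new - m_dot) < tol and abs(dp_new - dp) < tol:
            return T, P, m_new                # converged
        m_dot, dp = m_new, dp_new
    raise RuntimeError("iteration did not converge")
```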
An energy balance over the entire system can be written as Q_leak + Q_evap + Q_sens = Q_load (10), where Q_leak is the heat leak to the CC, Q_evap is the heat transferred by phase change, and Q_sens is the sensible heat from the inlet to the outlet of the wick. The right side is the total amount of heat applied to the evaporator. Each term on the left side is calculated from the computed temperature and mass flow fields. The energy balance (Equation (10)) was satisfied with a maximum relative error of less than 1%. Generally, a mesh size of 0.1 mm is used. However, when the thermal conductivity of the wick is lower than that of the casing, the temperature gradient in the wick near the boundary among the casing, the wick, and the grooves can be high. Therefore, to calculate with a high degree of accuracy, a finer mesh is used near these boundaries, as shown in Figure 2. The total number of nodes was 10,773,391.
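As a check of the kind described here, the relative closure error of the balance can be evaluated directly; this helper and its example numbers are illustrative only.

```python
def energy_balance_error(q_leak, q_evap, q_sens, q_load):
    # Relative error of Q_leak + Q_evap + Q_sens = Q_load; the paper reports
    # that this stays below about 1 % in all cases.
    return abs(q_leak + q_evap + q_sens - q_load) / q_load

# e.g. energy_balance_error(0.8, 9.0, 0.15, 10.0) -> 0.005 (0.5 %)
```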
Results and Discussion
Calculations were performed for the eight conditions shown in Table 1 to investigate how the thermal performance of the evaporator was affected by the thermal conductivity of the wicks, the contact area between the wick and the casing, and the subcooling of the liquid in the CC. The effective thermal conductivities of the PTFE and stainless steel wicks completely filled with liquid were 0.22 and 10.6 W/m K, respectively. The contact area between the casing and the wick depends on the width of the grooves Lx_g; in Table 1, the contact area is expressed as the percentage of the casing cross-sectional area, Lx × Lz. Subcooling in the CC was imposed through the liquid temperature in the CC, which was the boundary condition at y = 0. Other conditions included the following: the heat flux applied to the casing was 0.625 W/cm², the saturation temperature in the CC was 29.4 °C, and the thermal conductivity of the stainless steel casing was 16 W/m K. The working fluid was ethanol. Figure 5 shows 3D color renderings of the pressure and temperature distributions and 2D color renderings of the mass flow rate distributions on the x-y plane at z = Lz/2 for the PTFE (left) and stainless steel (right) wicks; these results are from calculations 2 and 5 in Table 1. Figure 5(a) shows little difference in the pressure distributions: the maximum pressure was in the grooves, and the minimum pressure was right under the evaporator case at the groove-wick interface. The maximum pressure difference was about 3.5 kPa, which was much less than the capillary pressure (18 kPa) for the largest pore radius. This confirms that it was reasonable to assume that the liquid-vapor interface was at the interface between the wick and the grooves.
In Figure 5(b), the maximum temperature was on top of the casing, and the minimum temperature was at the lowest part of the wick. The casing temperature with the PTFE wick was higher than that with the stainless steel wick. In the model, the saturation temperature in the CC was set directly to study the performance of only the evaporator. In fact, the CC temperature is affected by the heat leak from the evaporator and the subcooled liquid flowing from the liquid line. Therefore, the CC temperature affects the casing temperature, and evaluation of the temperature in the calculation differs somewhat from reality. Note that the color scales are different for the two panels in Figure 5(b). Note also that, although the boundary conditions on the front and rear surfaces differ, there is almost no variation of temperature or pressure in the z-direction in Figures 5(a)-(b).
In Figure 5(c), the two distributions of mass flow rate are very different. For the PTFE wick, the maximum mass flow rate is at the boundary between the casing, the wick, and the grooves, and the variations are large. In contrast, for the stainless steel wick, the variations are small. It seems that the mass flow rate in the PTFE wick is more sensitive to the temperature distribution. Figure 6 presents calculated results for the response of the evaporator heat-transfer coefficient to changes in contact areas when PTFE and stainless steel wicks were used; these results are from calculations 1 -6 in Table 1.
The evaporator heat-transfer coefficient h_evap was calculated from Q_evap, the contact area A_cont between the case and the wick, and the average temperature T_case on the upper surface of the casing, y = Ly. Here, the performance of the evaporator, without the effect of the heat leak to the CC, was evaluated using Q_evap as the amount of heat instead of using the heat load to the evaporator. Figure 6 shows that the evaporator heat-transfer coefficient with the stainless steel wick was approximately 10 times higher than that with the PTFE wick. This indicates that the thermal conductivity of the wick has a large impact on the heat-transfer coefficient of the evaporator. The figure also shows that, for both wicks, the evaporator heat-transfer coefficient increases as the contact area decreases. Figure 7 presents distributions of the heat flux in the y-direction on the contact surface at z = Lz/2 for the PTFE and stainless steel wicks; these are results from calculations 2 and 5 in Table 1. Heat flux in the negative y-direction is positive along the vertical axis. For both wicks, the maximum heat fluxes occurred at the groove-wick interface, at x = 1.0 and 3.0 mm. The maximum heat flux for the PTFE wick was 12 times larger than that for the stainless steel wick. At the groove-wick interface (x = 1.0 and 3.0 mm), high heat flow occurs in the x-direction due to evaporation with a high interfacial heat-transfer coefficient. As a result, the heat flux concentrates at the boundaries between the casing, the wick, and the grooves. Therefore, the evaporator heat-transfer coefficient is higher when the region of the boundary is larger, and the thermal performance of the evaporator can be optimized by changing the length of the boundary per unit area of contact surface. In this calculation, the lengths were 0.67, 1.0, and 2.0 mm for contact areas of 75%, 50%, and 25%, respectively. In [8] [21], increasing the number of grooves, i.e., increasing the length of the boundary, enhanced the thermal performance of the evaporator. The pressure loss in the grooves must also be included when considering the LHP performance. The pressure loss depends on the groove length and cross-sectional area, since it can be estimated from flow in a pipe. Thus, geometric optimization of the evaporator for LHP performance can be discussed in terms of the trade-off between the length of the boundary and the geometrical parameters affecting the pressure loss. The optimization theory based on the above will be developed and verified by LHP experiments in future work. Table 2 lists the calculated results at each calculation condition for the evaporator heat-transfer coefficient h_evap, the heat transferred by phase change Q_evap, and the heat leak to the CC Q_leak. The two amounts of heat were normalized using the heat load to the evaporator from the right-hand side of Equation (10). Although the effective thermal conductivities of the two wicks differed by a factor of 48, the heat leaks for the PTFE and stainless steel wicks were almost the same (see results for calculations 1 and 4 in Table 2). Based on an estimate of the heat leak assuming one-dimensional heat conduction in the wick [22]- [25], the heat leak of the PTFE wick, which has a lower thermal conductivity, should be smaller. However, the results from the two calculations do not correspond to this estimate. On the other hand, a comparison between calculation numbers 3 and 6 shows that such estimates can still be valid.
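The defining relation for h_evap is not legible in the extracted text; a common form, assumed here for illustration only, divides the evaporative heat by the contact area and the casing-to-groove temperature difference.

```python
def evaporator_htc(q_evap, a_cont, t_case_avg, t_groove):
    # Assumed definition: h_evap = Q_evap / (A_cont * (T_case - T_gr)).
    return q_evap / (a_cont * (t_case_avg - t_groove))

# Illustrative numbers only: evaporator_htc(10.0, 4e-4, 35.0, 29.4)
```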
In [25], the convection effect of liquid flow in the wick was small because the Peclet number was 0.4. The heat leak can generally be estimated with higher accuracy by 3D analysis; thus, estimates of the heat leak assuming one-dimensional heat conduction in the wick can be less accurate in some cases. The reason may be the temperature distribution in the x-direction, as shown in Figure 5(b), and the formation of a region of superheated liquid in the wick. To estimate the heat leak with high accuracy, it will be necessary to develop a model that considers the above or to solve for the temperature distribution in the wick. In addition, when the entire LHP system is evaluated, the heat leak to the CC through the metallic casing must also be considered.
Evaporator heat-transfer coefficients calculated with 5 K of subcooling (calculations 7 and 8 in Table 2) were smaller than those without subcooling (calculations 2 and 5). This is because the heat leak Q_leak was larger due to the lower temperature of the CC caused by the subcooling; thus, the heat transferred by phase change Q_evap was smaller. Figure 8 presents the distribution of mass flow rate at the wick-groove interface, as obtained from calculation 7. For this calculation, the length Lz was changed to 1.5 mm, which allows us to more easily view the entire evaporator. On surfaces lower than the grooves, the arrows representing mass flow rates in Figure 8 cannot be seen because they point into the wick. This implies that condensation occurred at the interface; this is a heat-pipe effect and has been recognized earlier in [26]. The total mass flow rate by condensation was 8.48% of the total mass flow rate in the evaporator. The effect on the PTFE wick was more noticeable than that on the stainless steel wick. It seems that the heat-pipe effect was larger for the PTFE wick because of the larger temperature variation in the PTFE wick.
Conclusion
The thermal performance of the evaporator in a loop heat pipe using 3D numerical simulations with a pore network model for situations wherein the wick was fully filled with liquid, has been investigated. In particular, how the evaporator's performance responded to changes in the thermal conductivity of the wick, the contact area between the casing and the wick, and the subcooling of the CC has been studied. Three-dimensional color renderings of pressure and temperature distributions were presented as well as 2D distributions of mass flow rates. Mass flow rates in the PTFE wick were more sensitive to the temperature distribution than those in the stainless steel wick. The evaporator heat-transfer coefficient for the stainless steel wick was approximately 10 times higher than that for the PTFE wick. On both wicks, heat fluxes concentrated at the boundary among the casing, the wick, and the grooves, and the result of the PTFE wick has larger distribution. The computational results showed that the length of the boundary per unit contact area is an important parameter in geometric optimization of the evaporator with respect to thermal performance. In our 3D analyses, the heat leak to the CC was estimated with high accuracy, and some cases showed small differences in the heat leaks between the PTFE and stainless steel wicks. In addition, the calculated results showed a heat-pipe effect in which evaporation and condensation occurred at the groove-wick interface. This effect was larger for the PTFE wick, which had a larger temperature distribution. | 2018-12-13T09:01:02.257Z | 2014-12-03T00:00:00.000 | {
"year": 2014,
"sha1": "fd5f4f06b8a6ed4067f9a3aff8baaaa729c042b7",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=52090",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "271d96df620856c6233f22a658119a33dd2a443f",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
31217054 | pes2o/s2orc | v3-fos-license | Double resonance in the infinite-range quantum Ising model
We study quantum resonance behavior of the infinite-range kinetic Ising model at zero temperature. Numerical integration of the time-dependent Schr\"odinger equation in the presence of an external magnetic field in the $z$ direction is performed at various transverse field strengths $g$. It is revealed that two resonance peaks occur when the energy gap matches the external driving frequency at two distinct values of $g$, one below and the other above the quantum phase transition. From the similar observations already made in classical systems with phase transitions, we propose that the double resonance peaks should be a generic feature of continuous transitions, for both quantum and classical many-body systems.
I. INTRODUCTION
Noise is often considered a nuisance that prevents a system from displaying any ordered behavior, and thus the weaker the noise, the better for the performance of the system. However, over the last decades, many researchers have revealed that this is not always the case and that there exists a class of systems in which an intermediate strength of noise can help the system show the best coherence with an external periodic driving. This surprising phenomenon was termed stochastic resonance (SR) due to its stochastic nature [1]. The phenomenon of SR has been found in the fields of physics and earth science, as well as in biology: the periodically recurrent ice ages, the bistable ring laser, the superconducting quantum interference device, human vision and the auditory system, and the feeding mechanism of the paddle fish, to list a few [1,2]. The occurrence of SR is properly explained by the time-scale matching condition: the coherence between the system's response and the external driving becomes strongest when the stochastic time scale inherent in the system matches the time scale provided by the external driving. In a simple classical system of a few degrees of freedom in contact with a thermal reservoir, the intrinsic time scale is given by a monotonically decreasing function of the exponential thermal activation form. Accordingly, the above-mentioned time-scale matching condition can only be satisfied at a single temperature [1]. The time-scale matching condition was later extended to classical statistical mechanical systems with continuous phase transitions such as the globally coupled, i.e., infinite-range, kinetic Ising model [3]. It has been shown that the nonmonotonic behavior of the intrinsic time scale around the critical temperature makes the time-scale matching condition satisfied at two distinct temperatures, one below and the other above the critical temperature, resulting in double resonance peaks. The double SR peaks have also been observed in the classical Heisenberg spin system in a planar thin-film geometry [4] and in the infinite-range q-state clock model [5]. (* Corresponding author: beomjun@skku.edu)
The SR phenomenon in quantum systems, named as quantum stochastic resonance (QSR), has been studied with focus on the interplay between quantum and clas-sical fluctuation at finite temperatures [1,6]. The QSR at zero temperature has also been studied for the onedimensional quantum spin system with a spatially modulated external field, and the length-scale matching similar to the time-scale matching in conventional SR has been discussed [7]. In the present work, we study the QSR at zero temperature in the Ising spin system with the quantum phase transition [8]. We summarize our main findings in Fig. 1, which displays the double SR peaks and the time-scale matching conditions in infinite-range classical [3] and quantum (this work) Ising systems in the presence of a weak external driving with the frequency Ω. We conclude that the time-scale matching condition allows us to understand the classical and the quantum double SR peaks on the same ground.
In this paper, we numerically study the resonance behavior of the infinite-range quantum Ising model. Integrations of the time-dependent Schrödinger equation and the semiclassical equation of motion unanimously yield the existence of the double SR peaks, which are clearly explained from the matching condition between the energy gap, which is intrinsic, and the frequency of the external time periodic driving.
Let us begin with the globally coupled N spins described by the Hamiltonian
where S^α_j is the spin-1/2 operator in the α direction (α = x, y, z) at the jth site (S ≡ 1/2 and ℏ ≡ 1 henceforth), and the transverse field g in the x direction induces quantum fluctuation due to [S^z_j, S^x_k] = iδ_jk S^y_j ≠ 0. By using the total spin operator J^α ≡ Σ_j S^α_j with J = N/2, the Hamiltonian can be cast into a form [9] which allows us to handle much bigger N, since the number of base kets becomes only N + 1 (we use J^z eigenkets as base kets). The globally coupled quantum Ising model, Eq. (1), is very well known to exhibit a quantum phase transition of the mean-field nature, and its finite-size scaling has also been extensively studied [9] (see [10] for the finite-size scaling of the quantum phase transition in the one-dimensional Ising chain system).
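A minimal numerical sketch of this reduction is given below. The Hamiltonian written in the code, H = −J_z²/N − g J_x, is an assumption: the explicit form is not legible in the extracted text, and this normalization is chosen only because it reproduces the stated critical point g_c = 1 and the semiclassical equations of motion quoted later.

```python
import numpy as np

def lmg_gap(N, g):
    # Build H = -J_z^2/N - g*J_x in the (N+1)-dimensional J_z eigenbasis
    # (assumed normalization, see text) and return the gap of the two lowest levels.
    J = N / 2.0
    M = np.arange(-J, J + 1.0)                                 # J_z eigenvalues
    H = np.diag(-M**2 / N)
    cp = 0.5 * np.sqrt(J * (J + 1) - M[:-1] * (M[:-1] + 1))    # <M+1|J_x|M>
    H -= g * (np.diag(cp, 1) + np.diag(cp, -1))
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

# Example: gaps = [lmg_gap(200, g) for g in (0.5, 1.0, 1.5)]
```

Below g_c the two lowest levels of the finite system become quasi-degenerate, so the gap reported in the paper may be defined within a fixed parity sector or with the small symmetry-breaking field mentioned below; that detail is not settled by the extracted text.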
We numerically obtain the energy gap ∆ between the ground and the first-excited states of the Hamiltonian Eq. (1), which exhibits the quantum phase transition at g = g_c = 1 of the mean-field universality class, as displayed in Fig. 2 for the system sizes N = 200, 600, and 1000. The inset of Fig. 2 shows the finite-size scaling of ∆ with the well-known exponents: the dynamic critical exponent z = 1, the correlation length exponent ν = 1/2, and the upper critical dimension d_c = 3 [9]. The vanishing energy gap (and thus the divergence of the intrinsic time scale) at the quantum critical point is particularly important in the present study: the nonmonotonicity of ∆ as a function of the fluctuation strength g provides the origin of the double quantum resonance peaks (see Fig. 1).
In parallel to studies of the classical SR behaviors [1,3], we next apply the weak time-periodic external magnetic field h(t) = h_0 cos Ωt along the z-direction, with h_0 = 10^-3 and Ω = 0.8, to get the time-dependent Schrödinger equation, where the quantum ket |Ψ(t)⟩ = Σ_{M=-J}^{J} A_M(t)|M⟩ with J_z|M⟩ = M|M⟩ and the complex coefficients A_M(t). The time evolution of the system is numerically traced through the use of the fifth-order Runge-Kutta method combined with the Richardson extrapolation and Bulirsch-Stoer method [11]. We check that the use of a sufficiently small time step δt = 10^-4 keeps the normalization condition Σ_M |A_M(t)|² = 1 unchanged within numerical accuracy. We first get the ground state in the presence of an extremely small external field in the positive z-direction to break the up-down spin symmetry, and use it as the initial condition for Eq. (2).
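A compact sketch of such a propagation in the J_z basis is shown below, using SciPy's generic integrator rather than the Runge-Kutta/Bulirsch-Stoer scheme described in the text; the Hamiltonian normalization and the size of the symmetry-breaking bias are the same assumptions as in the previous code block.

```python
import numpy as np
from scipy.integrate import solve_ivp

def magnetization_trace(N, g, h0=1e-3, Omega=0.8, t_max=200.0):
    J = N / 2.0
    M = np.arange(-J, J + 1.0)
    cp = 0.5 * np.sqrt(J * (J + 1) - M[:-1] * (M[:-1] + 1))
    Jx = np.diag(cp, 1) + np.diag(cp, -1)
    H0 = np.diag(-M**2 / N) - g * Jx - 1e-8 * np.diag(M)   # tiny +z bias for the initial state
    A0 = np.linalg.eigh(H0)[1][:, 0].astype(complex)        # ground-state amplitudes A_M(0)

    def rhs(t, A):   # i dA/dt = H(t) A with H(t) = -J_z^2/N - g*J_x - h(t)*J_z
        H = np.diag(-M**2 / N - h0 * np.cos(Omega * t) * M) - g * Jx
        return -1j * (H @ A)

    sol = solve_ivp(rhs, (0.0, t_max), A0, max_step=0.05)
    m = np.real(np.einsum('it,i,it->t', sol.y.conj(), M, sol.y)) / J   # m(t) = <J_z>/J
    return sol.t, m
```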
As the most important quantity to detect SR behavior, the average magnetization in the z direction, m(t) ≡ (1/J)⟨Ψ(t)|J_z|Ψ(t)⟩, is measured as a function of time. We do not observe significant differences for other system sizes, and we display our results for m(t) for N = 600 in Fig. 3 at (a) below and (b) above the quantum critical point g_c = 1. When g > g_c, m(t) oscillates around m = 0, and we shift each m(t) vertically in Fig. 3(b) for better comparison. It is obvious from Fig. 3 that the resonance behavior of m(t) is seen in the form of a larger oscillation amplitude at two distinct strengths of quantum fluctuation, i.e., one below g_c and the other above g_c. In Fig. 3(c), we display the oscillation amplitude ∆m ≡ max_t m(t) − min_t m(t), which clearly shows double resonance peaks. We denote the first and the second resonance points as g_1 ≈ 0.59 and g_2 ≈ 1.44, where the oscillation amplitudes become maxima. As another indicator of the SR behavior, we carry out the Fourier transformation of m(t) to obtain m(ω) at frequency ω. Figure 4 displays the magnitude of the spectral components |m(ω)| versus ω at various values of g. In general, there exist two peaks in |m(ω)|, one at ω_1 = Ω = 0.8 [indicated by the dotted vertical lines in Fig. 4], and the other at a position ω_2 that depends on g. We observe that the latter peak at ω_2 simply originates from the energy gap (see Fig. 2), i.e., ω_2 = ∆ (note that ℏ = 1 in this work). As g is increased toward g_c from below, ∆ decreases (see Fig. 2), which in turn yields a decreasing ω_2, as shown in Fig. 4(a)-(c). As g passes through g_c from below, ω_2 bounces back and moves to a larger value, reflecting the increase of ∆ for g > g_c in Fig. 2. Another important observation one can make from Fig. 4 is that when the two peaks at ω_1 and ω_2 merge into a single one at ω = ω_1 = Ω, the power spectrum (P_ω = |m(ω)|²) at Ω suddenly increases strongly. From this, it is clear that the merging of the two peaks must occur at two distinct values of g, which are in good agreement with g_1 ≈ 0.59 and g_2 ≈ 1.44 in Fig. 3. We next adopt the Heisenberg picture, in which the spin operator satisfies the equation of motion dJ^α/dt = −i[H, J^α]. By using the commutation relation [J^α, J^β] = iε_αβγ J^γ, we then make the semiclassical approximation and treat the spin operator J^α as the α-th component of the classical spin vector J ≡ J(sin θ cos φ, sin θ sin φ, cos θ), which results in [9,12] dθ/dt = g sin φ, dφ/dt = g cot θ cos φ − cos θ − h(t). We take the initial values of θ and φ from the ground state. In this approximation, the order parameter is simply computed from m(t) = J_z/J = cos θ(t), which is then used for the Fourier analysis. We again find the SR behaviors at two distinct values of g: g_1 ≃ 0.59 and g_2 ≃ 1.44, as displayed in Fig. 5, which are in perfect agreement with the findings in Fig. 4.
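These semiclassical equations are easy to integrate directly; the sketch below does so with SciPy and extracts the amplitude spectrum of m(t). The initial angles are placeholders standing in for the ground-state values, and the run length and sampling are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def semiclassical_spectrum(g, h0=1e-3, Omega=0.8, theta0=1.0, phi0=0.0, t_max=500.0):
    # d(theta)/dt = g*sin(phi);  d(phi)/dt = g*cot(theta)*cos(phi) - cos(theta) - h(t)
    def rhs(t, y):
        th, ph = y
        return [g * np.sin(ph),
                g * np.cos(ph) / np.tan(th) - np.cos(th) - h0 * np.cos(Omega * t)]
    t = np.linspace(0.0, t_max, 4096)
    sol = solve_ivp(rhs, (0.0, t_max), [theta0, phi0], t_eval=t, rtol=1e-8, atol=1e-10)
    m = np.cos(sol.y[0])                                   # m(t) = J_z / J = cos(theta)
    omega = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])   # angular frequencies
    spectrum = np.abs(np.fft.rfft(m - m.mean()))
    return t, m, omega, spectrum
```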
III. SUMMARY
In summary, the infinite-range quantum transversefield Ising model at zero temperature has been numerically investigated in the presence of the weak longitudinal time-periodic magnetic field at the frequency Ω. The resonance behavior at two distinct values of the transverse field g has been clearly observed via (i) the large amplitude oscillation of the magnetization in time and (ii) the large peak at Ω in spectral analysis. The origin of the double SR peaks in the system has been identified from the vanishing of the energy gap around the quantum critical point. When the energy gap matches the frequency of the external field, the strong resonance peaks occur at two different values of g, exhibiting the double resonance behavior. We have also confirmed the double resonance in the thermodynamic limit through the use of the semiclassical approximation made for the Heisenberg equation of motion. We propose that the time-scale matching condition should play an important role in understanding the double SR behavior in a broad range of systems with continuous phase transitions, both classical and quantum. | 2012-09-11T11:59:41.000Z | 2012-08-17T00:00:00.000 | {
"year": 2012,
"sha1": "e841f283726610c49bb6c65aeead4caa41ffa233",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1209.2294",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e841f283726610c49bb6c65aeead4caa41ffa233",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
26730576 | pes2o/s2orc | v3-fos-license | Neptunism and Transformism: Robert Jameson and other Evolutionary Theorists in Early Nineteenth-Century Scotland
This paper sheds new light on the prevalence of evolutionary ideas in Scotland in the early nineteenth century and explores the connections between the espousal of evolutionary theories and adherence to the directional history of the earth proposed by Abraham Gottlob Werner and his Scottish disciples. A possible connection between Wernerian geology and theories of the transmutation of species in Edinburgh in the period when Charles Darwin was a medical student in the city was suggested in an important 1991 paper by James Secord. This study aims to deepen our knowledge of this important episode in the history of evolutionary ideas and explore the relationship between these geological and evolutionary discourses. To do this it focuses on the circle of natural historians around Robert Jameson, Wernerian geologist and professor of natural history at the University of Edinburgh from 1804 to 1854. From the evidence gathered here there emerges a clear confirmation that the Wernerian model of geohistory facilitated the acceptance of evolutionary explanations of the history of life in early nineteenth-century Scotland. As Edinburgh was at this time the most important center of medical education in the English-speaking world, this almost certainly influenced the reception and development of evolutionary ideas in the decades that followed.
Introduction
ideas by many Edinburgh-educated thinkers in the decades that followed (see, for example, Desmond, 1989; Rehbock, 1983). In this paper I will argue that the Wernerian geology taught by Robert Jameson (1774-1854), the University of Edinburgh's professor of natural history from 1804 to 1854, may also have played a significant role in suggesting evolutionary explanations for the history and diversity of life on earth to his students. Some of the most well-known of Jameson's students and associates who came to accept a transformist interpretation of the history of life were Robert Grant (1793-1874), Robert Knox (1791-1862), Ami Boué (1794-1881), Hewett Cottrell Watson (1804-1881) and, most famously, Charles Darwin (1809-1882). In this paper I will suggest how Jameson's teaching and the influence of the natural history circle around him may have nudged these individuals towards transformist solutions to one of the great questions of nineteenth-century biology. By most accounts Jameson was an energetic and diligent professor. According to the report of the Scottish Universities Commission of 1826 he lectured to his students five days a week for the five months of his course and also made ''it a practice to converse with them an hour before the Lecture, and very frequently after the Lecture.'' (Scottish Universities Commission (1826), 1830). In addition, the report of the Commission notes that he took them on regular field excursions. As a result, Jameson's lectures were popular and well attended. As Robert Christison (1797-1882), who was a student of Jameson in 1816, later testified, his: lectures were numerously attended in spite of a dry manner, and although attendance on Natural History was not enforced for any University honour or for any profession. The popularity of his subject, his earnestness as a lecturer, his enthusiasm as an investigator, and the great museum he had collected for illustrating his teaching, were together the causes of his success. (Quoted in Ashworth, 1935, p. 100).
Jameson therefore had ample opportunity to promote his views both through formal lectures and in more informal settings. Among his students were many of the key figures who were to shape debates on the transmutation of species in the decades leading up to the publication of the Origin of Species and beyond. As Edward Forbes (1815-1854) was to say in his inaugural address as Jameson's successor in the chair of natural history at Edinburgh, ''The value of a professorial worth should chiefly be estimated by the number of his disciples. A large share of the best naturalists of the day received their first instruction from Professor Jameson.'' (Forbes, 1854, p. 4). It would therefore seem highly likely that many of the leading figures in natural history in the nineteenth century would have been influenced by the progressivist and transformist ideas discussed in the Edinburgh natural history circles around Jameson. In an important paper on the ''Edinburgh Lamarckians'' published in 1991 James Secord questioned earlier attributions of an anonymous transformist article entitled ''Observations on the Nature and Importance of Geology'' which was published in the Edinburgh New Philosophical Journal in 1826 (Anon, 1826). Earlier accounts of the article had assumed that Robert Grant was the author of the piece (see, for example, Eiseley, 1958; Desmond, 1989). 1 As the article praised the transformist theories of Jean-Baptiste Lamarck (1744-1829), whom Grant is known to have admired, it is easy to see why his authorship seemed likely. 2 The fact that in the mid-1820s Grant was also a member of the natural history circle around Robert Jameson, who was the editor of the journal as well as Edinburgh's professor of natural history, also added plausibility to the argument. Secord suggested instead that Jameson was a much more convincing candidate, both in regard to the style and the content of the article. If this attribution is correct, it would seem that Jameson was both a neptunist geologist and a transformist, a combination that might appear unlikely in the light of some conventional interpretations of the history of science. As Secord remarked, that ''Jameson could be simultaneously a neptunist, a gradualist, and a transmutationist shows how completely our current picture of the acceptance of evolution needs to be overhauled. It is not only in questions of attribution that we have taken too much for granted'' (Secord, 1991).
As Rachel Laudan has demonstrated, the neptunian theory of the earth of Abraham Gottlob Werner (1749-1817) ''dominated geology until the late 1820s'' (Laudan, 1993, p. 87). That Jameson was the leading British advocate of Werner's theory had been well known to historians of geology; however, that he may also have been a transformist was perhaps a more surprising claim. The apparent incongruity between neptunism and transformism stemmed from prevalent models of the histories of geology and evolutionary thought that Secord threw into question in his paper. These interpreted transmutationism as an essentially progressive phenomenon, pointing forwards towards the triumph of evolutionism in the second half of the nineteenth century, while neptunism was perceived as a geological creed which had had its day by the mid-1820s, when Jameson had become one of its last defenders in Britain. 1. Desmond later accepted Secord's attribution of the paper to Jameson in Desmond and Parker, 2006, p. 206. 2. I will be using the terms 'transmutation of species' or 'transformism' henceforth in this paper to distinguish these older theories from Darwinian evolution, from which they vary significantly. The term 'evolution' was current in the 1820s, but was generally used with reference to foetal development, and was often associated with preformationist theories of generation.
The contention that there may have been a link between theories of neptunism and transformism in the early nineteenth century receives striking confirmation in the writings of one of Jameson's contemporaries in Edinburgh, the geologist and minister of the Free Church of Scotland John Fleming (1785-1857). The close relationship between the two doctrines was quite clear in Fleming's mind when he wrote his inaugural lecture as professor of natural history at the newly founded New College in Edinburgh in 1850. In this lecture, Fleming unleashed a tirade against the theory of the earth espoused by Werner and his Edinburgh disciples in the following words: Subsequent to the rise of this Scottish geology of Hutton, the German geology of Werner was introduced, and for a while appeared to triumph. This system, equally indifferent to the truths of palaeontology, and outraging all philosophy by the extravagance of its assumptions, paved the way for those reveries of progressive development with which of late years we have been inundated. (Fleming, 1851, p. 216) The catalyst for this outburst seems to have been the publication a few years before of Jameson's ''laudatory'' reviews of Robert Chambers' anonymously published transformist magnum opus, the Vestiges of the Natural History of Creation (1844) and its sequel, Explanations: A Sequel to ''Vestiges of the Natural History of Creation'' (1845), in the ''new publications section'' of the Edinburgh New Philosophical Journal. Jameson had this to say of Vestiges: ''Although we do not agree with the ingenious author of this interesting volume in several of his speculations, yet we can safely recommend it to the attention of our readers'' ( [Jameson], 1845, p. 186). The following year he reviewed Explanations, noting that ''These explanations sufficiently prove that the author has met with great effect the arguments of its distinguished opponents.'' ( [Jameson], 1846, p. 400). While perhaps not meriting the term ''laudatory'', Jameson's reviews at the very least indicate that he maintained an open mind towards the ''development hypothesis'' that Chambers expounded in Vestiges and which Fleming referred to dismissively as ''reveries of progressive development''. That Jameson considered Explanations to have effectively answered the arguments of the critics of Vestiges must have been particularly galling to Fleming, as these critics included Fleming himself, as well as his friend and fellow evangelical David Brewster (1781-1868), who had written a blistering review of Vestiges for the North British Review ( [Brewster], 1845). Fleming roundly condemned Chambers' book in the strongest possible terms later in his inaugural lecture and it seems that in his mind there was no doubt regarding the links between the scandalous theory of universal progress outlined in that work and the developmental vision of the history of the earth advanced by the Wernerians earlier in the century. In this paper I intend to follow up this intriguing suggestion and examine to what extent directional theories of the earth of the kind developed by Werner may have helped to pave the way for the acceptance of a developmental model of the history of life in the early decades of the nineteenth century.
Robert Jameson and Wernerian Geology in Edinburgh
Jameson, the professor of natural history at the University of Edinburgh whom Fleming castigated for both his positive review of Vestiges and espousal of neptunist geology in his inaugural lecture, probably first became acquainted with Werner's theories through Charles Anderson, the translator of Werner's Theory of the Formation of Veins, whom he got to know while he was working as an assistant to the surgeon John Cheyne as a young man in Leith (Jameson, 1854, p. 6). In 1792 and 1793 Jameson attended the lectures of the University of Edinburgh's professor of natural history, John Walker (1731-1803), whose friendship and patronage were later to shape Jameson's career. Jameson made a trip to Ireland in 1793 where his interest in Werner's theory of the earth and rejection of the rival theory proposed by James Hutton (1726-1797) were encouraged by the Irish geologist Richard Kirwan (1733-1812), who pointed out to him ''several strong fails [sic] against the Huttonian theory'' (quoted in Sweet, 1967, p. 110). By 1796 Jameson had fully embraced Werner's neptunian theory of the earth, as can be seen in two papers which he read to the Royal Medical Society of Edinburgh in that year in which he expressed an uncompromisingly neptunist view of the history of the earth (Sweet and Waterson, 1967). In 1800 Jameson travelled to Freiberg, where Werner taught at the Mining Academy, to study with the master himself.
On his return to Scotland, and following the death of his friend and mentor, John Walker, Jameson was appointed professor of natural history at the University of Edinburgh in 1804. This put him in a strong position to promote his views to generations of students over the five decades for which he was to hold the chair. In 1808 he founded the Wernerian Natural History Society, named in honor of his master. From 1819 he edited the Edinburgh Philosophical Journal, first with the natural philosopher and scientific journalist David Brewster, and then alone after 1824. From 1826 he was the sole editor of the successor to the Edinburgh Philosophical Journal, the Edinburgh New Philosophical Journal. This was to provide an important forum for the dissemination of progressivist theories of geology and transformist theories of the history of life. Before we are in a position to examine the relationship between theories of the earth and of the history of life, we will need to cast a quick glance at the fundamental differences between the Wernerian vision of the history of the earth and the rival Huttonian system.
The situation in geology at the beginning of the nineteenth century has been accurately summarized by Martin Rudwick, who has noted that the theories prevalent at the time could be classified into ''those that postulated an earth in steady state or cyclic equilibrium and those who saw the earth's temporal development in directional terms'' (Rudwick, 2005, p. 173). The theory of Werner belongs firmly in the latter camp, while Hutton's is an extreme example of the former. While Werner's theory interpreted the geological record as showing a clear pattern of progressive change over time, Hutton's theory was radically ahistorical, centered on a uniformitarian model of the history of the earth. For Hutton the earth's history was an endlessly repeated cycle of uplift and erosion with, as he famously put it, ''no vestige of a beginning, -no prospect of an end'' (Hutton, 1788, p. 304). But for religious considerations, there would have been no reason to assume that this unchanging natural order was not eternal. Despite Hutton's importance for the later development of geology, directional models were dominant in the early nineteenth century to the extent that Rudwick refers to theories of the type espoused by Werner as the ''standard model'' for the period. So what was the nature of the Wernerian directional model of earth history? To illustrate his theory I will be drawing largely on the writings of Jameson, his most important British disciple and one of the principal subjects of this paper.
Neptunist geotheory was based on the premise that sea levels had been falling continuously throughout geological history. The nature of the Biblical Deluge, much debated by geologists in this period, did not, however, present particular problems for Werner and his followers. Apparently anxious to take into account historical evidence for the Deluge, Werner argued that the retreat of the ocean was not necessarily an absolutely continuous process, but that the geological record provided evidence that a temporary resurgence of the waters had taken place (Laudan, 1993, p. 90). In the earliest times he believed that there had existed a universal ocean, very different in chemical composition from the ones that exist today. The spherical form of the earth was taken as evidence of its original fluidity (Jameson, 1808, p. 73). From this primordial ocean the oldest rocks had been deposited by chemical precipitation. The rocks of the earliest, ''primitive'', period in the earth's history were crystalline in character, as might be expected from their process of formation. 3 Obviously crystalline rocks such as granite and gneiss would fall into this category. During the ''transition'' and ''floetz'' periods the waters receded and the first land appeared. While chemically precipitated rocks, such as limestone, continued to be formed, erosion of the land masses also contributed mechanically deposited strata, such as sandstone. Gradually the balance shifted, and the recent ''alluvial'' strata were almost entirely deposited mechanically rather than chemically, the most recent ones being largely unconsolidated. As might be expected from their order of deposition from a receding universal ocean, the most ancient, crystalline rocks were to be found in high mountain ranges, while the youngest, alluvial, rocks were found in low lying areas of the globe. Several different explanations for the recession of the ocean were put forward by neptunist thinkers; the one favored by Jameson was that the excess water had been lost to space over the millennia (Jameson, 1808, p. 77).
As can be seen from the surviving notes from his lectures, this was essentially the model of earth history that Jameson taught his students until at least the mid-1830s. 4 Generally, it is the recession of the oceans and gradual but profound change in their chemical composition that Jameson saw as driving the directional change that he observed in the geological record. However, there are some indications that he also considered the possibility that global temperatures had declined over time, a doctrine often associated with George-Louis Leclerc, Comte de Buffon (1707-1788), who saw the cooling of the earth from an original molten state as the primary motor for change. The section of the natural history syllabus that covered botany for the 1826 session included ''Deductions illustrative of Gradual Change in the Heat of the Earth, and of Alteration in Climate, as disclosed by the facts in the Physical and Geographical Distribution of Fossil and Living Plants'' (Jameson, 1826, p. 11). In his lectures Jameson gave a little more detail regarding the direction and effects of climatic change; in a set of lecture notes from 1830 he suggested that in the geological past ''the climate was very different from what it is at present and that at the time Britain was calculated to produce plants and animals requiring a much more considerable temperature then the Island possesses at present'' (Jameson, 1830, f.5). In a fragmentary note found among Jameson's paper we also find the following note in Jameson's hand that suggests he saw a diminution in temperature over geological time as likely and compatible with a broadly neptunian picture of earth history: 7 [owing?] to the diminution of temperature the expanded state of water becoming less the quantity of the atmospheric vapor & height of water diminishes 8 the Transition rocks the next down from them conglomerate, charcoal & their organic remains intimate the existence of mechanical action & of such a temp as to allow of the growth of organic bodies all of which appear to have been marine (Jameson, n.d. (a)).
The evidence quoted above also hints at the consequence of directional change in the conditions on the earth's surface for the history of life, and it is to this subject that I turn in the next section of this paper.
Progressive Visions of the History of Life in Edinburgh and Beyond
By the early decades of the nineteenth century it could no longer be ignored that the fossil record appeared to follow a strong trend towards greater complexity in the remains of living creatures found in the rocks, with the major groups of living things making their appearance in a clearly defined order. The progressive nature of the fossil record was becoming generally accepted among geologists in this period and, as Martin Rudwick has chronicled, by the early decades of the nineteenth century there had developed a general consensus among geologists that the history of life was progressive (Rudwick, 2008, p. 49). In a set of notes from Jameson's lectures, which are undated but not earlier than 1826, as they contain a reference to a paper published that year, we find the following outline of the fossil record: In the oldest strata [of the Transition] we find the lowest species of vegetables & animals, as marine plants and zoophytes, which were therefore first called into existence. … Floetz rocks are less crystallized than transition rock, but contain a greater variety of organic remains. Indeed there appears to be a regular & consistent distribution of organic beings through the rocks of this class from the very low species of the earliest strata to the more perfect animals of the newest strata, immediately adjoining the alluvial formation. (Jameson, n.d. (b), f.255).
For Jameson, the directional history of life revealed by the fossil record was directly related to the directional changes in the physical environment of the earth's surface. In his Elements of Geognosy (1808) Jameson wrote that As the water diminished, it appears to have become gradually more fitted for the support of animals and vegetables, as we find them increasing in number, variety and perfection, and approaching more to the nature of those in the present seas, the lower the level of the outgoings of the strata, or, what is the same thing, the lower the level of the water. The same gradual increase of organic beings appears to have taken place on the dry land. (Jameson, 1808, p. 82). 5 In the first decade of the century a student of Jameson named J. Ogilvy wrote a dissertation ''On the Huttonian and Neptunian Theories of the Earth'' for the Royal Medical Society of Edinburgh, which appears in the Society's dissertation book for 1806-1807. This essay demonstrates that Jameson had already made some converts among the student body to his neptunian geology and its connection with a progressive vision of the history of life with his ''masterly statement of the Wernerian theory'' in his first few years as professor (Ogilvy, 1806-1807). In his dissertation Ogilvy remarked that Another general observation of the same philosopher [Werner] beautifully confirming his opinion, -is the constantly increasing frequency of the relicts of the animal and vegetable kingdoms, as we descend from the Transition to the rocks of most recent formation; and, at the same time, he sagaciously remarked, that, in making this descent, these vestiges point out individuals of these kingdoms with which be become the more familiar as we approach the most modern formations. (Ogilvy, 1806-1807). We have already seen that in later decades John Fleming became a harsh critic of both Werner and Jameson, as well as a resolute opponent of transformism. However, Fleming's views had changed radically between the 1820s and 1850. His earlier writings show that in the early decades of the century Fleming had held geological views not very different from those of Jameson and he had been a founding member of the Wernerian Society in 1808. In his great work on The Philosophy of Zoology (1822) he wrote that From the period, therefore, at which petrifactions appear in the oldest rocks, to the newest formed strata, the remains of the more perfect animals increase in number and variety; and it is equally certain, that the newest formed petrifactions bear a nearer resemblance to the existing races, than those which occur in the ancient strata. (Fleming, 1822, vol. 2, p. 97).
Fleming believed, as did Jameson, that these changes were caused by physical changes in the environment of the surface of the globe. Like Jameson, he believed that the area of dry land had increased over time, although he considered that this had been achieved by the filling in of lakes and seas by the products of erosion rather than by a net loss of water. He also believed that one result of these changes was the gradual modification of the climate, making the difference in temperature between summer and winter more extreme. Although he may have differed from Jameson about the details of the mechanism of progressive change, he expressed its consequences in a similar manner: A variety of changes have taken place in succession, giving to the earth its present character, and fitting it for the residence of its present inhabitants. And if the same system of change continues to operate, (and it must do while gravitation prevails,) the earth may become an unfit dwelling for the present tribes, and revolutions may take place, as extensive as those which living beings have already experienced. (Fleming, 1822, vol. 2, p. 104).
The Edinburgh New Philosophical Journal, edited by Jameson, provided a forum for like-minded geologists, not just from Edinburgh or Great Britain, but from across Europe and beyond, to exchange their findings and opinions. It included a significant number of papers that dealt with the history of the earth. The following three examples will give a flavor of the kind of articles dealing with the progressive history of life on earth that Jameson published. The first appeared in 1826 and was entitled ''Geological Observations''. Its author was Ami Boué, an Austrian Wernerian geologist, former student of Jameson in Edinburgh and member of the Wernerian Natural History Society. After noting that ''the farther we penetrate into the crust of the earth, the more simplicity do we observe in the vegetable and animal productions'', he speculated that this was due to a greater equality of temperature across the globe before concluding that as ''the zones and climates gradually became established, the vegetables and animals became diversified.'' (Boué, 1826, p. 90). In the following year Jens Esmark (1762-1839), the Danish-Norwegian Wernerian mineralogist, published a piece in which he speculated that the earth might have been devoid of life for several thousand years after the creation, and that ''organisation did not begin till this long period was completed, which the earth required to the full development of its own constitution; that, after it began, it proceeded by successive steps from the less to the more perfect formations, ending with man as the head of the whole.'' (Esmark, 1826, pp. 120-121). We are presented by Esmark with a model in which the appearance of life was made possible by changes in the physical environment, changes which then continued to act, gradually promoting an increase in the complexity of living things. Jameson was clearly favorably impressed by Esmark's article, as he discussed this ''very ingenious paper'' approvingly and at some length in one of his lectures, a set of notes for which survives in Edinburgh University Library (Jameson, n.d. (b), ff.236-238). A similar paper was published anonymously in 1830. Here again we find the same gradual, progressive history of life as we have seen in the preceding papers. The anonymous author remarks that It is, notwithstanding, always of much importance to be able to look into the facts already established, and to observe that the gradual development of organic bodies in the animal and vegetable kingdom has followed precisely the same progress. While the simplest organized kinds of both kingdoms first appear, we also find repeated throughout the same gradations, as regards the gradual appearance and increase of the most perfectly organized beings in the strata of the earth's crust. (Anon, 1830, p. 127).
These articles demonstrate how Jameson's journal provided a space for broadly Wernerian geologists to present their ideas on the relationship between the history of the earth and the history of life. It is evident from them that the link between a progressive history of the earth and a progressive history of life was clearly a commonplace of the geological circles around Jameson. But did any of these thinkers ever speculate regarding the process responsible for the progressive change evident in the fossil record and make the leap to a transformist interpretation of this pattern? This is the question I will be addressing next.
Neptunian Geology and Transformism
In a footnote to the introduction to his System of Mineralogy (1804), Robert Jameson had laid out the main problems of natural history as he saw them as the nineteenth century began. For Jameson, the most important questions included: ''Were all animals and plants originally created as we at present find them, or have they by degrees assumed the specific forms they now possess? Are certain species become extinct? In what order and whither have they migrated? What change has climate produced?'' (Jameson, 1804, pp. xix-xx). Right at the very beginning of his career as professor of natural history at the University of Edinburgh, Jameson was already raising important questions regarding the history of life on earth. First among these questions was that of the transmutation of species. If James Secord was correct in his attribution of the paper ''Observations on the Nature and Importance of Geology'' to Jameson himself, then by 1826 Jameson had found an answer to his question. As Secord has noted, ''For the author of the 'Observations,' this progression of life is best explained through transmutation. Lamarck's theory is the logical consequence of Werner's.'' (Secord, 1991, p. 9).
The attribution of ''Observations'' to Jameson is not, however, absolutely secure, and Pietro Corsi has suggested that the author of the article may in fact have been Ami Boué (Corsi, 2011, p. 17). Most of the arguments in favor of Jameson would hold equally well for Boué, who had attended Jameson's classes when he was a medical student in Edinburgh between 1814 and 1817 and, like Jameson, to whom he still referred in his autobiography many decades later as ''mon maître'', was a Wernerian geologist (Boué, 1876, p. ii). In any case, the article shows every sign of having been written by a Wernerian, down to the characteristic terminology that is used throughout. That the author was steeped in Wernerian geological theory is evident from the vocabulary he used to discuss his subject. Whoever wrote it, the article certainly makes the connection between Wernerian geology and transformism quite explicit. The anonymous author comments at some length on the progressive nature of the fossil record, before going on to link this to the transformist theories of Lamarck. According to the author, the ''doctrine of petrifactions, even in its present imperfect condition, furnishes us with accounts that seem in favour of Mr Lamarck's hypothesis.'' (Anon, 1826, p. 297). The article notes the presence in the rocks of colder parts of the globe of fossils of species only found today in hot climates, indicating ''a great change in the temperature of their former situations'' (Anon, 1826, p. 299). If this is so, the author maintains, it raises an important question about the effect that such changes have on living things. The changes that can be observed to have been wrought on domesticated plants and animals by modifying their environment help to provide an answer: But are these forms as immutable as some distinguished naturalists maintain; or do not our domestic animals and our cultivated or artificial plants prove the contrary? If these, by change of situation, of climate, of nourishment, and by every other circumstance that operates upon them, can change their relations, it is probable that many fossil species to which no originals can be found, may not be extinct, but have gradually passed into others. (Anon, 1826, p. 298).
This passage, which contains unmistakable echoes of the theories of the French comparative anatomist and transformist Étienne Geoffroy Saint-Hilaire (1772-1844), makes clear that the author considers that directional change in the physical environment of living things is the ultimate cause of the transmutation of species, rather than an innate tendency to increase in complexity as proposed by Lamarck. Directional change in the surface of the globe, of the kind which is integral to the Wernerian model of the history of the earth, is therefore put at the center of this theory of transmutation.
Even if there is some doubt as to the attribution of the anonymous 'Observations' to Jameson, there is significant evidence from other sources to suggest that he was sympathetic to a transformist interpretation of the history of life. In the preface to the fifth edition of Cuvier's Theory of the Earth he wrote of ''Geology, which discloses to us the history of the first origin of organic beings, and traces their gradual development from the monade to man himself'' (Cuvier, 1827, p. vi). These words would appear to express a fundamentally transformist interpretation of the fossil record. In an appendix 'On the universal deluge' published in the same edition of the Theory of the Earth, Jameson went on to add the following telling observation: 'like the formation of the rocks, we observe a succession of organic formations, the later always descending from the earlier, down to the present inhabitants of the earth, and to the last created being who was to have dominion over them.' (Cuvier, 1827, p. 431). These passages would clearly seem to indicate that Jameson interpreted the succession of fossil forms found in the geological record in genealogical terms rather than as a series of progressive but separate creations.
The 'Observations' was not the only article that proposed a transformist interpretation of the history of life against the background of a directional theory of the earth to be published in the Edinburgh New Philosophical Journal during the 1820s. The following year, Jameson's journal published an article entitled ''Of the changes which life has experienced on the globe''. This has received less attention than the 1826 article discussed above, although it has been suggested by Adrian Desmond that it might have been by Grant (Desmond, 1989). While Grant is certainly a possible candidate as the author, there is no concrete information that would allow authorship to be confidently assigned. It was unlikely to be by Jameson, as the references to the important role of volcanism and ''the original igneous state of the earth'' would be incompatible with his orthodox Wernerian views on the original aqueous state of the globe (Anon, 1827, p. 299). This would not, perhaps, necessarily entirely rule out Boué, who himself admitted he was not as zealous a Wernerian as Jameson. 6 The article opens with a reference to the important role of fossils as evidence of ''the history and successive changes of the various races that existed before the present'' (Anon, 1827, p. 298). The author then goes on to establish two types of causes at work in the natural world. The first and most important act gradually but inexorably: ''The differences which vegetables and animals exhibit at the present day, according to the various climates or situations in which they occur, have been gradually established under the predominating influence of a small number of natural causes, and constitute at length the order of distribution which life now presents at the surface of the earth.'' (Anon, 1827, pp. 298-299). He then proceeds to establish the nature of these causes: These gradual variations in the temperature, the lowering of the general level of the seas, the equally successive and gradual diminution of the energy of volcanic phenomena arising from the original igneous state of the earth, as well as the strength and power of atmospheric phenomena, and of the tides -such were the regular, general, and continued natural causes of the modifications which life has undergone … (Anon, 1827, p. 299).
The author then calls on the fossil record as a witness to ''the successive and gradual change which we have pointed out.'' (Anon, 1827, p. 300). The second and less significant type of cause to which the author then turns consists of ''the irregular, and more or less violent and perturbing secondary causes of the partial vicissitudes experienced by animal and vegetable life.'' (Anon, 1827, pp. 299-300). This model of double causation is reminiscent of that of Lamarck, whose theory included both a continuously acting innate tendency towards progressive change and a secondary mechanism that depended on the effects of unpredictable environmental changes, which disrupted the simple pattern of development that would otherwise have prevailed. It differs radically from Lamarck, however, over the nature of the primary cause of transmutation, which is attributed to the effect of directional environmental change rather than an innate tendency of living things to become more perfect even in a constant environment. In this the anonymous author seems more inclined towards a Wernerian view of the history of the earth than Lamarck, who in geology was essentially a uniformitarian (Burkhardt, 1977, p. 111). Finally, the author expresses his overwhelming confidence in the correctness of his theory and appeals to its compatibility with natural law as confirmation: ''Our theory, which is founded on all the facts that have been established, cannot but prevail over the systems hitherto established, for it is in harmony with the natural laws of order and permanency which rule the universe'' (Anon, 1827, pp. 300-301).
The only transformist articles published in the Edinburgh New Philosophical Journal in the 1820s which appeared under their author's name were by Robert Grant. Grant was principally an invertebrate zoologist, and most of his published papers in this period dealt with marine invertebrates. It is well known and has been thoroughly documented by a number of scholars that Grant was one of the most significant transformist thinkers in Britain in the 1820s and 1830s (see in particular Desmond, 1984;Secord, 1991). At the time he wrote the articles he was resident in Edinburgh and was a leading figure in natural history circles in the city. As noted above, he had been a student of Jameson and there is some evidence in these articles that he adhered to an essentially Wernerian view of the history of the earth and saw the transmutation of species as occurring in the context of an earth undergoing gradual, directional change. He published a series of 16 papers between 1825 and 1827 in the Edinburgh Philosophical Journal and its successor the Edinburgh New Philosophical Journal. These papers mostly dealt with aspects of the biology of the invertebrate animals he had collected from the Firth of Forth. However, two of them contain explicitly transformist themes. It is in a paper published in 1826 ''On the structure and nature of the Spongilla friabilis'' that we find the first statement in print of Grant's transformist views. Towards the end of the article, he speculated regarding the relationship between the freshwater sponge Spongilla and the more complex marine sponges: From this greater simplicity of structure and internal texture, we are forced to consider it as more ancient than marine sponges, and most probably their original parent; and, as its descendants have greatly improved their organization, during many changes that have taken place in the composition of the ocean, while the spongilla, living constantly in the same unaltered medium, has retained its primitive simplicity, it is highly probable that the vast abyss, in which the spongilla originated and left its progeny, was fresh, and has gradually become saline, by the materials brought to it by rivers, like the salt lakes of Persia and Siberia. (Grant 1826a, pp. 283-284).
Grant here gave a concrete example of the principle expounded by Geoffroy Saint-Hilaire in his ''Organisation des gavials'' that when the ''physical and chemical agents'' to which an organism is exposed remain the same, so does the development of the organism, but when conditions change, the development of the organism exposed to these new conditions will be modified by them, provided the change is not so great as to kill it (Geoffroy Saint-Hilaire, 1825, pp. 151-152). Grant then went on to give his evidence for the alleged primitive character of this freshwater sponge, based on the siliceous nature of its skeleton, noting that ''its aptness for secreting silica, and the abundance of that earth in its skeleton, show the period of its creation to have been nearly synchronous with that of the siliceous or primitive rocks.'' (Grant, 1826a, p. 284). The implication here is that these primitive creatures first came into being in an ocean rich in silica, which was in the process of precipitating out to form the crystalline primitive rocks. Wernerians had long been aware that silica was soluble only in hot basic liquids, of the kind they imagined constituted the primordial ocean (Laudan, 1993, p. 181). The silicate rocks would therefore be the first to precipitate out as the ocean cooled and its chemical composition changed over time. Grant's clear espousal of this model is strong evidence that his transformist views were integrated with a fundamentally Wernerian model of earth history.
Later the same year Grant repeated his views on the evolution of sponges in a paper on the structure of siliceous sponges published in the first number of the Edinburgh New Philosophical Journal. Here Grant suggested a family tree of sponges based on the form of the spicula which make up the skeletons of many species. He traced the development of the spicula from the simple forms found in freshwater sponges through three stages of increasing complexity, first to forms where ''the unnecessary and probably hurtful embedded point has been removed'' and finally to the most complex jointed spiculum (Grant, 1826b, p. 350). Grant relates these changes directly to function, as he considered that the more advanced forms were better suited for defending the sponge against predators, as ''at the time of its formation, animalicules of larger magnitude swarmed in the heated ocean'' (Grant, 1826b, p. 350). Here Grant also made it clear that he believed that the oceans of earlier epochs had been hotter than at present and that the earth had consequently experienced progressive cooling during its geological history. Although there is strong evidence that Grant admired the theories of Lamarck, his belief that directional change in the physical environment played a role in driving the transmutation of species brought him closer both to Wernerian geology and to the theories of Geoffroy Saint-Hilaire. We know that Grant was an enthusiastic disciple of Geoffroy's views on unity of form in comparative anatomy and had got to know him well during his trips to Paris in the 1820s (Desmond, 1989, p. 56). Unlike Lamarck, whose views on geology were essentially uniformitarian, Geoffroy believed that there had been a gradual but profound change in the composition of the atmosphere over geological time, and that this was the motor for the transmutation of species (Geoffroy Saint-Hilaire, 1831, p. 79).
In 1829 Jameson published an anonymous report in the Edinburgh New Philosophical Journal of a memoir read by Geoffroy before the French Academy of Sciences and published in the Mémoires du Muséum d'Histoire Naturelle the previous year (Anon, 1829b, pp. 154-155). 7 This report was attributed to Grant by Desmond in a 1984 paper (Desmond, 1984, pp. 201-202). However, Pietro Corsi has recently demonstrated beyond doubt that the paper is in fact a direct translation of an anonymous article which appeared earlier the same year in the French newspaper Le Globe. 8 The article gives a detailed account of Geoffroy's transformist theories and supports his belief that changes in the composition of the atmosphere drove the transmutation of species. The content of this paper was clearly of great interest to Jameson, as on 25 April 1829 he ''gave an account of the doctrines of Geoffroy St Hilaire on the analogy between extinct animals and those now living'' to the Wernerian Society, although sadly no record of exactly what Jameson had to say about Geoffroy's ideas has survived to enlighten us as to the opinions he expressed on that occasion (Wernerian Natural History Society, 1808-1858). However, from the brief description of the talk from the minutes quoted above it seems almost certain that his paper focused principally on Geoffroy's transformist theories, which would surely have been congenial to Jameson, based on what we know about his own views. Given the coincidence of dates between Jameson's paper to the Wernerian Society and the publication of ''Of the continuity of the animal kingdom'' in his journal, which appeared in the April-June 1829 number, it seems highly probable that the paper he gave was largely based on that article. We have seen above that Ami Boué wrote an article on the progressive nature of the history of life and its relationship with the Wernerian model of the history of the earth which appeared in the Edinburgh New Philosophical Journal in 1826. While the picture of the history of life presented in this article is open to a transformist interpretation, it stops short of making any explicitly transformist claims. However, as Goulven Laurent has demonstrated, there is significant later evidence from other sources that Boué was indeed a transformist and an admirer of the theories of Lamarck and Geoffroy (Laurent, 1993). His credentials as a transformist are left in little doubt by a ''résumé of the progress of geological sciences during the year 1833'' he wrote for the Bulletin de la Société Géologique de France. In this work he stated that: The naturalist who restricts the circle of his ideas to the short duration of his life will necessarily be directed to the ancient idea of the species as a being sui generis formed once for all time, which must perpetuate itself as such, at least as long as the present laws of nature remain in effect. The authority of scholastic writings and the most ancient legislators also corroborate this opinion, engraved in the memory from the most tender infancy. On the other hand, in examining the whole scale of creations, living as well as fossil, in ignoring individual instances in order to see the whole, set in motion by a subtle material that is disseminated everywhere, one easily arrives with the Lamarcks, the Geoffroys, and other great naturalists, at an entirely different conclusion. (Boué, 1834, pp. 113-114). 9 It has recently come to light that another of Jameson's students, Henry H. 
Cheek (1807-1833), who studied medicine at Edinburgh between 1826 and 1832, also openly espoused transformist views (Jenkins, 2015). While Cheek was bitterly critical of Jameson's teaching as professor of natural history, the opinions he expressed in his writings on transformism are very much in harmony with the ideas we have seen were current in the circle around Jameson. In a key paper he published in 1830 in the Edinburgh Journal of Natural and Geographical Science, a journal he himself edited between 1829 and 1831, Cheek outlined his transformist ideas, concluding that ''Adaptation of the law by which organized bodies change with the variation of the conditions of existence; and separation of the functions of relation, and concentration of the vital functions, seems to be the mode of perfection.'' (Cheek, 1830, p. 65). His wording might be rather obscure, as is the case with much of his writing, but the suggestion that changes in the 'conditions of existence' are the driving force for the transmutation of species is clear. Cheek's ideas seem to derive more from the theories of Geoffroy St Hilaire, whose theories he vociferously defended in his journal, than from Wernerian geology. Nonetheless, they do go to show how commonplace the idea that the transmutation of species was linked to directional environmental change of the kind that was integral to the Wernerian model was in the Edinburgh of the late 1820s and early 1830s.
It would therefore appear that there is significant evidence that transformist ideas were widely discussed and relatively uncontroversial in Edinburgh natural history circles, at least up until the early 1830s, and that they were closely linked to directional, broadly Wernerian theories of the earth. However, evidence for transformist opinions in Edinburgh becomes scantier from the early 1830s onwards. The next section will make some suggestions as to why this might have been. 9 The original French text reads: 'Le naturaliste qui restreint le cercle de ses idées à la courte durée de sa vie, sera nécessairement porté à l'idée ancienne que l'espèce est un être sui generis formé une fois pour toutes, et devant se perpétuer tel, aussi long-temps du moins que dureront les lois actuelles de la nature. L'autorité des écrits scolastiques et des législateurs les plus anciens vient encore corroborer cette opinion gravée dans la mémoire de la plus tendre enfance. D'un autre côté, en parcourant toute l'échelle des créations, tant vivantes que fossiles, et en négligeant les individualités pour ne voir qu'un tout mis en mouvement par une matière subtile disséminée partout, on arrive aisément avec les Lamarck, les Geoffroy, et autres grands naturalistes, à une tout autre conclusion.'
The Eclipse of Transformism in Edinburgh
After 1832 open advocacy of progressive, gradualist visions of the history of life became increasingly rare in Edinburgh natural history circles. 10 Wernerian geology itself found few defenders after the mid-1820s, and Robert Jameson, the high priest of neptunism, became an increasingly isolated figure among geologists. Cuvierian catastrophism, championed in England by such figures as William Buckland, William Conybeare and Adam Sedgwick, for a time carried all before it. Buckland, for example, in his Bridgewater Treatise, suggested that the history of life on earth had been punctuated by ''revolutions and catastrophes, long antecedent to the creation of the human race'' that were apparent in the geological record (Buckland, 1836, vol. 1, p. 130). In his Discourse on the Studies of the University Sedgwick also asserted that ''our globe has been subject to vast physical revolutions'' (Sedgwick, 1834, pp. 25-26). Sedgwick went on to make clear that the creatures of the new creations that followed these revolutions showed a radical discontinuity with previous forms, and ''though formed on the same plan, and bearing the same marks of wise contrivance, oftentimes [are] as unlike those creatures which preceded them, as if they had been matured in a different portion of the universe and cast upon the earth by the collision of another planet.'' (Sedgwick, 1834, p. 30).
Catastrophism, implying as it did a more or less complete turnover of flora and fauna at the time of each catastrophe, was fundamentally incompatible with the picture of the gradual development of life driven by environmental change that Wernerian geology had suggested to many earlier geologists. Ironically, Jameson had done much to promote catastrophist ideas as the editor of successive editions of Cuvier's Theory of the Earth, for which he also provided extensive notes. However, the picture of the history of the earth presented in the Theory of the Earth may not have seemed to Jameson to challenge the Wernerian picture of gradual, progressive change in living things, as Cuvier himself admitted that marine organisms had undergone transmutations brought about by changes in the properties of the medium in which they lived. There is a striking statement of this in Jameson's translation for the fifth edition of the Theory of the Earth (1827), where, closely following the original French text, it is noted that: ''There has, therefore, been a succession of variations in the economy of organic nature, which has been occasioned by those of the fluid in which the animals lived, or which at least corresponded with them; and these variations have gradually conducted the classes of aquatic animals to their present state'' (Cuvier, 1827, p. 14). 11 Despite this, the majority of British geologists interpreted the obvious, radical discontinuities in the fossil record as evidence that an entire world of living things had been swept away and replaced with a new creation. Hugh Miller, one of the leading Scottish advocates of discontinuity in the history of life, was to write that ''The curtain drops at his command over one scene of existence full of wisdom and beauty -it rises again, and all is glorious, wise and beautiful as before, and all is new.'' (Miller, 1841, p. 102).
As the quotation above makes clear, it was not just Miller's catastrophist views on the history of life that led him to reject any slow transformation of life over geological time, but also the evangelical faith that underlay them. Miller was utterly opposed to the idea of gradual, progressive development, which he saw as denying God's power to create new species by supernatural intervention. In his Old Red Sandstone he asserted that: There is no progression. If fish rose into reptiles, it must have been by sudden transformation; -it must have been as if a man who had stood still for half a life-time should bestir himself all at once, and take seven leagues at a stride. There is no getting rid of miracle in the case (Miller, 1841, pp. 44-45).
The Evangelical Party within the Church of Scotland included many prominent scientists and natural historians among its ranks, including John Fleming and David Brewster as well as Miller. Some of these Evangelical figures, such as Fleming and Brewster, had been close associates of Jameson. Brewster co-edited the Edinburgh Philosophical Journal with Jameson until he broke with him in 1824 to found his own journal, the Edinburgh Journal of Science. The Evangelical Party's increasing militancy in the decades before their definitive split with the Established Church to form the Free Church of Scotland in the Disruption of 1843 had a profound influence on cultural developments in the period, not least in natural history (Baxter, 1993). In the two decades leading up to the publication of Vestiges of the Natural History of Creation in 1844 attacks against transformism from Edinburgh natural historians came almost exclusively from among the ranks of the Evangelicals and their allies, and even after the publication of that book led to more widespread condemnation of transformist ideas they very much led the charge in Scotland. A very early Evangelical response to Lamarckian transformism comes from the pages of the Memoirs of the Wernerian Society. This took the form of a paper by the Evangelical minister James Grierson (1791-1875), given to the Society in February 1824. Here Grierson rejects Lamarckian transformist theories ''which, if they do not evince much power of observation, or great accuracy of deduction, certainly shew no deficiency in power of fancy.'' (Grierson, 1823-1824). The dismissal of transformism as mere fanciful speculation was to be a principal mode of attack for the Evangelical opponents of transmutation. Although John Fleming seems to have supported a progressive history of life on earth in his Philosophy of Zoology of 1822, by 1829 he was completely denying any evidence for the progressive appearance of the different orders of animals in a review of J.E. Bichino's Systems and Methods in Natural History (1827), published in the Quarterly Review. He claimed that the fossil record did not after all present a picture of progressive development, but that in fact the remains of ''zoophytes and mollusca, along with the bones of vertebrated animals, and the stems of dicotyledonous plants'' could all be found in rocks from every geological epoch where there was evidence of life ([Fleming], 1829, p. 321). The main focus of his attack, however, was on Lamarckian transformism, asking why God could not have created ''Man directly, as easily as a Monas'' ([Fleming], 1829, p. 320). Fleming's later estrangement from Jameson and rejection of Wernerian geology and the progressive view of the history of life raises many questions, although it certainly bore some relation to his religious beliefs. Fleming was a deeply religious man and a minister of the Evangelical Party of the Church of Scotland, and after the Disruption of 1843, of the Free Church of Scotland. Like many Evangelicals with scientific interests he was horrified by the use made of geological and biological theories in the Vestiges of the Natural History of Creation, which he saw as an appalling assault on the principles of true religion. 12 Based on the testimony of his inaugural address quoted above, he had clearly come to believe that the dangerous ''development hypothesis'' outlined in that book had its roots in the Wernerian geology favored by Jameson and his associates, including Fleming himself in his younger days. 
Fleming's personal relationship with Jameson may also have deteriorated over the years. Jameson was a complex and difficult character, who succeeded in alienating many of those he had dealings with during his long career through his high-handed manner and arbitrary behavior. Fleming's later estrangement from Jameson is quite evident in a quotation found in John Duns' memoir of Fleming, only published in 1859 after the deaths of both men, where Fleming is quoted as describing Jameson as ''irregular, cold, and distant'' (Duns, 1859, p. xl). Whatever the reasons, Fleming in later years became one of the most implacable enemies of transformism and progressivism in British geology. By the early 1830s, the rise of catastrophist geology, the evidence for discontinuity in the fossil record and the concerted opposition of influential Evangelical natural historians would seem to have made gradualist, developmental theories of the history of life on earth appear increasingly untenable and support for such theories, at least in public, died away.
Conclusion
We have seen that there was a significant circle of figures promoting progressivist and transformist theories of the history of life associated with the Edinburgh natural history circle around Robert Jameson, Edinburgh's professor of natural history, in the early decades of the nineteenth century. Of those we have concrete evidence for, Grant and Jameson himself were resident in the city, while Boué had left Edinburgh for France after graduating, although it seems he continued to keep in touch with Jameson and his circle. Cheek, while not part of Jameson's circle and deeply critical of the professor himself, shared many of their opinions. There are likely to have been others, some represented perhaps by the anonymous articles in the Edinburgh New Philosophical Journal, but it is impossible to identify these with certainty. We have also seen that these transformists generally accepted a directional model of the history of the earth rooted in Wernerian neptunist geology and saw the gradually changing environment as a primary motor for the transmutation of species. In addition, we know there existed a wider circle of other Wernerians among Jameson's associates and correspondents who accepted a relationship between a directional history of the earth and a progressive history of life that must surely have strongly suggested a transformist interpretation, although they may not have made the final leap to accepting transformism themselves; in the 1820s these probably included John Fleming in Edinburgh and Jens Esmark in Norway. Grant, who was certainly a committed transformist, is sometimes portrayed as a radical figure on the margins of mainstream natural history circles (see in particular Desmond, 1989). This seems to have been very far from the case in the Edinburgh of the 1820s, where he appears to have maintained cordial relations with many of the key figures in scientific and natural history circles, including Jameson and Fleming, who were both very much establishment figures in their different ways. Grant seems to have been a particular friend of Fleming, who even named a newly discovered species of sponge Grantia in his honor (Fleming, 1828, p. 524). Both Jameson and Fleming supported Grant's successful application for the post of professor of comparative anatomy at University College London in 1827, as did a number of other key figures from the Edinburgh medical and scientific establishment ([Wakley], 1850, p. 690). Grant provides a shining example of how an openly transformist thinker could be fully integrated into the network of patronage and friendship that existed in Edinburgh natural history circles in the 1820s. His ability to publish articles in a respected journal openly avowing his transformist views surely must lead us to question any interpretation of him as a radical outsider at that time, even if, as Desmond has suggested, his situation in London after his move there in 1827 may have been very different (Desmond, 1984). Charles Darwin, who was a medical student at Edinburgh between 1825 and 1827, had no time for Jameson's Wernerian geology (Darwin, 2002, p. 26). 13 It has been suggested by a number of scholars that the development of Darwin's evolutionary theory may have been more deeply influenced by the transformist ideas he must have encountered in Edinburgh than has conventionally been accepted, or than Darwin himself was prepared to admit (see, for example, Secord, 1991;Hodge, 2014). 
However, while he may have been influenced by the ideas he would have heard discussed by Grant and some of his student contemporaries, Jameson's Wernerian geology does not seem to have been among the influences pushing him towards his theory of evolution. Although he attended Jameson's lectures during his second year in Edinburgh, Darwin seems not to have got much out of them (Ashworth, 1935, pp. 99-100). He much later described Edinburgh's professor of natural history in a letter to J. D. Hooker as ''that old, brown, dry stick Jameson'' (Darwin, 1985-, vol. 5, p. 195). In his posthumously published autobiography he went on to claim of Jameson's ''incredibly dull'' lectures that ''The sole effect they produced on me was the determination never as long as I lived to read a book on Geology or in any way study the science.'' (Darwin, 2002, pp. 25-26). It is therefore perhaps not surprising that he does not appear to have made the connection between a directional history of the earth and the transmutation of species that some of his older contemporaries undoubtedly did. He was certainly exposed to transformist ideas in Edinburgh, as it is well known that while he was there he had a short-lived but close friendship with Robert Grant, with whom he used to go on long invertebrate-collecting trips along the Firth of Forth. Darwin famously noted in his autobiography that Grant ''burst forth in admiration of Lamarck and his views on evolution'' one day while they were on a collecting trip together, although Darwin denied that this had any significant effect on his own thinking (Darwin, 2002, p. 24). When he did come to formulate his own theory of evolution, it was to grow not from a directional model of geohistory, but from the, to all appearances, less fertile ground of Charles Lyell's uniformitarian geology, a model of earth history that had been developed in part as a refutation of Lamarckian transformism (see, for example, Rudwick, 2008). However, Darwin's rejection of Wernerian geology does not rule out the possibility that some germs of his own theory of evolution may have been planted during his years in Edinburgh.
Unlike the Darwinian theory of evolution, with its roots in Lyell's essentially unchanging, uniformitarian vision of the earth, which would have been entirely congenial to Jameson's Huttonian enemies in the early decades of the nineteenth century, the transformists of the Edinburgh of the 1820s drew inspiration from a progressive, directional model of the history of the earth associated with Werner and his Edinburgh followers. Corsi has pointed out that Geoffroy Saint-Hilaire's model of transformism, unlike Lamarck's, ''had the additional virtue of being formulated in the context of a geological hypothesis linking the vast, progressive changes in environmental conditions to a parallel development of living forms.'' (Corsi, 1988, p. 215). It was just such a fertile soil that Wernerian geology provided for transformism in Edinburgh in the early decades of the nineteenth century. This should perhaps come as no surprise, for, as Corsi has demonstrated, a number of European thinkers, notably Jean-Claude Delaméthrie (1743-1817), made similar connections (see, for example, Corsi, 2012). Indeed, it seems that Lamarck may have been somewhat unusual among transformists in the early nineteenth century in espousing a uniformitarian geology.
In this paper I have tried to show how directional theories of the history of the earth, inspired principally by the work of Werner, opened up the possibility of a transformist solution to the problem of the progressive nature of the fossil record for a generation of Scottish geologists and natural historians. These figures seem to have largely belonged to a circle around Robert Jameson, the professor of natural history at the University of Edinburgh and Werner's most important British disciple. Jameson, who was clearly sympathetic to transformist ideas himself, taught a number of these figures as professor of natural history at Edinburgh, provided a forum for them to air their ideas through his editorship of the Edinburgh New Philosophical Journal and his presidency of the Wernerian Natural History Society, and also acted as an important patron to some of his younger colleagues. This circle of transformist and progressivist natural historians seems to have emerged in the first decades of the century before losing coherence after around 1830. The eclipse of neptunist theories of the earth and the ascendancy of catastrophist models less congenial to transformist interpretations of the history of life doubtless go some way to explain this phenomenon, while other social, religious and political factors, such as the growing Evangelical reaction against transformism, certainly also must have played a role. Because the transformist theories of Lamarck and Chambers did not rely on environmental change to drive transmutation, the decline of Wernerian geology did not lead to the complete disappearance of transformism from debates on the natural world, and geologists such as Lyell still found it worthwhile to refute them. However, critics found it relatively easy to dismiss Lamarck's theory as a fanciful, speculative system, and it did not seem to have had many adherents in elite natural history circles after the early 1830s, while Chambers' development hypothesis met an almost universally hostile reception from expert critics on its publication in 1844. Another generation would pass before transformist ideas would once again be taken seriously by British scientists and natural historians. And when that happened they would reemerge in a very different context. Nonetheless, the speculations regarding the history of life that took place among Robert Jameson's students and in the pages of the Edinburgh New Philosophical Journal were surely not entirely without consequence. At a time when Edinburgh was the leading center of medical education in the English-speaking world, the exposure of a generation of Edinburgh students to a gradualist, progressive vision of the history of life, fully compatible with the transmutation of species, must surely have colored their reception of evolutionary ideas when they again bubbled to the surface in the succeeding decades.
"year": 2016,
"sha1": "b770f3cf25c9458155498d7f7e792683d03768ed",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10739-015-9425-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2b0305df173ca06cba006c6f631b65fc94428357",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Geology",
"Medicine"
]
} |
The Mediating Role of Orthorexia in the Relationship between Physical Activity and Fear of COVID-19 among University Students in Poland
Previous research has shown that the COVID-19 pandemic has had a significant impact on the wellbeing and lifestyle of populations worldwide, including eating and physical activity (PA) patterns. The present study aims to examine the mediating effect of orthorexia on the relationship between PA and fear of COVID-19. A sample of 473 university students from Poland with a mean age of 22 years (M = 22.04, SD = 2.90; 47% women) participated in the cross-sectional online survey study. Continuous variables were measured using the Fear of COVID-19 Scale (FCV-19S) and the Test of Orthorexia Nervosa (TON-17), while the categorical variable divided participants into physically active and inactive groups according to the WHO criterion (150 min per week). Weak gender differences were found. Physically active participants showed lower fear of COVID-19 and higher orthorexia scores than inactive participants. Orthorexia acted as a suppressor variable, increasing the negative predictive value of PA on fear of COVID-19. The model of cooperative suppression explained 7% of the variance in FCV-19S scores. The mediation mechanism suggests that health-related behavior could help reduce fear of COVID-19, but caution is necessary for people with addictive behavior tendencies. Universities should support students by offering programs focused on promoting healthy lifestyles and improving wellbeing.
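The suppression pattern summarized above can be made concrete with a small regression sketch. The snippet below is a minimal illustration using simulated data; the variable names, effect sizes, and the plain least-squares approach are assumptions for exposition and do not reproduce the study's actual data or analysis. It shows how a suppressor (here, orthorexia) that is positively related to physical activity, yet positively predicts fear, can make the negative direct effect of physical activity on fear larger in magnitude than its total effect.

```python
# Minimal, illustrative sketch of a cooperative-suppression check.
# All variable names, effect sizes, and the simulated data below are
# assumptions for illustration; they are not the study's data or results.
import numpy as np

rng = np.random.default_rng(0)
n = 473  # sample size reported in the abstract

pa = rng.normal(size=n)                                    # physical activity (standardized)
orthorexia = 0.5 * pa + rng.normal(size=n)                 # positively related to PA (assumed)
fear = -0.3 * pa + 0.4 * orthorexia + rng.normal(size=n)   # fear of COVID-19 (assumed)

def ols_slopes(y, *predictors):
    """Ordinary least-squares fit; returns [intercept, slope_1, slope_2, ...]."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

total_effect = ols_slopes(fear, pa)[1]               # PA -> fear, ignoring orthorexia
direct_effect = ols_slopes(fear, pa, orthorexia)[1]  # PA -> fear, controlling for orthorexia

print(f"total PA effect:  {total_effect:+.2f}")
print(f"direct PA effect: {direct_effect:+.2f}  (larger magnitude once the suppressor is controlled)")
```

In a cooperative suppression model of this kind, adding the mediator to the regression strengthens rather than weakens the predictor's coefficient, which is the signature reported in the abstract.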
Impact of COVID-19 on Lifestyle
The new coronavirus disease "COVID-19" was identified in December 2019 in the Wuhan region of China and spread worldwide in spring 2020. In Poland, the coronavirus disease 2019 (COVID-19) pandemic began on 4 March 2020 [1]. Lockdown-related social distancing and numerous restrictions disrupted everyday life, including social and family life, school, and work. A systematic review and meta-analysis [2] found a pooled prevalence of anxiety of 33% and of depression of 28% in the general public, with risk factors including female gender, being a nurse, lower socioeconomic status, high risk of contracting COVID-19, and social isolation. Muyor-Rodríguez et al. [3] suggested that college students are an especially vulnerable group for mental health problems during the COVID-19 pandemic. Indeed, nationwide studies indicate that at least one mental health problem (including high stress, anxiety, or depression) was found in 45% of Chinese college students [4] and 42.8% of university students from France [5]. Furthermore, high levels of stress, anxiety, and depression were found in university students from many countries [6][7][8]. In addition, a systematic review and meta-analysis showed that mental health problems in university student populations increased during the COVID-19 pandemic compared to pre-pandemic times [9].
The COVID-19 pandemic significantly impacted lifestyle, including changes in dietary habits and physical activity (PA) patterns [10][11][12][13][14][15][16][17][18][19][20]. Izzo et al. [17] found changes in eating habits among 33.5% of respondents (of whom 81% reported an increase in frozen food consumption) and a reduction in physical activity in 70.5% of the Italian population during the COVID-19 lockdown. A scoping review [11] found an increase in unfavorable dietary habits (e.g., increased consumption of alcohol, sweets, fried food, snacks, and processed foods, and reduced fresh produce intake), weight gain, and a reduction in physical exercise during the pandemic. Cecchetto et al. [12] showed that isolation and lockdown have adverse effects on eating behavior and emotional wellbeing through increased emotional distress, binge eating, and higher BMI scores. The impact of the coronavirus pandemic on eating and exercise behaviors was also investigated in an Australian sample of individuals with an eating disorder and of the general population [19]. Increased restricting, binge eating, and purging were found in both groups, while a higher level of exercise behavior was reported in people with eating disorders (EDs) than in the general population. The present study aims to examine the relationship of physical activity with orthorexia (an eating disorder characterized by restrictive eating tendencies) and fear of COVID-19 among a sample of university students from Poland.
Healthy Lifestyle among University Students
Lockdown and corresponding social isolation also affected the dietary patterns of university students, which can lead to a higher frequency of excessive weight and obesity [13,21,22]. However, an increased level of physical activity during lockdown was usually related to a healthier diet among university students [13]. The level of PA decreased significantly during the COVID-19 pandemic [5]. In a sample of Turkish students, 38% were physically active and met the WHO criterion (over 150 min per week) before the pandemic, while only 13% remained active [5]. The prevalence of a low PA level was 37.1% among Swiss university students, and 36.1% reported prolonged sitting time (>8 h/day) during the COVID-19 lockdown [23]. Among university students from Ukraine, 43% were engaged in PA ≥ 150 min weekly, 24% met the criteria for generalized anxiety disorder (GAD), while 32% met the criteria for depression during the first wave of the COVID-19 pandemic [24]. Furthermore, students reported being more involved in PA before the COVID-19 outbreak than during the lockdown.
Studies show that university students do not exhibit satisfactory health-oriented behaviors in terms of their physical activity, diet, alcohol and drug use, preventive practices, mental health, stress, and social relationships. However, female university students performed better than men on health-related behaviors [25][26][27]. A recent study examined the health-related behaviors of university students from Poland, Croatia, Turkey, Lebanon, Spain, Romania, and Italy [28]. Self-rated health was positively related to female gender, daily breakfast eating, physical activity, and time spent studying, and negatively related to BMI, stress, and smoking.
In a representative sample of students from a Spanish university, 68.4% of men and 48.4% of women reported practicing physical activity [29]. Physical activity was positively associated with higher fruit consumption. Romaguera et al. [29] found that physically active students tended to engage in other healthy habits. In addition, a more recent study indicated that most university students from Spain (69.6%) were involved in physical activity, which was related to less sedentary behavior [30]. Badicu [31] found the prevalence of low, moderate, and high levels of PA in 17.65%, 30.58%, and 51.76% of men, and in 23.02%, 43.16%, and 33.81% of women, respectively, in a sample of students of the Faculty of Physical Education and Sport. Many students demonstrated good sleep quality and a high level of PA. However, some differences were found depending on the year of study, gender, and academic field. A higher percentage of men than women engaged in a high level of PA. A moderate correlation was found between PA level and sleep quality. Dąbrowska et al. [32] showed that 46% of Polish physiotherapy students had a high PA level and 54% a moderate level, while a low level of physical activity was found in 26% of medical students.
Orthorexia Nervosa
Orthorexia Nervosa (ON) can be defined as "a pathological obsession, fixation or preoccupation with healthy food" [33] (p. 1); however, researchers have discussed different diagnostic criteria [33][34][35][36][37]. As a trait, ON is characterized by restrictive and avoidant eating behavior and a tendency to pathological obsession and preoccupation with healthy, strictly organic, and biologically pure foods [36]. ON is not yet included in the International Statistical Classification of Diseases and Related Health Problems (ICD-10) or the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), because it is not clear whether ON is a distinct eating disorder (ED) syndrome or a variant of avoidant/restrictive food intake disorder (ARFID), anorexia nervosa (AN), obsessive-compulsive disorder (OCD), somatic symptom disorder, obsessive-compulsive personality disorder (OCPD), psychotic spectrum disorders, or generalized anxiety disorder [38][39][40][41][42].
The prevalence of orthorexia depends on the diagnostic cut-off criteria, the instrument used, or the geographic region, and ranges from 1% to 89%, as suggested by review studies [43][44][45]. However, Varga et al. [45] found the average prevalence of ON to be 6.9% in the general population, while rates between 1% and 7% were estimated using the Düsseldorf Orthorexia Scale (DOS ≥ 30) [46], and 5.5% using the Test of Orthorexia Nervosa (TON-17 ≥ 61) [47] in the Polish sample. However, the prevalence of ON has rarely been assessed among university students. We have found only two such studies. Brytek-Matera et al. [48] found ON prevalence rates of 2.3% and 2.9% (using the DOS) among Spanish and Polish samples of university students. Orthorexic tendencies were present in 65.31% of 320 university students from Lebanon (ORTO-15 < 40) [49]. Reynolds [50] found a prevalence rate of 6.5% among Australian adults at a Sydney university, using a cut-off score of ORTO-15 < 35 together with diagnostic criteria. Comparing two measurement methods, the ORTO-11-ES and the DOS-ES, Parra-Fernández et al. [51] showed that the prevalence of ON was 25.2% or 10.5%, respectively, among Spanish university students.
Research indicates that ON may lead to poor physical and mental health. In particular, ON is positively related to eating disorders (EDs), obsessive-compulsive traits, stress, anxiety, and depressive symptoms, and negatively associated with psychological wellbeing and life satisfaction [40,52,53]. Recent studies also found correlations between ON and EDs, healthy behaviors, anxiety, obsessive-compulsive disorder (OCD), and depression [47,54].
Fear of COVID-19
One of the most common emotions emerging during the pandemic is the fear of COVID-19, which can include contracting the disease or infecting loved ones, death of loved ones, severe course of illness in loved ones, healthcare failure, and the consequences of the pandemic at an individual and social level [55]. Tzur Bitan et al. [56] found two dimensions of fear of COVID-19 using the Fear of COVID-19 scale (FCV-19S), namely emotional fear reactions and symptomatic expressions of fear. Fear of COVID-19 is positively correlated with stress, depression, anxiety (considered a trait and a state), chronic illness, germ aversion, being in an at-risk group, and having a family member die of COVID-19, while it is negatively related to life satisfaction [2,[56][57][58].
Fear of COVID was examined in university students [2,[59][60][61][62][63][64][65]. Norwegian nursing students showed higher levels of fear of COVID compared to the general reference population [61]. Fear of COVID-19 (measured by FCV-19S) was positively related to psychological distress and negatively associated with general health and quality of life (QoL). Extremely high fear of COVID-19 can lead to coronaphobia. Arora et al. [66] defined coronaphobia as "an excessive triggered response of fear of contracting the virus causing COVID-19, leading to accompanied excessive concern over physiological symptoms, significant stress about personal and occupational loss, increased reassurance and safety-seeking behaviors, and avoidance of public places and situations, causing marked impairment in daily life functioning. The triggers involve situations or people involving probability of virus contraction, such as meeting people, leaving the house, traveling, reading the updates or news, falling ill, or going for work outside" (p. 2). Coronaphobia consists of three components: (1) physiological response of fear, which can appear with such symptoms as palpitations, tremors, difficulty in breathing, dizziness, change in appetite, and sleep; (2) cognitive response and preoccupation with threat-provoking cognitions, which can also trigger emotional response (e.g., sadness, guilt, anger); (3) behavioral response by engagement in avoidance behaviors (e.g., avoiding gatherings, shopping, public places, and social situations) or excessive engagement in health-related behaviors (e.g., washing hands, preparing healthy nutrition).
The Relationships between Physical Activity, Orthorexia, and Fear of COVID-19
Rodgers et al. [67] assumed fear of COVID-19 might increase the risk of EDs. People can pursue restrictive diets focused on increasing immunity. Increased stress and negative emotions due to the pandemic and social isolation can also increase the risk of ED. Evaluating and assessing these factors in different cultural environments is crucial to better understand the impact of the pandemic on risk and recovery in EDs. Indeed, research indicates fear of COVID-19 was related to greater bulimic behavior among Italian college students [68]. A high prevalence of orthorexia (67% in men and 83.2% in women, using ORTO-11) and anxiety symptoms (62.4% in men and 95.4% in women, using GAD-7) was found among adults during the COVID-19 pandemic [69]. Moreover, orthorexia and anxiety symptoms were positively associated. We assume that orthorexic behavior focused on a healthy diet can be a coping strategy to reduce fear of COVID-19.
Numerous studies showed a positive association between ON and physical activity or exercise addiction [49,[70][71][72][73][74][75][76][77][78][79][80][81][82][83][84][85]. Segura-García et al. [81] indicated athletes presented higher ON scores than the control group. In addition, professional sport involvement was a predictor of ON [81]. ON was also predicted by frequent exercising in a sample of Portuguese fitness participants [70] and by endurance sport practice (sports with predominantly aerobic activity > 150 min/week) in Italian athletes [71]. A recent systematic review and meta-analysis [82] showed a small correlation between ON and exercise and a medium correlation between ON and exercise addiction. Strahler et al. [82] suggest that comorbidity between ON and exercise addiction should be explained in future research, focusing on clinical relevance, underlying mechanisms, vulnerability, and risk factors.
The Current Study
The present study aims to examine the mediating role of orthorexia in the relationship between PA and fear of COVID-19. Previous research found that PA is negatively associated with anxiety and depression as well as negative emotions in university students during the COVID-19 pandemic [6,24,86,87]. The physically inactive group had higher scores of anxiety and depression than the physically active group [24]. Moreover, insufficient physical activity was a predictor of high anxiety among university students from Poland and Ukraine during the COVID-19 pandemic [6]. The total physical activity level and low-intensity physical activity were inversely associated with depressive symptoms in Chinese college students [86]. A longitudinal study of college students in China indicates that physical activity directly alleviated general negative emotions [87]. We assume there are inverse relationships between PA and fear of COVID-19. However, an extreme PA level or exercise addiction may be positively related to fear of COVID-19. A healthy diet and exercising are two correlated dimensions of a healthy lifestyle. Orthorexia can mediate the relationship between PA and fear of COVID-19, since being involved in both orthorexia and compulsive exercise can be used as a coping strategy to decrease general anxiety and fear of COVID-19. Due to the pandemic-related restrictions on social contact, paper-and-pencil versions of the questionnaires could not be administered. Therefore, all variables of interest (including PA, orthorexia, and fear of COVID-19) were measured using an online survey with standardized questionnaires developed or validated in the Polish cultural context.
Study Design
An online questionnaire was used for a cross-sectional self-report survey. The invitation to participate in the study was posted on the university e-learning website from 14 April to 16 June. As the e-learning platform was the main channel for attending online classes during the lockdown, the invitation was aimed at the university's entire student population.
Students were asked to participate in the study by anonymously completing a questionnaire. Participants' personal data were not obtained, and no compensation was offered as an incentive to participate. The sample size was determined a priori using G*Power version 3.1.9.4 for Windows (Universität Kiel, Germany) [88]. To detect a medium effect size of 0.30 in a bivariate correlation (two-tailed) with 95% power (1 − β error probability) and α = 0.05, G*Power suggests that 138 participants are needed in the study group (critical r = −0.167, 0.167; power = 0.950). For Student's t-test (two-tailed) comparing two independent gender samples (women and men), with a moderate effect size d = 0.50, 95% power (1 − β error probability), and α = 0.05, the required sample size is 210 people (non-centrality parameter δ = 3.623; critical t(208) = 1.971; minimum sample size per gender n = 105; total sample size N = 210; power = 0.950). For regression analysis with two predictors, a medium effect size f² = 0.15, 95% power (1 − β error probability), and α = 0.05, the expected sample size is 107 (critical F(2, 104) = 3.08; non-centrality parameter λ = 16.05).
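For readers without access to G*Power, the a priori figures above can be approximated with open-source tools. The sketch below is illustrative only: the statsmodels call and the Fisher-z approximation for the correlation case are our assumptions, and the results may differ slightly from G*Power's exact routines (e.g., 139 vs. 138 for the correlation).

import numpy as np
from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower

# Two-sided independent-samples t-test: d = 0.50, alpha = 0.05, power = 0.95
n_per_group = TTestIndPower().solve_power(effect_size=0.50, alpha=0.05,
                                          power=0.95, ratio=1.0,
                                          alternative='two-sided')
print(f"t-test: n per group = {np.ceil(n_per_group):.0f}")  # ~105 per gender, ~210 total

# Two-sided test of a Pearson correlation r = 0.30 via the Fisher-z approximation
r, alpha, power = 0.30, 0.05, 0.95
n_corr = ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) / np.arctanh(r)) ** 2 + 3
print(f"correlation: N = {np.ceil(n_corr):.0f}")            # ~139 (G*Power: 138)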
Participants
All Opole University of Technology students (about 6000 people) willing to participate and above 18 years old were able to participate in the study. The invitation received 490 responses and 482 surveys were completed, while eight people declined to participate at the informed consent stage (a refusal rate of 1.6%). However, to minimize the source of bias and allow gender comparisons, only students who disclosed their gender as either male or female were included in the final sample, which consisted of N = 473 participants, of whom n = 222 (47%) were female and n = 251 (53%) were male. Participants' age ranged between 19 and 47 (M = 22.04, SD = 2.90). Students were members of one of six departments: Civil Engineering and Architecture; Economics and Management; Electrical Engineering, Automatic Control and Informatics; Production Engineering and Logistics; Mechanical Engineering; and Physical Education and Physiotherapy. Details are shown in Table 1. Most university students reported engagement in walking (n = 294, 62.16% of the total sample), strength exercises (n = 179, 37%), cycling (n = 135, 28%), and jogging (n = 125, 26.42%). Selected forms of PA are shown in Figure 1. Among participants, 7.4% (n = 35) did not undertake any physical activity during the last week, 13.5% (n = 64) were involved in PA on one day, 19.2% (n = 91) on two days, 20.5% (n = 97) on three days, 19.2% (n = 91) on four days, 9.7% (n = 46) on five days, 4.9% (n = 23) on six days, and 5.5% (n = 26) on seven days a week. In the total sample (N = 473), university students were engaged in PA on average 3 days a week (M = 3.07, SD = 1.82, ranging from 0 to 7 days in the last week). The mean duration of PA during one typical session was approximately one hour (M = 57.51, SD = 44.68, ranging from 0 to 420 min daily), and approximately 200 min weekly (M = 201.29, SD = 225.78, ranging from 0 to 1800 min weekly). A total of 222 students exercised a minimum of 150 min per week (Active group, 46.9% of the total sample), and 251 did not meet the criterion of a minimum of 150 min of PA per week (Inactive group, 53%).
Measures
The Fear of COVID-19 Scale (FCV-19S) [57] is a seven-item scale designed to measure the extent to which people are afraid of negative outcomes of the global COVID-19 pandemic. It consists of statements concerning different symptoms of fear caused by the COVID-19 virus. Participants assess how strongly they agree with said statements on a scale from 1 (strongly disagree) to 5 (strongly agree), and the final score is calculated by adding up each score. The internal consistency of the FCV-19S was good in the original study (Cronbach's α = 0.82) [57]. The reliability coefficient was even better in the present sample (Cronbach's α = 0.88).
Orthorexia was measured with the 17-item Test of Orthorexia Nervosa (TON-17) questionnaire, with a response scale ranging from 1 (totally disagree) to 5 (totally agree) [47]. All items form a general score (by addition) and can also be divided into three subscales: control of food quality (CFQ), fixation on health and healthy lifestyle (FHHL), and disorder symptoms (DS). A cut-off score of TON-17 ≥ 61 is used to classify orthorexia [47]. The reliability coefficient in the previous study was 0.82, 0.79, 0.80, and 0.81 for CFQ, FHHL, DS, and the total TON-17, respectively [47]. In the present study, the reliability coefficients ranged from 0.73 to 0.81 (Cronbach's α = 0.73 for CFQ, α = 0.73 for FHHL, α = 0.78 for DS, and α = 0.81 for the total score of TON-17).
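A minimal sketch of the scoring rules described above (simple sums of 1–5 item responses, the TON-17 cut-off of 61, and Cronbach's alpha as the reliability index) is given below; the simulated data and variable names are ours, not the study's.

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of Likert responses
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Fake TON-17 responses (100 respondents, 17 items scored 1-5); random data,
# so alpha will be close to zero here, unlike the ~0.81 reported for real data.
responses = np.random.default_rng(1).integers(1, 6, size=(100, 17))
total_score = responses.sum(axis=1)     # TON-17 total score
orthorexia = total_score >= 61          # classification cut-off used in the study
print(cronbach_alpha(responses), orthorexia.mean())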
Physical Activity (PA) was assessed by two questions concerning the frequency and duration of physical activities in the past week. The responses were then multiplied to give the number of active minutes during the last week. The WHO global recommendation of PA for maintaining health among adults between 18 and 64 years old is at least 150 min of moderate-intensity aerobic PA or at least 75 min of vigorous-intensity aerobic physical activity throughout the week, or an equivalent combination of moderate- and vigorous-intensity activity [89]. However, it is difficult for an average non-expert person to distinguish between low-, moderate-, and vigorous-intensity PA. Moreover, involvement in PA should be tailored to individual preferences, capabilities, opportunities, and circumstances, and should be appropriate for people with chronic diseases or physical disabilities. Therefore, we abandoned the question regarding intensity and assumed that the active group would include those students who exercise regularly at least 150 min per week, regardless of the degree of intensity. This approach was used in a previous study [24]. Active participants (PA ≥ 150 min a week) were coded as 0, and Inactive participants (PA < 150 min a week) were coded as 1.
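The derivation of weekly PA minutes and of the Active/Inactive coding described above amounts to a single multiplication and a threshold; a small sketch with hypothetical column names and example values is shown below.

import pandas as pd

df = pd.DataFrame({"pa_days_last_week": [0, 3, 5],          # frequency item
                   "pa_minutes_per_session": [0, 60, 45]})  # duration item

# Weekly active minutes = frequency x duration
df["pa_weekly_min"] = df["pa_days_last_week"] * df["pa_minutes_per_session"]
# Criterion used in the study: >= 150 min/week -> Active (coded 0), otherwise Inactive (coded 1)
df["pa_group"] = (df["pa_weekly_min"] < 150).astype(int)
print(df)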
Statistical Analyses
Firstly, the reliability analysis of the utilized scales was performed. All scales and subscales showed at least good reliability [90] as measured by Cronbach's α. Descriptive statistics, such as range of scores, mean (M), median (Mdn), standard deviation (SD), skewness, and kurtosis, were computed for all scales, along with Shapiro-Wilk's normality test. Most variables showed marginal deviations from the normal curve, the exceptions being the FCV-19S and the disorder symptoms scale of the TON-17, which were strongly positively skewed and leptokurtic. However, since the sample size was large and values within ±2 are acceptable [91] (p. 114), parametric tests were used in further statistical analysis. As the study's main goal was to test the mediation effect using Hayes' PROCESS macro, which is based on bootstrapping techniques (a nonparametric approach), the assumption of normally distributed variables is not required.
An association between gender and PA (considered as categorical variables) was examined using a contingency table and Pearson's χ² test, with the φ coefficient as a measure of effect size. Student's t-test was used to examine differences between genders and between active and inactive participants, with Cohen's d to examine effect size. The interaction of gender and physical activity was checked using two-way ANOVA, and Pearson's r was calculated to assess the associations between variables. The mediating role of orthorexia in the relationship between physical activity and fear of COVID-19 was tested using Model 4 of the PROCESS macro v. 3.5 for SPSS, designed by Hayes [92]. A bootstrapping procedure with 5000 resamples was used to assign measures of accuracy to sample estimates. All analyses were performed using the Statistical Package for the Social Sciences (IBM SPSS Statistics, ver. 25, 2019, Predictive Solutions Sp. z o.o., Kraków, Poland). The sample size exceeded the needed number of participants, which increased the power to 1.00 for Student's t-test, Pearson's correlation, and regression analysis, as suggested by a G*Power post hoc test.
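The mediation model itself was estimated with Hayes' PROCESS macro in SPSS; the sketch below is only a conceptual open-source analogue of Model 4, estimating the indirect effect a × b and its percentile bootstrap confidence interval with 5000 resamples. The variable names and the OLS-based implementation are our assumptions, not the study's code.

import numpy as np
import statsmodels.api as sm

def indirect_effect(x, m, y):
    # a path: X -> M
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    # b path: M -> Y, controlling for X
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    # x = PA group (0 active, 1 inactive), m = orthorexia (TON-17), y = FCV-19S
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    rng = np.random.default_rng(seed)
    n = len(x)
    boot = np.array([indirect_effect(x[i], m[i], y[i])
                     for i in (rng.integers(0, n, n) for _ in range(n_boot))])
    return indirect_effect(x, m, y), np.percentile(boot, [2.5, 97.5])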
Descriptive Statistics
In the first step of statistical analyses, descriptive statistics were computed, and distributions of variables were examined. Values of skewness and kurtosis exceeded an absolute value of 1 for two of the variables but did not exceed ±2, as shown in Table 2. The prevalence of orthorexia was determined by using a cut-off score of TON ≥ 61. In the total sample, 21 students scored 61 or more, corresponding to an ON prevalence of 4.44%. Among participants, 222 people met the minimum of 150 min of PA per week (47% of the total sample), while 251 students (53%) were included in the physically inactive group.
Group Comparisons
A contingency table was created to compare frequencies of physically active men (n = 128) and women (n = 94) with inactive male (n = 123) and female (n = 128) participants. Pearson's χ² test (two-tailed) did not show a significant association between PA and gender, χ²(1) = 3.54, p = 0.060, φ = 0.09. Gender differences were examined with regard to all measured dimensions (Table 3). Women showed significantly higher levels of fear of COVID-19 and fixation on health and healthy diet (one of the subscales of orthorexia) than men. Men scored higher than women in disorder symptoms of orthorexia. However, all differences had small effect sizes.
Fear of COVID-19 also showed significant differences depending on the activity level (see Table 3 and Figure 2 for more details). Inactive participants scored higher in fear of COVID-19 than those exercising at least 150 min a week. For orthorexia and its two subscales (control of food quality and fixation on health and healthy diet), the opposite effect was found: active people obtained higher orthorexia scores than those who were inactive. A medium effect size was found for FCV-19S and FHHL, while a small effect was found for the total score of the TON-17 and the CFQ subscale.
The Relationships between PA, Orthorexia, and Fear of COVID-19
Fear of COVID is significantly correlated with orthorexia (r = 0.20, p < 0.001) and its two subscales, control of food quality (r = 0.16, p < 0.001) and disorder symptoms (r = 0.19, p < 0.001). All correlations are positive, suggesting that a higher risk of orthorexia is connected with a higher level of fear of COVID. However, the strength of these associations is weak.
As the final step of the analyses, the mediating effect of orthorexia on the relationship between PA and fear of COVID-19 was verified (Figure 3). As assumed, physically inactive participants scored higher in fear of COVID-19, b = 1. We conclude that all associations are significant, and mediation is confirmed. However, the regression coefficient value increased when orthorexia was included in the regression model, which indicates cooperative suppression [93] or reciprocal suppression [94].
Associations between Variables
This study aimed to examine the mediating effect of orthorexia on the relationship between PA and fear of COVID-19. The study found evidence of a significant reciprocal (cooperative) suppression effect in the regression analysis. Orthorexia is a suppressor variable that increases the predictive value of PA on fear of COVID-19 by its inclusion in a regression equation [95]. Overall, a high level of PA contributes to better wellbeing. In contrast, orthorexia is related to poorer mental and physical health. However, high orthorexia is associated with a high PA level. Therefore, orthorexia demonstrates a suppressing effect on the relationship between PA and fear of COVID-19. Because people with high orthorexia are more likely to have a high fear of COVID-19, orthorexia as a mediator suppresses some parts of PA variance (most likely those related to excessive PA) and increases PA's effect on fear of COVID-19. Orthorexia as a suppressor contributed an additional 5% of the variance beyond the separate direct effect of PA on fear of COVID-19.
These results indicate that PA comprises three distinct, opposing components: a low PA level is unhealthy, a moderate PA level is beneficial for health, and exercise addiction has adverse consequences for mental and physical health. This complexity of antagonistic tendencies in PA cannot be captured by a simple correlation. The present findings allowed us to distinguish these opposing processes specifically during the COVID-19 crisis.
Although physical inactivity predicts high fear of COVID-19, extremely high PA (indicating exercise addictive tendencies) can also increase fear of COVID-19, likely because (together with orthorexia) it includes a pathological anxiety component. On the other hand, health-oriented compulsive behaviors related to diet (orthorexia) and PA (exercise addiction, EA) may be performed to reduce fear of COVID-19, and a vicious circle can start to spin. Increasing orthorexic and EA behaviors decreases the fear of COVID-19 for a moment, but continuous concerns about health trigger a further reaction in a more compulsive way. Solymosi et al. [96] distinguish functional and dysfunctional fear of COVID-19. Functional concerns motivate a behavioral response that helps people manage insecurity without harming their quality of life. In contrast, dysfunctional fear is present when quality of life is subjectively reduced by worry, precautionary behavior, or both. Further research is necessary to replicate the present findings. Additionally, a separate analysis could be conducted in the future in distinct groups: inactive people, those with moderate PA levels, and persons with exercise addiction.
Numerous studies showed an association between excessive exercise and ON. Among people with orthorexic behaviors, 5.2% showed a sedentary lifestyle, 45% were lightly physically active, 41.15% were active, while 8.6% were very active [49]. For comparison, among students with regular eating behavior, 9.9% demonstrated a sedentary lifestyle, 46.85% were lightly physically active, 36.93% were active, and 6.3% were very active. Clifford and Blyth [72] found higher ON prevalence among athletes who undertake high volumes of exercise. PA (measured using the IPAQ-SF) was significantly associated with ON tendencies (ORTO-15) among Dutch university students [74]. When study majors were compared, a higher proportion of people with ON was found among exercise science students (in particular among men) than among business students [78]. Furthermore, Oberle et al. [79] indicated that university students who scored high in orthorexia (using the EHQ) were internally driven to exercise to improve their physical and mental health. Orthorexia was positively correlated with aerobic and strength-training exercise levels, exercise addiction, internal exercise motivation, and exercise motivation for psychological, social, health, and body improvement.
Kiss-Leizer et al. [77] suggested that obsessive features of sports activities play an essential role in ON. Indeed, Rudolph [80] found a significant positive correlation between ON (using the DOS) and exercise addiction among German members of fitness studios. Research indicates that social desirability, guilt over skipping training, and health anxiety were the strongest predictors of ON. Almeida et al. [70] showed that ON is associated with other non-dietary behaviors focused on a healthy lifestyle and aesthetic concerns, such as physical appearance and frequent exercising. In particular, frequent exercising was a predictor of ON in the sample of Portuguese fitness participants. Bert et al. [71] found that ON (using the EHQ) can be predicted by endurance sport practice (sports with predominantly aerobic activity > 150 min/week) in Italian athletes (b = 2.407, 95% CI 0.27–4.54). Moreover, Yılmaz et al. [84] found higher orthorexic tendencies (ORTO-15) in participants who regularly performed physical exercises than in those diagnosed with obsessive-compulsive disorder (OCD) and in healthy individuals who did not perform physical exercises.
Orthorexia, Fear of COVID-19, and PA among University Students
We found the prevalence of orthorexia in 4.44% of university students, while criteria of a minimum of 150 min PA per week were reported in 47% of the sample. The prevalence of orthorexia among university students is within the range of 1-7%, which was found in some review studies [45,46], and is lower than in the previous study, which used TON-17 (5.5% in the general population) [47], but slightly higher than among Polish and Spanish students by using DOS (2.9% and 2.3%, respectively) [48]. The prevalence of ON was previously discussed as challenging to compare with other studies since various measurement tools have different cut-off criteria. Cross-cultural differences may also be important for it [43][44][45]. Di Renzo et al. [14] examined the impact of the early stage of the COVID-19 pandemic on eating habits and lifestyle changes among the Italian population. The research found that 15% of participants turned to farmers or organic food, purchasing fruits and vegetables. Increased interest in healthy food may lead, in some cases, to higher orthorexia risk. Indeed, recent studies indicate that symptoms of various eating disorders (e.g., anorexia, bulimia, excessive eating, emotional eating, orthorexia) increased during the lockdown [67][68][69].
The COVID-19 epidemic significantly impacted eating disorder (ED) patients, interfering with the recovery process [97]. Parsons et al. [98] suggested that the pandemic has impacted the experience of ED patients, the experience of service provision and the family situation. Therefore, support is necessary in various forms for people with eating disorders and their families. Rodgers et al. [67] suggested the disruptions to daily routines and constraints to outdoor activities may increase weight and shape concerns and negatively impact eating, exercise, and sleeping patterns, increasing ED risk and symptoms. In contrast, regularizing daily routines, such as healthy diet, sleep, personal hygiene, exercising, leisure/social activities, and practices associated with work or study, can maximize the efficacy to maintain an overall regular daily living and buffer the adverse impact of stress exposure on mental health [99].
Previous research conducted among university students during the COVID-19 pandemic showed the prevalence of sufficient PA level was 62.9% in Switzerland [23], 43% in Ukraine [24], and only 38% in Turkey [5]. These prevalence rates are lower than in similar research performed in university students during the pre-pandemic time [29][30][31][32]. However, university students who improved their dietary habits were also more likely to have healthier lifestyles in other areas, including a higher level of PA [13].
Research indicates the COVID-19 pandemic significantly affected lifestyle, particularly reducing physical activity and changing eating habits [10][11][12][13][14][15][16][17][18][19][20]. Sulejmani et al. [20] found that the weight gained during the lockdown in Kosovo was positively associated with a higher cooking frequency, lower meat and fish consumption, higher fast-food consumption, and a lack of physical activity. The study results from Iraqi Kurdistan showed that 12.0% of participants reported improvements in lifestyle, whereas 50.9% declared that their lifestyle deteriorated [16]. These negative changes included a decreased frequency of physical activity, changes in appetite (29.3% felt that their appetite increased, while it decreased in 14.3%), and weight gain (32.4%). The authors concluded that theirs was among the earliest studies showing the effect of COVID-19 on eating behavior and lifestyle changes [16]. Furthermore, significant decreases were also observed in the frequency of intake of rice, meat, poultry, fresh vegetables, fresh fruit, soybean products, and dairy products, while significant increases were found in wheat products, other staple foods, and preserved vegetables among Chinese youth [21].
Previous studies indicated that university students demonstrate a relatively low level of healthy behavior, which is related to a lack of parental control, lack of time, low health literacy, and frequent social gatherings, the latter being associated with excessive substance use and less sleep time [25][26][27][28]. Cena et al. [28] suggested that greater emphasis needs to be placed on improving the lifestyle of those university students who will be future healthcare workers.
On the other hand, health literacy was found to be a protective factor from fear of COVID-19 among Vietnamese medical students [63]. A negative association was found between fear of COVID and health-related behaviors, such as smoking and drinking alcohol [63]. Lower FCV-19S scores were presented in male participants, those of older age or in their last academic years, and those able to pay for medication. Fear of COVID-19 is substantially related to wellbeing. Latent profile analysis revealed that 46% of Turkish university students were classified into the sample with high fear of COVID-19 and medium psychological symptoms of stress, anxiety, and depression [65]. Low psychological symptoms and high mindfulness and resilience were found in 38% of participants. A high COVID-19 fear, psychological symptoms, and low mindfulness and resilience were classified among 16% of students. Yalçın et al. [65] evidenced that fear of COVID-19 was positively related to female gender, depression, anxiety, and stress, and negatively to life satisfaction, social support, mindfulness, and resilience. Furthermore, a study conducted among university students from Ecuador showed the mediating role of anxiety in the relationship between fear of COVID and depression [64]. Positive links were also found between fear of COVID-19, the personality trait of neuroticism, social networks use disorder, and smartphone use disorder among Chinese university students [62].
Limitations of the Study
Although this study identified significant evidence of the mediating effect of orthorexia on the association between PA and fear of COVID-19 in a sample of university students, the findings should be interpreted with caution due to the cross-sectional design. Further research should be aimed at performing a longitudinal repeated-measures study for matched samples. Furthermore, the sample represents one technical university from Poland. Therefore, it does not allow us to generalize the results of this study to the population of university students as a whole and the general Polish population. A cross-cultural study could be conducted to compare the present results with other samples from various countries. In addition, self-reported measures included in a survey may also be a source of potential bias. Future research may use observational, experimental, and psychophysiological methods to assess PA, orthorexia, and fear of COVID-19. Several problems as a potential source of bias are related to the online survey method, including self-selection bias, representation bias, and unknown response rates. Therefore, these studies should be repeated on a representative sample of university students in paper and pencil form. Moreover, alternative measures can be used to assess PA, orthorexia, and COVID-19 anxiety, which may increase the reliability of the research in the future. Finally, mental health was not analyzed in this study, so it is unclear to what extent the fear of COVID-19 was associated with depression, generalized anxiety disorder (GAD), specific phobias, obsessive-compulsive disorder (OCD), or post-traumatic stress disorder (PTSD). Future studies using the fear of COVID-19 scale should control participants' diagnosis and risk of mental disorders.
Conclusions
The study found a mechanism by which healthy lifestyle behaviors affect the fear of COVID-19 during the lockdown. These findings confirmed the indirect effects of orthorexia on the relationship between exercise and fear of COVID-19. As expected, healthy behavior related to PA and diet can decrease fear of COVID-19. However, physical activity was found here as a complex variable, and its various aspects may also have opposite effects on the fear of COVID-19. Although high PA decreases fear of COVID, exercise addiction and orthorexia tendencies can trigger an opposite process that limits its beneficial impact on the dependent variable. The addictive, obsessive, and compulsive behaviors associated with a healthy lifestyle can increase COVID-19 anxiety and decrease wellbeing. It is also possible that there is a reciprocal association between fear of COVID-19 and addictive behavior related to health. Rodgers et al. [67] suggested that fear of COVID-19 may increase ED symptoms, specifically related to health concerns due to the pandemic and social isolation, or by the pursuit of restrictive diets focused on increasing immunity. In addition, the disruptions to daily routines and constraints to outdoor activities may increase weight and shape concerns and negatively impact eating, exercise, and sleeping patterns, increasing ED risk and symptoms, including orthorexic restrictive eating. Evaluating and assessing all factors contributing to EDs across different cultural settings is essential to better understand the impact of the pandemic on ED risk and recovery. In conclusion, individuals prone to addictive behavior should increase their efforts to maintain their daily routine regardless of changing circumstances during the lockdown and following waves of the COVID-19 pandemic. In addition, some preventive or intervention strategies can help maintain wellbeing.
Cognitive restructuring (as a coping strategy) showed an inverse association with the students' fear of COVID-19 [60]. Therefore, clinical interventions could train effective coping strategies for daily stress to improve students' wellbeing. Developing programs for promoting healthy lifestyle behaviors among students is also recommended [100]. University institutions should adopt support mechanisms to alleviate psychological impacts on students during the COVID-19 pandemic. Changes in eating and exercise behaviors need to be acknowledged and monitored during the COVID-19 pandemic for potential long-term consequences [19]. It is crucial to detect orthorexia symptoms and high fear of COVID-19 levels early during the pandemic to modulate daily behaviors to prevent long-term detrimental consequences [69].
"year": 2021,
"sha1": "47ba5b46a045b0e64bbb172736852fb3932ad6e9",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc8584844?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "eaa273252b8271bf2ccb3efe7d1252001a10773a",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
p38 maintains E-cadherin expression by modulating TAK1–NF-κB during epithelial-to-mesenchymal transition
Epithelial-to-mesenchymal transition (EMT) of peritoneal mesothelial cells is a pathological process that occurs during peritoneal dialysis. EMT leads to peritoneal fibrosis, ultrafiltration failure and eventually to the discontinuation of therapy. Signaling pathways involved in mesothelial EMT are thus of great interest, but are mostly unknown. We used primary mesothelial cells from human omentum to analyze the role of the p38 MAPK signaling pathway in the induction of EMT. The use of specific inhibitors, a dominant-negative p38 mutant and lentiviral silencing of p38α demonstrated that p38 promotes E-cadherin expression both in untreated cells and in cells co-stimulated with the EMT-inducing stimuli transforming growth factor (TGF)-β1 and interleukin (IL)-1β. p38 inhibition also led to disorganization and downregulation of cytokeratin filaments and zonula occludens (ZO)-1, whereas expression of vimentin was increased. Analysis of transcription factors that repress E-cadherin expression showed that p38 blockade inhibited expression of Snail1 while increasing expression of Twist. Nuclear translocation and transcriptional activity of p65 NF-κB, an important inducer of EMT, was increased by p38 inhibition. Moreover, p38 inhibition increased the phosphorylation of TGF-β-activated kinase 1 (TAK1), NF-κB and IκBα. The effect of p38 inhibition on E-cadherin expression was rescued by modulating the TAK1–NF-κB pathway. Our results demonstrate that p38 maintains E-cadherin expression by suppressing TAK1–NF-κB signaling, thus impeding the induction of EMT in human primary mesothelial cells. This represents a novel role of p38 as a brake or ‘gatekeeper’ of EMT induction by maintaining E-cadherin levels.
Introduction
EMT is a complex, stepwise phenomenon that occurs during embryonic development and tumor progression (Thiery et al., 2009), and is also associated with chronic inflammatory and fibrogenic diseases affecting lung, liver and the peritoneum of patients undergoing peritoneal dialysis (Kalluri and Weinberg, 2009;Aroeira et al., 2007). During dialysis, the peritoneum is exposed to continuous inflammatory stimuli such as hyperosmotic, hyperglycemic and acidic dialysis solutions, as well as episodes of peritonitis and hemoperitoneum, which might cause acute and chronic inflammation and progressively lead to fibrosis, angiogenesis and hyalinizing vasculopathy. Our previous work demonstrated that effluent-derived mesothelial cells (MCs) from peritoneal dialysis patients show phenotypic changes, reminiscent of EMT, which are associated with the time of peritoneal dialysis treatment and with episodes of peritonitis or hemoperitoneum (Yanez-Mo et al., 2003). Moreover, the appearance of EMT signs correlates with structural and functional deterioration of the peritoneal membrane (Aroeira et al., 2007).
EMT is characterized by the disruption of intercellular junctions, replacement of apical-basolateral polarity with front-to-back polarity, and acquisition of migratory and invasive phenotypes.
Cells that have undergone EMT also acquire the capacity to produce extracellular matrix (ECM) components and a wide spectrum of inflammatory, fibrogenic and angiogenic factors. EMT is triggered by an interplay of extracellular signals, including components of the ECM, as well as soluble growth factors and cytokines, including members of the transforming growth factor (TGF)-β and fibroblast growth factor families, epidermal growth factor and hepatocyte growth factor (Thiery et al., 2009).
The molecular mechanisms controlling the establishment and progression of EMT appear to be multifold and cell-type specific. One of the key events in EMT is the disruption of cadherin junctions between epithelial cells, and a pivotal role is played by families of transcription factors that repress E-cadherin expression, such as the Snail, ZEB and basic helix-loop-helix (bHLH) families (Peinado et al., 2007). The expression of these transcription factors is controlled by a complex network of signaling molecules, including SMADs, integrin-linked kinase, phosphatidylinositol 3-kinase, mitogen-activated protein kinases (MAPKs), glycogen synthase kinase 3β, and nuclear factor κB (NF-κB) (Thiery et al., 2009; Zavadil and Bottinger, 2005).
MAPKs are serine/threonine kinases that play important roles in a vast array of pathophysiological processes. The family is divided into three main subfamilies: extracellular regulated kinase (ERK), Jun N-terminal kinase (JNK) and p38. All are characterized by the presence of a typical activation module and a conserved activation domain (Chang and Karin, 2001). ERK1 and ERK2 are activated by mitogenic stimuli, whereas JNK and p38, also called stress-activated protein kinases (SAPKs), are activated by environmental and genotoxic stresses (Wagner and Nebreda, 2009). Besides being a central mediator of the inflammatory and stress response, p38 plays an important role in non-inflammatory processes such as cell-cycle regulation and cell differentiation (Cuenda and Rousseau, 2007). Once activated, p38 phosphorylates a wide array of substrates in the cytoplasm and in the nucleus, thus regulating gene expression, cell cycle and cellular polarization. The induction or activation of phosphatases is an important mechanism of crosstalk between MAPKs, and allows p38 to regulate the activity of ERK and JNK (Junttila et al., 2008; Perdiguero et al., 2007). Studies into the roles of MAPK families in the genesis of EMT have produced conflicting results, due to the heterogeneity of the cellular models and the different experimental approaches used. There is compelling evidence that ERKs drive EMT in many experimental systems (Zavadil and Bottinger, 2005), and our previous work has demonstrated the role of ERKs in E-cadherin downregulation and EMT induction (Strippoli et al., 2008). However, the role of SAPKs is less studied, although some reports indicate that JNK is an EMT inducer, also in MCs (Alcorn et al., 2008; Liu, Q. et al., 2008). p38 appears to promote EMT during development and in tumors (Zavadil and Bottinger, 2005; Zohn et al., 2006; Liu, Y. et al., 2008); however, the role of p38 in EMT during chronic inflammatory disease has not been analyzed. Here, we demonstrate that p38 promotes E-cadherin expression by suppressing TGF-β-activated kinase 1 (TAK1)–NF-κB signaling, thus impeding the induction of EMT in human primary mesothelial cells.
Inhibition of p38 MAPK represses E-cadherin and cytokeratin expression in primary MCs
The main inducers of EMT in MCs in vivo are thought to be combinations of inflammatory and profibrotic cytokines, such as TGF-β1 and IL-1β. We have shown that this combination can induce a genuine EMT in primary omentum-derived MCs (Yanez-Mo et al., 2003) and described the role of ERK MAPK in EMT induction in MCs (Strippoli et al., 2008). To analyze the role of SAPKs in E-cadherin expression, we pretreated MCs with specific inhibitors of p38 and JNK before stimulation with a combination of TGF-β1 (0.5 ng/ml) and IL-1β (2 ng/ml) for 24 hours (T/I stimulation). These stimuli are able to induce a sustained activation of ERK (Strippoli et al., 2008) and p38 (supplementary material Fig. S1A). Moreover, p38 is stably activated in quiescent MCs and, unlike ERK, its activation levels are increased upon cellular confluency in MCs and in other experimental systems (supplementary material Fig. S1B) (Faust et al., 2005). As previously described, inhibition of ERK signaling with U0126 increased basal E-cadherin expression and limited its downregulation upon cytokine treatment (Fig. 1A) (Strippoli et al., 2008). On the other hand, treatment with the p38 inhibitor SB203580 markedly reduced E-cadherin expression in both untreated and cytokine-treated cells (Fig. 1A), whereas inhibition of JNK with SP600125 had some effect in limiting E-cadherin downregulation (Fig. 1A). The effects of p38 inhibition were confirmed by using BIRB 796 (Pargellis et al., 2002), a more specific p38 inhibitor (Fig. 1A). Similar results were obtained in MCs infected with a retroviral vector encoding p38αAGF, a non-phosphorylatable and inactive mutant form of p38α (Fig. 1B). Specific blockade of p38 activity with the p38αAGF construct was confirmed in cells treated with sodium arsenite, a p38 activator (Westermarck et al., 2001) (supplementary material Fig. S2). The reduction in E-cadherin expression upon p38 inhibition was also demonstrated by infecting primary omentum-derived MCs with two specific lentiviral vectors encoding small hairpin RNA (shRNA) specific for p38α (Fig. 1C), and lentiviral silencing of p38α expression led to the disappearance of E-cadherin from cell membrane junctions (Fig. 1D; supplementary material Fig. S3A). Inhibition of p38 with pharmacologic inhibitors (Fig. 1E) or by lentiviral-mediated shRNA silencing (Fig. 1F) also reduced the expression of E-cadherin-encoding mRNA. On the other hand, treatment of MCs with sodium arsenite led to blockade of T/I-induced E-cadherin downregulation (Fig. 2A). Confocal microscopy analysis showed increased membrane localization of E-cadherin upon treatment with sodium arsenite in cells stimulated with T/I (Fig. 2B; supplementary material Fig. S3B). These results strongly suggest that p38 promotes E-cadherin expression and plasma membrane localization in human primary MCs.
Pharmacological inhibition of p38 also reduced the expression of cytokeratin, another epithelial marker whose downregulation is often associated with EMT, and resulted in a disorganized cytokeratin skeleton (Fig. 3A,B). The expression of ZO-1, a component of tight junctions, was also reduced upon p38 inhibition (Fig. 3B). Confocal analysis showed an altered plasma membrane–cytosolic distribution of ZO-1 in cells treated with p38 inhibitors (Fig. 3C; supplementary material Fig. S3C). Treatment with p38 inhibitors in combination with T/I stimulation led to the appearance of cells with a fibroblastoid shape that no longer expressed ZO-1 at the plasma membrane cell edges (Fig. 3C, arrows). On the other hand, the expression of vimentin, a mesenchymal marker, increased upon p38 inhibition (Fig. 3B; supplementary material Fig. S4). These results suggest that p38 inhibition might oppose the expression of different epithelial proteins during EMT induction, while increasing levels of mesenchymal markers.
Blockade of p38 decreases expression of Snail1 and induces expression of Twist
We next analyzed the role of p38 in the expression of transcription factors known to directly regulate E-cadherin expression in our experimental conditions. We focused first on Snail1, which is a main regulator of E-cadherin expression and is upregulated upon stimulation with TGF-β1 plus IL-1β in our experimental system (Yanez-Mo et al., 2003). Stimulation of primary MCs with T/I induced a rapid increase in Snail1-encoding mRNA expression that was inhibited by pretreatment with SB203580 (Fig. 4A), and also by pretreatment with the ERK pathway inhibitor U0126, as previously reported (Strippoli et al., 2008). This effect was also reflected in reduced Snail1 protein expression detected by western blot (Fig. 4B) and by immunofluorescence (Fig. 4C). Interestingly, simultaneous pretreatment with SB203580 and U0126 caused a stronger inhibition of the induction of mRNA encoding Snail1. We analyzed whether this effect was due to an involvement of mitogen- and stress-activated kinases 1 and 2 (MSK1/2), which are activated by both ERK and p38 (Vermeulen et al., 2009). However, pretreatment with H89, an inhibitor of MSK1/2 and protein kinase A, ruled out this hypothesis (Fig. 4D). Because the reduced expression of Snail1 cannot account for the decreased E-cadherin upon p38 inhibition, we analyzed the effect of p38 inhibitors on the expression of the transcription factors Twist, ZEB1, ZEB2, E47 and Slug. Pretreatment with SB203580 or BIRB 796 had no significant effect on ZEB1, ZEB2, E47 and Slug, but increased the expression of the bHLH protein Twist (Fig. 4E). The role of p38 in Twist expression was also confirmed by p38 silencing (Fig. 4F). p38 thus has opposing actions on the expression of Snail1 and Twist in MCs.
p38 inhibition increases NF-κB nuclear translocation, DNA binding and transcriptional activity in MCs
NF-κB plays a major role in EMT induction in many experimental settings, including MCs (Huber et al., 2004; Strippoli et al., 2008; Solanas et al., 2008). Moreover, Dorsal, a Drosophila homolog of p65 NF-κB, has been shown to control the transcription of Snail and Twist during development and innate immunity in Drosophila (Furlong et al., 2001), indicating a role in the expression of Snail and bHLH factors; and p38 activity can, depending on the experimental conditions, enhance or limit NF-κB function (Orr et al., 2005; Gazel et al., 2008). Pretreatment of MCs with SB203580 or BIRB 796 increased the intensity of NF-κB nuclear staining induced by cytokine treatment (Fig. 5A). To analyze NF-κB interaction with specific DNA sequences upon p38 inhibition, we performed a p65 NF-κB binding assay (supplementary material Fig. S5) using MeT5A cells, an untransformed MC line widely used in peritoneal MC research (Rampino et al., 2001). BIRB 796 pretreatment moderately increased NF-κB binding to a specific probe both in unstimulated cells and upon T/I stimulation (Fig. 5B). To test the effect of p38 on NF-κB transcriptional activity, we transfected MeT5A cells with a luciferase reporter construct containing multiple NF-κB binding sites (KBF-luc). Pretreatment of these cells with BIRB 796 increased basal and cytokine-induced luciferase activity (Fig. 5C). This experiment was reproduced in MeT5A cells infected with the p38αAGF dominant-negative construct (Fig. 5D). These results strongly support a role for p38 in limiting NF-κB activity in MCs during EMT.
p38 inhibition enhances phosphorylation of TAK1 on Thr187 and of p65 NF-κB on Ser536, and reduces PP2A phosphatase activity in cytokine-stimulated MCs
We next examined the possible role of TAK1 in p38-regulated NF-κB activation. TAK1 plays a major role in the activation of NF-κB and MAPK pathways in response to TGF-β1 and IL-1β (Shim et al., 2005). Pharmacologic inhibition of p38 before T/I stimulation increased the phosphorylation of TAK1 at Thr187, which is necessary for TAK1 kinase activity (Singhirunnusorn et al., 2005), and also enhanced the phosphorylation of p65 NF-κB at Ser536 (Fig. 6A). These changes coincided with a reduction in the levels of the inhibitor of κB (IκBα), probably due to increased IκBα phosphorylation and subsequent degradation. Moreover, a time-course study showed a prolonged TAK1 phosphorylation in cells pretreated with BIRB 796 (Fig. 6B). Because okadaic acid, an inhibitor of serine/threonine phosphatases [especially protein phosphatase 2A, PP2A (Westermarck et al., 2001)], strongly increased TAK1 phosphorylation (Fig. 6B), we analyzed whether p38 inhibition might affect PP2A activity in this experimental system. As shown in Fig. 6C, p38 inhibition by BIRB 796 reduced PP2A activity in cells treated with T/I. Collectively, these results suggest that inhibition of p38 leads to the activation of the TAK1–NF-κB pathway, and that this effect might be mediated by inhibition of protein phosphatases, such as PP2A.
TAK1 and NF-κB mediate the effects of p38 inhibition on E-cadherin expression and on the acquisition of a spindle-like phenotype
To directly demonstrate the role of enhanced NF-κB signaling in the downregulation of E-cadherin induced by p38 inhibition, we infected MCs with a retroviral vector encoding an IκBα super-repressor, a non-degradable mutant form (Ser32Ala and Ser36Ala) of the repressor IκBα. The IκBα super-repressor rescued the enhanced downregulation of E-cadherin expression observed upon pharmacologic inhibition of p38 (Fig. 7A). Moreover, E-cadherin expression was also rescued by retroviral infection with TAK1 D175A, a catalytically inactive mutant of TAK1 that behaves as a dominant-negative mutant, and a similar result was obtained by pretreating MCs with a derivative of 5Z-7-oxozeaenol, a highly specific and potent inhibitor of TAK1 catalytic activity (Yao et al., 2007), both for protein and mRNA expression (Fig. 7B,C). Interestingly, 5Z-7-oxozeaenol inhibited the basal and cytokine-induced phosphorylation of p38 (Fig. 7B), supporting our hypothesis that p38 MAPK and the TAK1–NF-κB activation pathway are linked by a negative feedback loop. Treatment with 5Z-7-oxozeaenol also totally blocked the acquisition of a spindle-like phenotype, characteristic of EMT, upon cytokine stimulation in cells treated with p38 inhibitors (Fig. 8). These experiments demonstrate a causal link between p38 and NF-κB, through the modulation of TAK1 activity, in the control of E-cadherin downregulation and EMT induction in MCs.
Discussion
This study aimed to characterize the role of p38 MAPK in the EMT undergone by primary mesothelial cells, in particular in the control of E-cadherin expression. Our results demonstrate that p38 maintains E-cadherin levels in MCs, acting as a brake on cytokine-induced E-cadherin downregulation by modulating the TAK1–NF-κB activation pathway. To our knowledge, this is the first study to show a role of p38 in supporting E-cadherin levels, and it indicates a previously unknown role for p38 MAPK in impeding the progression of EMT. As demonstrated with different experimental approaches, inhibition of p38 activity led to downregulation of E-cadherin expression in MCs. RT-PCR experiments showed a parallel reduction of E-cadherin-encoding mRNA, suggesting that p38 acts in this experimental system at the level of mRNA expression. p38 inhibition was able to further reduce E-cadherin levels upon T/I treatment, which induces EMT in MCs (Yanez-Mo et al., 2003). We obtained opposite results by treating MCs with sodium arsenite, a known p38 activator. Hence, p38 activation counteracted E-cadherin downregulation and disappearance from the plasma membrane, as shown by western blot and confocal microscopy analysis.
Treatment with p38 inhibitors also induced a disassembly of the cytokeratin filament network and a reduction of cytokeratin levels. This phenomenon was slower (56 hours) than E-cadherin downregulation (24-48 hours). p38 has been reported to phosphorylate cytokeratin at Ser73, this event allowing the formation of keratin granules, or Mallory bodies, in hepatocytes (Wöll et al., 2007). In MCs, cytokeratin disassembly and downregulation might favour the acquisition of a more motile phenotype, sustained by the increase of other intermediate filaments such as vimentin (Yañez-Mo et al., 2003). Also, prolonged treatment with p38 inhibitors reduced the expression and cellular localization of ZO-1. In particular, the appearance after treatment with T/I plus p38 inhibitors of cells with a fibroblastoid shape, which no longer localize ZO-1 at cell-cell junctions, suggests that an EMT process is taking place in these experimental conditions.
On the other hand, p38 inhibition increased the expression of vimentin, a mesenchymal marker. Vimentin has been reported to be phosphorylated upon p38 activation, and recent studies demonstrate that the p38-hsp27 pathway affects vimentin expression and filament assembly (Liu et al., 2010).
We focused our study on the control mechanism of E-cadherin transcription by p38. We analyzed the expression of transcriptional factors relevant for E-cadherin downregulation, such as zinc finger proteins of the Snail, ZEB and bHLH families (Peinado et al., 2007). We found that p38 inhibition limited the expression of Snail1, a factor that is rapidly induced by T/I treatment and plays a major role in the induction of EMT in MCs (Strippoli et al., 2008). In this respect, p38 inhibition is similar, albeit less strong, to the inhibition operated by ERK, which is a major Snail inducer. This result does not account for the role of p38 in controlling E-cadherin levels. Besides Snail, E-cadherin repression could be regulated by a wide array of factors. We analyzed the expression of other E-cadherin repressors and found that p38 inhibition leads to an increase of a bHLH factor, Twist, whereas other transcription factors such as ZEB1, ZEB2, E47 and Slug were not significantly affected by p38 inhibition. The expression of Twist upon p38 inhibition might account for the downregulation of E-cadherin during the induction of EMT in MCs. Interestingly, Twist has been reported to regulate Snail1 expression (Smit et al., 2009). In Drosophila, Twist is induced by Dorsal, a NF-κB analog (Jiang et al., 1991). NF-κB is an important mediator of inflammation, has been shown to play a role in a tumor model of EMT, is emerging as a central player in many models of EMT, and controls EMT in MCs (Huber et al., 2004; Strippoli et al., 2008). Similarly to E-cadherin, NF-κB has also been demonstrated to inhibit cytokeratin expression in mammary epithelial cells (Chua et al., 2007). Thus, we hypothesize that p38 might control E-cadherin and cytokeratin levels through its effect on NF-κB activity.
[Displaced figure legend: Snail1 (red) and E-cadherin (green) immunofluorescence, nuclei stained with Hoechst 33342; effect of MSK1/2 inhibition (H89, or SB203580 plus U0126) on Snail1 mRNA; mRNA expression of Twist, Slug, ZEB2, E47 and ZEB1 upon p38 inhibition; effect of p38 shRNA silencing on Twist expression; all quantitative RT-PCR data normalized to histone H3-encoding mRNA.]
We investigated the relationship between p38 and NF-κB in our experimental system. We found that p38 inhibition led to increased NF-κB nuclear translocation, binding to specific DNA sequences and transcriptional activity. These effects were seen both at basal levels and in cells stimulated with T/I. Depending on the stimuli and the experimental models, p38 can either activate or inhibit NF-κB transcriptional activity. Various stimuli known to induce p38 activation, such as salicylate, sorbitol, arsenite and UV radiation, can induce NF-κB inhibition (Orr et al., 2005; Ivanov, 2000).
Interestingly, Twist plays a role in a major negative feedback for NF-κB activity. Twist2−/− mice show uncontrolled inflammation due to aberrant NF-κB activity (Sosic et al., 2003). We therefore studied how p38 inhibition might lead to increased NF-κB activity. We focused on TAK1, a MAPK kinase kinase rapidly induced by both IL-1β and TGF-β1 and playing a role in the activation of p38, JNK and NF-κB signaling (Shim et al., 2005). Pretreatment with p38 inhibitors increased TAK1 phosphorylation at Thr187 induced by T/I. This event correlated with increased phosphorylation of NF-κB at Ser536, which is linked to NF-κB activation, and with increased IκB phosphorylation and degradation. All these events might account for increased NF-κB activity in cells pretreated with p38 inhibitors. TAK1 controls the NF-κB pathway upon TNF stimulation by activating the high-molecular-weight IκB kinase (IKK) complex, which phosphorylates IκB at Ser32 and Ser36 and NF-κB at Ser536 (Sakurai et al., 2003). Moreover, TAK1 has recently been demonstrated to mediate TGF-β1-induced NF-κB effects in a tumor model of EMT (Neil and Schiemann, 2008). Our results thus suggest that the effects of p38 inhibition on NF-κB are related to increased TAK1 activity. Because TAK1 phosphorylation is increased upon treatment with okadaic acid, a serine/threonine phosphatase inhibitor, we hypothesized that p38 might regulate TAK1 activity via this route. In agreement with this hypothesis, we found that treatment with BIRB 796 markedly inhibits PP2A activity in MCs stimulated with T/I. It has already been demonstrated in other experimental settings that p38 coimmunoprecipitates with PP2A and regulates its activity in response to diverse stimuli, including TNF and sodium arsenite (Grethe et al.; Avdi et al., 2002; Westermarck et al., 2001). Moreover, PP2A is stably associated with TAK1 and negatively regulates its activation by TGF-β1 (Kim, S. I. et al., 2008).
[Displaced figure legend: IκB and TAK1 levels analyzed to confirm overexpression, with tubulin as loading control; MCs pretreated with the TAK1 inhibitor 9-epimer-11,12-dihydro-(5Z)-7-oxozeanol and with p38 inhibitors before T/I stimulation, analyzed for E-cadherin and phospho-p38 by western blot (with densitometry) and for E-cadherin-encoding mRNA by quantitative RT-PCR normalized to histone H3; cell elongation quantified as the elliptical factor from MetaMorph masks of phase-contrast images; bars represent means + s.e.m. of three independent experiments; *P<0.05 for DMSO-treated cells compared with cells treated with SB203580 or BIRB 796.]
Alternatively, p38 might affect TAK1 activity through the phosphorylation of the TAK1 subunit TAB1, as proposed by Cheung and co-workers (Cheung et al., 2003).
Our analysis of TAK1-NF-κB signaling showed that inhibition of the TAK1-NF-κB pathway rescues the downregulation of E-cadherin induced by p38 inhibition. These results are summarized in the model shown in Fig. 9. TAK1 and NF-κB play a major role, according to our results, in mediating the effect of p38 on E-cadherin expression; however, we cannot exclude TAK1-NF-κB-independent effects of p38 on E-cadherin in our experimental system. In this regard, TAK1 would activate p38, which in turn would trigger a negative feedback loop on the TAK1-NF-κB pathway to control excessive EMT.
Our results show that p38 promotes E-cadherin expression in MCs both under basal conditions and upon cytokine treatment. However, the difference between basal and cytokine-stimulated conditions in this experimental system might be simply a quantitative matter, because MCs constitutively produce low levels of TGF-β1 (Loureiro et al., 2010) that might favor the transition towards a mesenchymal phenotype. Because the decision between epithelial and mesenchymal fate is the outcome of a balance of numerous extracellular signals (Kalluri and Weinberg, 2009), we hypothesize that basal p38 activity in untreated MCs might contribute to the maintenance of the epithelial state, helping to prevent excessive transdifferentiation when the epithelium is exposed to inflammatory stimuli.
In this regard, unlike ERK, p38 activity is increased in confluent cultures with respect to dispersed cells (Houde et al., 2001;Faust et al., 2005); this is a particularly interesting finding because cell scattering is characteristic of EMT. We confirmed that p38 is activated in MCs under resting conditions, and that its activation increases with confluency. p38-mediated inhibition of EMT induction might reflect findings in other experimental systems, which show that p38 controls cell differentiation and might contribute to proliferation arrest and tumor suppression (Houde et al., 2001;Perdiguero et al., 2007;Wagner and Nebreda, 2009). Moreover, a recent study identifies p38 at a crossroad of proinflammatory and anti-inflammatory responses (Kim, C. et al., 2008). On the other hand, p38 activity induced by pro-inflammatory stimuli, concomitant with other pathways (such as JNK and ERK), might be responsible for the effects linked to chronic inflammation and EMT, such as increased cell motility and invasion or extracellular matrix protein production, as demonstrated in other experimental systems (Bakin et al., 2002;Bhowmick et al., 2001). In contrast to our results, recent studies (Zohn et al., 2006;Liu, Y. et al., 2008) reported a downregulation of E-cadherin caused by p38 activation during mouse gastrulation and in lung morphogenesis. Interestingly, in these and other studies, mainly performed in tumoral or developmental models, E-cadherin is not regulated by p38 at a transcriptional level. These discrepancies with our results can be explained by the heterogeneity of different experimental approaches and EMT models.
Overall, a deeper understanding of the role of p38 in a chronic inflammatory model (such as the EMT of MCs) might be useful to better understand the role of this kinase in inflammation and other pathophysiological systems, in order to design more efficacious and selective anti-inflammatory strategies.
Isolation and culture of mesothelial cells
Human mesothelial cells (MCs) were obtained by digestion of omentum samples from patients undergoing unrelated abdominal surgery (Stylianou et al., 1990). Samples were digested in 0.125% trypsin containing 0.01% EDTA. Cells were cultured in Earle's M199 medium supplemented with 20% fetal calf serum, 50 U/ml penicillin, 50 µg/ml streptomycin, and 2% Biogro-2 (containing insulin, transferrin, ethanolamine and putrescine) (Biological Industries, Beit Haemek, Israel). To induce EMT, MCs were treated with a combination of human recombinant TGF-β1 (0.5 ng/ml) and IL-1β (2 ng/ml) (R&D Systems, Minneapolis, MN) as described previously (Yanez-Mo et al., 2003). Although TGF-β1 and IL-1β both induce EMT phenotypic changes when administered separately, combined stimulation is necessary to induce genuine EMT. The cytokine doses used were within the range of those detected in peritoneal dialysis fluids from peritonitis patients (Lai et al., 2000) and are similar to those used in previous studies (Yanez-Mo et al., 2003; Yang et al., 1999). The human mesothelial cell line MeT-5A (ATCC, Rockville, MD) was cultured in Earle's M199 medium as above and stimulated with the same doses of TGF-β1 and IL-1β.
Equal amounts of protein were resolved by SDS-PAGE. Proteins were transferred to nitrocellulose membranes (Amersham Life Sciences, Little Chalfont, UK) and probed with antibodies using standard procedures. Nitrocellulose-bound antibodies were detected by chemiluminescence with ECL (Amersham Life Sciences).
Confocal microscopy and immunofluorescence
Cells were fixed for 20 minutes in 3% formaldehyde in PBS, permeabilized in PBS containing 0.2% Triton X-100 for 5 minutes, and blocked with 2% BSA for 20 minutes. For E-cadherin staining, cells were fixed and permeabilized in cold methanol for 5 minutes. Secondary antibodies (conjugated to Alexa Fluor 647, Alexa Fluor 488 and Alexa Fluor 541) and Hoechst 33342 were from Pierce Chemical (Rockford, IL). Confocal images were acquired with a Leica SP5 Confocal Microscope. The elliptical factor (length/width) of cells as a measure of elongation was determined by using the MetaMorph software (Universal Imaging). For the membrane and cytoplasm measurements, fluorescence intensity profiles were analyzed with Leica LAS-AF software and represented graphically with Prism Graph-Pad 5.0.
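For readers without access to MetaMorph, an equivalent elongation readout can be approximated from a segmented image; the sketch below is only an illustration (it is not the pipeline used in the paper), and it assumes the cells have already been segmented into a binary mask.

```python
# Minimal sketch (not the MetaMorph procedure): estimate an "elliptical factor"
# (length/width) per cell from a boolean segmentation mask. All names are illustrative.
import numpy as np
from skimage.measure import label, regionprops

def elliptical_factors(cell_mask: np.ndarray):
    """Return length/width ratios of the ellipse fitted to each labeled cell."""
    labeled = label(cell_mask)                 # connected components = individual cells
    factors = []
    for region in regionprops(labeled):
        if region.minor_axis_length > 0:       # skip degenerate (e.g. single-pixel) regions
            factors.append(region.major_axis_length / region.minor_axis_length)
    return factors

# Example with a synthetic elongated blob (a 20 x 60 rectangle, factor ~3):
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 20:80] = True
print(np.round(elliptical_factors(mask), 2))
```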
Infection of MeT5A cells and MCs with retroviral and lentiviral vectors
MCs were infected with pRV-IRES-CopGreen retroviral vectors (Genetrix, Madrid, Spain) encoding p38AGF, a dominant-negative mutant of p38, or wild-type (WT) p38. MCs were infected with a super-repressor IκB mutant harboring the mutations Ser32Ala and Ser36Ala. The IκB super-repressor blocks NF-κB nuclear translocation induced by TGF-β1 and IL-1β, whereas empty vector has no effect (Strippoli et al., 2008). MCs were also infected with a catalytically inactive TAK1 mutant (Asp175Ala, from Phil Cohen's laboratory, University of Dundee, Scotland), or with empty pRV-IRES-CopGreen vector as control. Twenty-four hours before infection, MCs were seeded into six-well plates (2×10⁵ cells per well), and retrovirus-producing 293T cells were seeded at 3×10⁶ cells per 10-cm plate. For infection, 293T cell supernatants were filtered through a 0.45-µm filter (Whatman, Dassel, Germany), and 5 µg/ml polybrene (Sigma-Aldrich, St Louis, Missouri) was added to the filtrate. Thereafter, medium was removed from the MCs and replaced with 293T cell supernatants containing the retrovirus. This process was repeated twice at 24-hour intervals. Twenty-four hours after the last exposure to retrovirus, infection efficiency was monitored by fluorescence microscopy (Carl Zeiss, Standort Göttingen, Germany) or FACS analysis (BD FACS Canto, Becton-Dickinson Laboratories, Mountain View, CA). Alternatively, MCs were infected with virus encoding shRNA specific for p38, which was a kind gift from Angel Nebreda, CNIO, Madrid, Spain. MC infection was performed according to the manufacturer's protocols.
RNA interference
MCs were grown on 48-well plates (4×10⁴ cells per well) for 24 hours and transfected twice with a 72-hour interval using Dharmafect 1 (0.8 µl per well) and the ON-TARGETplus SMARTpool directed against human Twist1 (60 pmol per well) or a nonspecific control siRNA (Dharmacon, Lafayette, CO). At 24 hours after the last transfection, cells were treated as indicated and RNA was extracted 48 hours later. Knockdown efficiency and E-cadherin levels were determined by quantitative PCR (see above).
NF-κB p65 DNA binding assay
MeT5A cells were grown to confluence in 10-cm dishes and treated as indicated. Cells were scraped in 1 ml of cold complete medium, pelleted and washed with cold PBS. Nuclear extracts were prepared using the NE-PER nuclear and cytoplasmic extraction reagents (Thermo Scientific, Rockford, IL) and protein concentration determined with the Bio-Rad protein assay (Bio-Rad Laboratories, Munich, Germany). Nuclear extracts (7 µg) were used to determine p65 DNA binding activity with the NF-κB p65 transcription factor kit (Thermo Scientific) according to the manufacturer's instructions. The chemiluminescent signal was measured with the GloMax-Multi Microplate Multimode Reader (Promega).
Cell transfection and luciferase assays
NF-κB transcriptional activity was measured by transient transfection of MeT5A cells with the KBF-luc reporter plasmid and subsequent luciferase activity assay (Castellanos et al., 1997). Briefly, 2×10⁵ cells were transfected with 2 µg of the KBF-luc reporter plasmid together with 500 ng of the reporter plasmid pRL-null, which bears a promoterless Renilla luciferase gene (Promega, Madison, WI). Transfections were performed by incubating cells for 4 hours with a mixture of DNA and Lipofectamine 2000 at a ratio of 1:2.5 (Invitrogen, Carlsbad, CA) in serum-free medium. After transfection, cells were pretreated overnight with vehicle (DMSO) or U0126 (20 µM). Cells were then stimulated with T/I for the times indicated.
Luciferase activity was measured with the Dual-Luciferase Reporter Assay System (Promega) according to the manufacturer's instructions and determined in a Sirius Single Tube Luminometer (Berthold Detection Systems, Pforzheim, Germany). All experiments were performed in triplicate.
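The arithmetic behind a dual-luciferase readout is simple and worth spelling out; the numbers below are invented for illustration only, and the firefly-over-Renilla normalization followed by fold change over the unstimulated control is the usual convention rather than a step stated explicitly by the authors.

```python
# Illustrative only (made-up counts): divide the firefly (KBF-luc, NF-kB-driven) signal by
# the Renilla (pRL-null) signal of the same well to correct for transfection efficiency,
# then express each condition as fold change over the unstimulated control.
raw = {
    # condition: (firefly_counts, renilla_counts), one representative triplicate mean
    "DMSO":       (12_000, 40_000),
    "DMSO + T/I": (55_000, 38_000),
}

normalized = {k: ff / rl for k, (ff, rl) in raw.items()}
control = normalized["DMSO"]
fold_change = {k: v / control for k, v in normalized.items()}
print(fold_change)   # e.g. {'DMSO': 1.0, 'DMSO + T/I': ~4.8}
```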
Statistical analysis
Statistical significance was determined with a Student's t-test (unpaired t-test) that was performed using Graph pad 4.0 software, where P values of <0.05 were considered significant. | 2017-06-26T09:51:53.368Z | 2010-12-15T00:00:00.000 | {
"year": 2010,
"sha1": "b7b664c5296265fd76f8139ac56f065781294b56",
"oa_license": "CCBY",
"oa_url": "http://jcs.biologists.org/content/123/24/4321.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b7b664c5296265fd76f8139ac56f065781294b56",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
55422570 | pes2o/s2orc | v3-fos-license | Quantum Cheshire Cats
In this paper we present a quantum Cheshire Cat. In a pre- and post-selected experiment we find the Cat in one place, and its grin in another. The Cat is a photon, while the grin is its circular polarization.
I. INTRODUCTION
'All right,' said the Cat; and this time it vanished quite slowly, beginning with the end of the tail, and ending with the grin, which remained some time after the rest of it had gone.
'Well! I've often seen a cat without a grin,' thought Alice, 'but a grin without a cat! It's the most curious thing I ever saw in my life!' No wonder Alice is surprised. In real life, assuming that cats do indeed grin, the grin is a property of the cat -it makes no sense to think of a grin without a cat. And this goes for almost all physical properties. Polarization is a property of photons; it makes no sense to have polarization without a photon. Yet, as we will show here, in the curious way of quantum mechanics, photon polarization may exist where there is no photon at all. At least this is the story that quantum mechanics tells via measurements on a pre-and post-selected ensemble.
II. CHESHIRE CATS
In the following experiment, the "cat" is a photon in two possible locations, |L⟩ and |R⟩. The "grin" corresponds to its circular polarization state. The two basis states for circular polarization are |+⟩ and |−⟩. In terms of the horizontal and vertical linear polarization states |H⟩ and |V⟩, respectively, they are |+⟩ = (|H⟩ + i|V⟩)/√2 and |−⟩ = (|H⟩ − i|V⟩)/√2. Suppose that the photon is initially prepared in a state |Ψ⟩, which is a superposition of the two locations |L⟩ and |R⟩ and horizontally polarized. A simple way to prepare such a state is to send a horizontally polarized photon towards a 50:50 beam splitter, as depicted in Fig. 1. The state after the beam splitter is |Ψ⟩, with |L⟩ now denoting the left arm and |R⟩ the right arm; the reflected beam acquires a relative phase factor i. We would like to post-select the state |Φ⟩ = (|L⟩|H⟩ + |R⟩|V⟩)/√2. In other words, we would like to perform a final measurement that gives the answer "yes" with certainty whenever the system is in the state |Φ⟩ and the answer "no", again with certainty, whenever the state is orthogonal to |Φ⟩. We will then consider only those cases in which the answer "yes" is obtained. Such a measurement can be experimentally realized in an optics setup, as depicted in Fig. 1. The measuring device comprises a half-wave plate (HWP), a phase shifter (PS), a beam splitter (BS2), a polarizing beam splitter (PBS) and three photon detectors (Di). The HWP is chosen such that |H⟩ ↔ |V⟩. The PS is chosen to add a phase factor i on the beam, and BS2 is chosen such that if a photon in the state (|L⟩ + i|R⟩)/√2 impinges upon it, then it will certainly emerge from the left port (i.e. the detector D2 will certainly not click). The PBS is chosen such that |H⟩ is transmitted and |V⟩ is reflected. Given these choices, if the state immediately before the HWP (i.e. the state of the photon entering the measuring device) is |Φ⟩, then D1 will click with certainty. A photon in any state orthogonal to |Φ⟩ will end up either at detector D2 or at D3. We thus want to consider the experimental arrangement depicted in Fig. 1, which is nothing more than a modified Mach-Zehnder interferometer, equilibrated such that in the absence of the HWP and PS, a photon entering BS1 from the left will certainly emerge from BS2 towards the right.
We will focus only on those cases in which detector D1 clicks. Inside the interferometer (i.e. in between the regions denoted by pre- and post-selection in Fig. 1), the photon is thus described by the pre-selected state |Ψ⟩ and the post-selected state |Φ⟩. It is the properties of the photon in these pre- and post-selected states that are the focus of this paper. Let us first ask which way the photon went inside the interferometer. We will show that, given the pre- and post-selection, with certainty the photon went through the left arm. For suppose that we check the location of the photon by inserting photon detectors into the arms of the interferometer. Let them be non-demolition detectors in the sense that they do not absorb the photon and do not alter its polarization. In mathematical terms, these detectors measure the projection operators Π_L = |L⟩⟨L| and Π_R = |R⟩⟨R|. Suppose first that we insert one such detector into the right arm. Is it possible to find the photon there? No, it is not. If we find a photon there, then the state after this measurement will be |R⟩|H⟩, which is orthogonal to the post-selected state |Φ⟩ = (|L⟩|H⟩ + |R⟩|V⟩)/√2. Hence the post-selection could not have succeeded in this case (i.e. detector D1 could not have clicked). Thus the non-demolition measurement in the right arm never finds the photon there, indicating that the photon must have gone through the left arm. If instead we perform a non-demolition measurement in the left arm, given the post-selection, it will always indicate that the photon is there. We can even perform non-demolition measurements in both arms simultaneously, and they will always indicate that the photon was in the left arm. The Cat is therefore in the left arm. But can we find its grin elsewhere?
Suppose instead of the measurements above, we place a polarization detector in the right arm. Since we know the photon is never in the right arm, surely no 'grin' can ever be found there, and this detector should never click. Surprisingly however, the polarization detector in the right arm does click. We will discover that there is angular momentum in the right arm.
Formally, a polarization detector in the right arm measures the observable σ_z^(R) = Π_R σ_z, where σ_z = |+⟩⟨+| − |−⟩⟨−| is the circular-polarization operator. The observable σ_z^(R) has three eigenvalues, +1, −1 and 0, corresponding to the eigenstates |R⟩|+⟩ and |R⟩|−⟩ and to the degenerate subspace spanned by |L⟩|+⟩ and |L⟩|−⟩, respectively.
If a photon ends up at D1, then the corresponding measurement of photon position Π_R never finds that photon in the right arm. Yet, surprisingly, a measurement of σ_z^(R) may sometimes find angular momentum there. Indeed, the conditional probability of σ_z^(R) yielding the result +1, given that the photon ends up at D1, is non-zero. Similarly, there is a non-zero conditional probability that the measurement will find angular momentum −1 in the right arm.
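These statements can be checked with a few lines of linear algebra. In the sketch below, the post-selected state is the one given in the text, while the pre-selected state is assumed to be |Ψ⟩ = (i|L⟩ + |R⟩)|H⟩/√2 (its explicit form is not reproduced above; this phase convention is our assumption, chosen because it reproduces the statements in the text).

```python
# Pre-/post-selection bookkeeping for the optical Cheshire Cat (illustrative sketch).
import numpy as np

L, R = np.array([1, 0]), np.array([0, 1])          # which-path basis
H, V = np.array([1, 0]), np.array([0, 1])          # linear polarization basis
plus, minus = (H + 1j * V) / np.sqrt(2), (H - 1j * V) / np.sqrt(2)

kron = np.kron
psi = kron(1j * L + R, H) / np.sqrt(2)              # assumed pre-selected state |Psi>
phi = (kron(L, H) + kron(R, V)) / np.sqrt(2)        # post-selected state |Phi> from the text

def joint_prob(projector, pre, post):
    """P(projector outcome occurs, then post-selection on |post> succeeds)."""
    collapsed = projector @ pre
    return abs(np.vdot(post, collapsed)) ** 2

Pi_R = kron(np.outer(R, R), np.eye(2))                          # photon found in the right arm
P_plus  = np.outer(kron(R, plus),  kron(R, plus).conj())        # sigma_z^(R) outcome +1
P_minus = np.outer(kron(R, minus), kron(R, minus).conj())       # sigma_z^(R) outcome -1
P_zero  = kron(np.outer(L, L), np.eye(2))                       # sigma_z^(R) outcome 0 subspace

print(joint_prob(Pi_R, psi, phi))                   # 0.0: never found in the right arm
p_plus, p_minus = joint_prob(P_plus, psi, phi), joint_prob(P_minus, psi, phi)
total = p_plus + p_minus + joint_prob(P_zero, psi, phi)
print(p_plus / total, p_minus / total)              # each ~1/6: angular momentum is seen there
```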
We seem to see what Alice saw-a grin without a cat! We know with certainty that the photon went through the left arm, yet we find angular momentum in the right arm.
But could this conclusion really be right? It is, ultimately, open to the following criticism. We never actually simultaneously measured the location and the angular momentum. Indeed, our conclusions above were reached by measuring location on some photons and angular momentum on others. The immediate implication is that all we have here is a paradox of counterfactual reasoning, in a class with other such paradoxes in quantum mechanics, e.g. [1,2]. That is, we have made statements about where the photon is, and about where the angular momentum is, that are paradoxical as long as we don't actually perform all the relevant measurements simultaneously. But let us see what actually happens if we try to measure the location and the angular momentum at the same time.
Suppose that we simultaneously insert detectors for Π_R, Π_L and σ_z^(R). (Since Π_R and σ_z^(R) commute, their ordering in the right arm does not matter.) What we see now is that whenever σ_z^(R) indicates net angular momentum, Π_R yields the value 1, indicating that the photon in fact went through the right arm; whenever σ_z^(R) does not indicate angular momentum, Π_R yields the value 0, indicating that the photon went through the left arm. The paradox thus evaporates. This is the standard resolution of such counterfactual paradoxes in quantum mechanics: measurements disturb each other (footnote 1), therefore the conclusions drawn from separate measurements do not hold when measurements are performed simultaneously. Hence one is tempted to conclude that the paradox is nothing other than an optical illusion. In the next section, however, we will show that there really is a Cheshire Cat and it is not an optical illusion. But doing so requires a subtler method.
III. WEAK MEASUREMENTS
We have reached the central claim of this paper. Ending the analysis with the resolution just presented, which is the common way of resolving quantum paradoxes, would be premature and would miss the essence. As discussed above, the disturbance due to intermediate measurements is a standard rationale for dismissing such paradoxes. However, there is always a trade-off between disturbance and precision . That is, the disturbance due to measurements can be limited, at the price of accepting a certain level of imprecision (i.e. errors) in the measurement. It is then interesting to see what such limited-disturbance measurements -which are performed simultaneously -can tell about our paradox. As we will now show, by adopting this strategy we regain the paradox that was prematurely lost.
We shall first present the specific scheme we have in mind to perform a limited-precision, limited-disturbance measurement. This scheme is very similar to those used in certain optical beam experiments presented in Refs. [3,4], and can be performed with present-day technology.
The detectors measuring Π_L, Π_R, σ_z^(L) and σ_z^(R) would be realized by replacing the detector D1 with a CCD camera, with the vertical and horizontal displacements of the beam serving as measurement pointers. For example, a flat glass sheet in the left arm, with its normal tilted at a small angle above the beam axis, displaces upwards a photon passing through it, by a small amount that we can define to be one unit δ of displacement. Then observing such an upward displacement of the beam in the CCD camera will indicate photons passing through the left arm. Similarly, a measurement of angular momentum could be an optical element producing a horizontal displacement of the beam in accordance with photon polarization.
The beam will have a characteristic cross-sectional width or "waist" ∆. The precision of the measurement and the degree to which it disturbs the photon depend upon the magnitude of the displacement δ relative to the width ∆. When δ is much larger than ∆, the measurement is precise; we can say with certainty, for a given photon, whether it is displaced or not. At the same time the disturbance of the photon is large, because the location of the beam becomes entangled with what is measured. By contrast, ∆ ≫ δ characterizes the so-called weak measurement regime, or in other words the regime of limited disturbance. In this regime, any given photon does not reveal whether the beam has been displaced or not; but repeating the measurement N times reduces the uncertainty in the beam displacement to approximately ∆/√N. Thus the displacement can be detected to any desired accuracy by repeating the measurement sufficiently many times.
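A quick simulation makes the trade-off concrete; the numbers below are toy values (not taken from the paper) and simply illustrate that the sample mean resolves a displacement δ once ∆/√N falls well below δ.

```python
# Toy illustration of the weak-measurement trade-off: per-photon displacement delta is
# buried in a beam of waist Delta, but the average over N detections recovers it.
import numpy as np

delta = 0.01          # displacement unit from the tilted glass sheet (arbitrary units)
Delta = 1.0           # beam waist, 100x larger: weak (limited-disturbance) regime
rng = np.random.default_rng(0)

for N in (10, 10_000, 1_000_000):
    hits = rng.normal(loc=delta, scale=Delta, size=N)    # detected positions on the CCD
    print(N, hits.mean(), Delta / np.sqrt(N))             # mean -> delta once Delta/sqrt(N) << delta
```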
In the context of pre-and post-selection, the above strategy is a specific implementation of a general measurement strategy known as weak measurements, and the results they yield, the so called weak values [5,6], have already given new insight into many paradoxical situations in quantum mechanics [7][8][9].
In more detail, denoting by A any operator measured as above, or in any weak measurement scheme, it is well known from standard results that the average shift of the pointer (in the above, the average shift of the beam) will be proportional to the weak value of A, A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩, where |ψ⟩ is the pre-selected state and |φ⟩ is the post-selected state. Again, to repeat, the value A_w is what the pointer of the measuring device indicates when A is measured, with a measurement interaction that disturbs the measured system only weakly, on an ensemble of systems all pre-selected in the state |ψ⟩ and post-selected in the state |φ⟩. Moreover, A_w is the effective value of the observable A for any system interacting with this ensemble, as long as the interaction is weak [10].
1 Surprisingly, when measured on a pre- and post-selected ensemble, even commuting observables such as Π_R and σ_z^(R) may disturb each other.
Let us now consider the story of our setup, as told by the weak values. The weak values are (Π_L)_w = 1, (Π_R)_w = 0, (σ_z^(L))_w = 0 and (σ_z^(R))_w = 1, where we have defined σ_z^(L) for the left arm in analogy with σ_z^(R) for the right arm. Thus the story as told by the weak values is that the photon is in the left arm (since (Π_L)_w = 1 and (Π_R)_w = 0) while the angular momentum is in the right arm (since (σ_z^(L))_w = 0 and (σ_z^(R))_w = 1). The crucial point is that in principle all of these values apply simultaneously, since all of the weak measurements can be performed at the same time. In our specific scheme we can only measure any two at the same time, for example (σ_z^(R))_w and (Π_R)_w, which indicate that there is a grin but no cat in the right arm. Alternatively we can measure any other pair and all results will be consistent with the paradox. We have finally found our Cheshire Cat.
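These four weak values follow directly from the formula A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩ quoted above; the sketch below verifies them numerically with the same state conventions as before (the relative phase i on |L⟩ in the pre-selected state is our assumption, chosen to match the quoted values).

```python
# Numerical check of the four weak values for the optical Cheshire Cat (illustrative).
import numpy as np

L, R = np.array([1, 0]), np.array([0, 1])
H, V = np.array([1, 0]), np.array([0, 1])
plus, minus = (H + 1j * V) / np.sqrt(2), (H - 1j * V) / np.sqrt(2)

psi = np.kron(1j * L + R, H) / np.sqrt(2)           # assumed pre-selected state
phi = (np.kron(L, H) + np.kron(R, V)) / np.sqrt(2)  # post-selected state from the text

sigma_z = np.outer(plus, plus.conj()) - np.outer(minus, minus.conj())  # circular polarization
Pi_L, Pi_R = np.outer(L, L), np.outer(R, R)

def weak_value(A):
    return np.vdot(phi, A @ psi) / np.vdot(phi, psi)

for name, A in [("Pi_L",        np.kron(Pi_L, np.eye(2))),
                ("Pi_R",        np.kron(Pi_R, np.eye(2))),
                ("sigma_z^(L)", np.kron(Pi_L, sigma_z)),
                ("sigma_z^(R)", np.kron(Pi_R, sigma_z))]:
    print(name, np.round(weak_value(A), 10))        # 1, 0, 0, 1 respectively
```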
IV. N -ELECTRON CHESHIRE CAT
So far, we have considered an optical realization of the Cheshire Cat. The reason is that the proposed optical experiment can be implemented with current technology, as we hope it soon will be. A drawback of this particular optical realization, however, is that it reveals the Cheshire Cat only as an average over many repetitions of the experiment. In this section we describe an alternative setup, one which is beyond the reach of current technology, but which reveals the Cheshire Cat in a stronger sense. In this setup the Cheshire Cat is seen only rarely-yet when it is seen, it is seen unambiguously and not as an average.
Consider N (distinguishable) electrons, each prepared in a superposition of locations |L⟩ and |R⟩ in two boxes, with all the electrons polarized along the x-axis; call this pre-selected state |Ψ_N⟩. Let us imagine that we can perform a measurement which allows us to post-select the electrons in a suitable state |Φ_N⟩. The probability of post-selecting this state is |⟨Φ_N|Ψ_N⟩|² = 4^−N, which is exponentially small in the number of electrons. But when this post-selection succeeds it yields a Cheshire Cat that can be detected and measured with high precision. Indeed, let the "Cat" itself, its position, be defined by the mass of the electrons. It will be possible to perform a measurement that is both weak (i.e. does not appreciably disturb the pre- and post-selected states) and precise (it will yield the number of electrons in each box up to an uncertainty of √N, which is insignificant relative to N when N is large) [6]. Such a measurement, e.g. by means of a gravitational probe, will find all N electrons (up to uncertainty √N) residing in the left box: between the pre- and post-selected states |Ψ_N⟩ and |Φ_N⟩, the weak value of Π_L is N, while the weak value of Π_R is 0. However, the "grin", which here could be the magnetic field in the z-direction, will be equally measurable, weakly and precisely, by a suitable magnetic probe. The field will be found emanating from the right box, with field strength proportional to N, again with uncertainty of √N. That is, (σ_z Π_R)_w = N and (σ_z Π_L)_w = 0. Crucially, in principle all of these measurements can again be made simultaneously. In keeping with a general analysis [6], this (technologically impractical) weak value and the previous (optical) weak value are conceptually equivalent.
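The cost of this "rare but unambiguous" version is easy to quantify from the two figures given above (post-selection probability 4^−N and readout uncertainty √N); the numbers printed below are just that arithmetic.

```python
# Arithmetic for the N-electron Cheshire Cat: on average 4**N runs are needed per successful
# post-selection, while the relative uncertainty of the weak, precise readout is sqrt(N)/N.
import math

for N in (5, 10, 20):
    print(N, 4.0 ** -N, math.sqrt(N) / N)   # success probability, relative readout uncertainty
```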
V. CONCLUSIONS
We have shown that Cheshire cats have a place in quantum mechanics -physical properties can be disembodied from the objects they belong to in a pre-and post-selected experiment. Although here we have only presented one example in full detail, where a photon is disembodied from its polarisation, it should be clear that this effect is quite general -we can separate, for example, the spin from the charge of an electron, or internal energy of an atom from the atom itself. Furthermore it is important to realise that it is not just pointers of well-prepared measuring devices that indicate that the properties are disembodied -any external system which interacts weakly with the pre-and post-selected system will react accordingly [10].
This therefore opens many intriguing questions, both conceptual and applied ones. First of all, how will an electron with disembodied charge and mass affect a nearby electron? In an atom with the internal energy disembodied from the mass, what will the resulting gravitational field look like? What sort of thermal equilibrium will be achieved by a system whose two degrees of freedom are separated? Furthermore, when considering more than two degrees of freedom, can we separate them all from each other? Can photons impart angular momentum to one object while their radiation pressure is felt by another object?
On the applied side, we may ask whether Cheshire cats have the potential to be useful in precision measurements, just as weak measurements have now shown themselves to be useful as a powerful amplification technique [3,4,[11][12][13][14]. As an example, suppose that we wish to perform a measurement in which the magnetic moment plays the central role, whilst the charge causes unwanted disturbances. The question that arises is whether it might be possible to remove this disturbance, in a post-selected manner, by producing a Cheshire cat where the charge is confined to a region of the experiment far from the magnetic moment. We believe that such potential applications are an interesting possibility that deserve further investigation. | 2014-01-03T09:29:05.000Z | 2012-02-03T00:00:00.000 | {
"year": 2012,
"sha1": "aa09b35a05c5c580f03187bff80ae765385e4c32",
"oa_license": "CCBY",
"oa_url": "http://iopscience.iop.org/article/10.1088/1367-2630/15/11/113015/pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "aa09b35a05c5c580f03187bff80ae765385e4c32",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
256337339 | pes2o/s2orc | v3-fos-license | Conductometric titration as a technique to determine variation in conductivity in perfluorosulfonic acid materials for fuel cells and electrolyzers
One important requirement for a polymeric material to be used as a membrane in fuel cells or water electrolyzers is high ionic conductivity. In this research work a redeveloped conductometric titration was used to determine conductivity variation, with the objective of improving the precision of the determination and reducing the operating time. Results obtained by changing the experimental conditions of two related techniques, the reaction-rate study and the conductometric titration, are presented. The reaction-rate study gives the chemical kinetics of the neutralization reaction between Nafion® 117 membrane and a solution of sodium or potassium hydroxide, the order or pseudo-order of the reaction and the half-life period. This last parameter is used to carry out the conductometric titration, which permits determination of the total acid capacity of this type of polymeric material. The experimental conditions studied are the type and duration of agitation and the control of the working temperature. Good results were obtained in the techniques where nitrogen bubble stirring was applied throughout the determination; this procedure ensures a liquid medium with nearly isotropic properties, suitable for this analysis. Controlling the temperature with a thermostat isolates the system from temperature variations and permits comparison of the results between determinations. The operating time was reduced by a factor of about 48, if 24 h is taken as the time needed to reach reaction equilibrium.
Introduction
Fuel cells and water electrolyzers are energy converters. Water electrolyzers convert electrical energy into chemical energy, while fuel cells turn chemical into electrical energy. Polymer electrolyte membrane fuel cells gained public attention when an ionic exchange resin was used as electrolyte for a space application by General Electric in 1959 [1]. 1 year later, the same company, trying to overcome the disadvantages of alkaline water electrolyzers developed the concept of a new electrolyzer that used a solid polymer electrolyte instead of the liquid alkaline electrolyte [2].
Proton exchange membrane fuel cells have acquired importance for applications that require rapid start-up and quick response to load changes [3]. Proton exchange membranes are the key component of those types of devices and the most important requirements are: high proton conductivity, low electronic conductivity, good chemical stability and good thermal stability, low permeability to fuel and comburent, low electroosmotic drag coefficient, good mechanical properties and low cost [4].
To analyze the conductivity of polymeric materials that can be used as membrane in this type of devices, conductometric titration was redeveloped.
Conductometric titration is an analytical technique based on the mobility difference, that is, ions of a certain mobility are replaced by other ions that possess a different mobility. This technique presents advantages if it is compared with the acid-base or potentiometric titrations, when particular systems are studied. Those systems are the ones that generate products of considerable solubility or hydrolysis products at the equivalent point. Apart from that, the conductometric titration maintains its accuracy in relatively diluted as well as concentrated solutions. This technique can be applied to study colorless and color solutions. The measurement electrodes are the only feature which does not form part of the solution because they do not affect the system under study (the reaction) [5]. However, in the acid-base titration the indicator can interact with the system or produce contamination. The use of indicators adds another disadvantage since the wide range of pH that they present can influence in the value of the determination error [6].
The disadvantage that the conductometric titration presents is that the system under study does not have large concentrations of strange electrolytes, which can interfere with the reaction because, as a consequence, it would considerably reduce the results precision [5].
Originally, this technique required a complex process, so it was not widely used but, as technology improved, the devices used to measure conductivity were simplified and some researchers decided to apply it. The bibliography presented as follows tries to show different cases where the conductometric titration was applied over diverse types of polymeric materials or their monomers (such is the case in this research work) with the intention of obtaining particular information from them or their reactions. It is very important to observe the steps followed in each of the determinations proposal to analyze the time spent in the application of the technique and if any substance is incorporated which can alter the results of the determination. The examples presented are: Waltz et al. used the conductometric titration to calculate the amount of amine end-group in nylon 66 (polyhexamethylene adipamide). The pre-treatment of the sample under study, consisted of dissolving the polymer using purified phenol and shaking the system. Then 95% ethanol and distilled water were added. After that, conductometric titration was carried out using, 0.1 N hydrochloric acid and slow stirring. As phenol was not a good solvent for the determination of the carboxyl end groups because after equivalent point, it reacts with the base, the benzyl alcohol is used instead. Even though the conductance found was lower than that expected for the solvent previously used (phenol-ethanol-water) and the cut point of the two straight lines was not so sharp, the results obtained by conductometric titration were in agreement with the ones obtained by titration using an indicator [7]. In this research work to analyze the amine and carboxyl end-group is necessary to dissolve the sample of polymer under study, which considerably increases the time needed for the determination. Another important statement presented, is to find the correct solvent or mixture of solvents that do not incorporate an error in the interpretation of the results obtained. Erbil et al. determined the copolymer composition and monomer reactivity ratios by conductometric titration of acrylamide and itaconic acid. There, the pre-treatment consisted of dissolving 0.1 g of solid polymer with 30 mL of 0.1 N sodium chloride using a magnetic stirrer. 0.1 N sodium hydroxide was used as titrant. From established mixtures of homopolymers (of acrylamide and itaconic acid), the equivalent point was obtained and the data were used to construct a calibration curve that allowed estimating the acidic comonomer content and calculating the copolymer composition [8]. In this case, it is also necessary to dissolve the polymer to obtain the parameters above expressed by the inflection points of the titration curves, increasing the time needed to carry out the experiment and added ion species in the system. Bochek et al. suggested the conductometric over the potentiometric titration to calculate the esterification degree of polygalacturonic acid. Usually, the procedure consists of determining the number of free carboxy groups, with phenolphthalein as indicator and with the same solution, obtains the number of esterified carboxy groups by back titration. This technique needs a pretreatment to dissolve the pectin under study. The pectin was dissolved by wetting with ethanol and with the addition of distilled water heated at 40°C. The system was stirred for 2 h. 
They emphasized that the color turn of the indicator occurs in a relatively wide range of pH so this fact can result in a considerable error in the determination. Their results showed that conductometric titration is the technique that offers more similar results to those published in the literature, if compared with the results obtained by potentiometric titration [6]. Two important conclusions can be remarked from this research work. The first one is the benefit of not using an indicator in the conductometric titration technique, so the error of this kind can be disregarded and the second one, which was related to the first one, is the advantages of implementing this technique over the potentiometric because it offers more accurate results. dos Santos et al. presented two different methods to obtain the degree of deacetylation of the linear polyaminosaccharide, chitosan: CHN elemental analysis and conductometric titration. After a purification procedure the sample of chitosan was dissolved using 0.05 M HCl and it was stirred for 18 h at room temperature. Titration resulted in a secure and inexpensive method if compared with the equipment-dependent and more expensive CHN elemental analysis [9]. The time needed for purification is long and once again the use of HCl may incorporate ion species that can alter the reading of the results. Okubo et al. applied conductometric titration to study the relative distribution of carboxyl groups in a polymer emulsion of styrene/butyl acrylate/methacrylic acid in serum, at surface and inside particle. All the samples were pre-treated and conductometric titration was carried out with 0.02 N potassium hydroxide, at room temperature. The pre-treatment of the sample takes around 9 h where it is necessary first, adjusting the pH to 2 by the addition of 0.2 M HCl and then stirring for 2 h. The resulting emulsion was ultra-centrifuged for 2 h with the purpose of separating the serum and polymer particles and then the polymer particles were redispersed in distilled deionized water. This process was repeated three times and the supernatants were collected in each step to measure carboxyl groups in serum [10]. The methodology used in this work although simple, required a long time to obtain good results and the necessity of incorporating HCl to adjust the pH, may alter the results of the determination as it was mentioned in the above examples where it was used.
In the investigation work carried out by Everett et al., it was suggested that when the conductometric titration was applied to the surface characterization of polystyrene latex, the solids content had to be greater than 2% w/w. The reason to use that concentration or greater was that at lower concentrations the time needed to achieve system equilibrium was extended. They established 36 h to finish a complete titration experiment [11]. In the same work, they explained the benefits of adding an electrolyte to the system. If this type of conductometric titration was carried out in the presence of equal parts of solid content and electrolyte (1:1), the time needed to complete the titration was reduced to about 8 h. To justify the variation of time between determinations (without and with the presence of an electrolyte such as potassium bromide), they suggested a slow conformational change during neutralization of the polymer chains carrying the acid groups. They exposed their concern about the results obtained by other researchers, who applied the conductometric titration in very short periods of time, less than 30 min, because they thought that those results did not correspond to true equilibrium [12].
It is important to remark that in five out of six different investigation works mentioned above, where the conductometric titration was carried out, the sample had to be pretreated, that is to say that it was necessary first, to dissolve the material under study to be able to apply the titration. So, as mentioned, this involves more time and the possibility to incorporate contaminants or ion species that can alter the results of the conductometric titration carried out. From the work of Everett et al., it is evident that it is very important to study the kinetics reaction before applying the technique and have the knowledge of the necessary time for the reaction to reach the equilibrium for the correct application or interpretation of the results obtained from the conductometric titration technique.
In general terms and conditions, the conductometric titration technique consists of the addition by burette of small and equal quantities of titrant to the system under study. The final system, solution to be titrated and the titrant, is agitated after every addition and then the conductivity is measured.
The registered conductivity is used to plot a graphical representation as a function of the volume of titrant added. This graphical representation consists of two straight lines that cut in the equivalent point [5,13,14].
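Locating the break in such a plot is usually done by fitting straight lines to the points before and after the break and intersecting them; the sketch below uses synthetic readings (not real titration data) purely to illustrate that post-processing.

```python
# Illustrative post-processing: fit lines to the conductivity readings before and after
# the break and take their intersection as the equivalent (equivalence) point.
import numpy as np

volume = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])            # mL of base added
kappa  = np.array([1.00, 0.99, 1.01, 1.00, 1.02, 1.40, 1.80, 2.21, 2.60])   # arbitrary units

before, after = slice(0, 5), slice(5, None)           # operator-chosen split around the break
m1, b1 = np.polyfit(volume[before], kappa[before], 1)
m2, b2 = np.polyfit(volume[after],  kappa[after],  1)

v_eq = (b2 - b1) / (m1 - m2)                          # x where the two fitted lines cross
print(round(v_eq, 2), "mL of titrant at the equivalence point")   # ~2.0 mL for this data
```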
In optimal conditions the error of the determination is 0.5%, so part of this paper deals with different modifications of the technique to obtain more accurate results [5].
The aim of this research work is to find better conditions for putting into practice the conductometric titration of polymeric materials, in order to determine the total acid capacity and the equivalent weight without driving the reaction completely to its end. The changes proposed concern the way in which the system is agitated (magnetic or bubble stirring), the time during which the agitation is applied (throughout the whole determination or only during a specific period), and whether or not the temperature at which the experiment is carried out is controlled. All of these parameter modifications have the purpose of reducing the time of the determination and improving the accuracy of the results obtained by the technique, and the modifications that move the results in that direction will be selected.
Practical theoretical framework of conductometric titration
To present the system proposed in this investigation work, a sample of Nafion® 117 perfluorosulfonic acid (PFSA) is immersed in 70 mL of distilled water. This represents the system to be titrated. Potassium or sodium hydroxide is used as titrant, so the reaction involved in this titration is the neutralization of a perfluorosulfonic acid by a strong base: the Nafion® 117 membrane is the acid and the hydroxides are the bases. The chemical reaction can be represented as R–SO₃H + MOH → R–SO₃M + H₂O, where R–SO₃H denotes the sulfonic acid groups borne by the copolymeric matrix of the membrane and M is the cation, in this case sodium or potassium [15].
In a typical conductometric titration of a strong acid with a strong base the conductivity first decreases because the hydrogen ion of the acid is exchanged by the cation of the base. The mobility of the hydrogen ions is the highest (349.6 cm²/(Ω·mol) at 25°C), so when the reaction occurs, those ions form part of the molecule of water, which has a low dissociation constant. The conductivity measured is the conductivity of the cation. When the equivalent point is reached the conductivity starts to increase in proportion to the amount of base added. The reason for this result is the mobility of the hydroxide ions (199.1 cm²/(Ω·mol) at 25°C) [5,[13][14][15][16].
In the system under study, where a piece of Nafion® 117 membrane is immersed, the conductivity first stays low because the membrane exchanges protons with the medium, due to the addition of the base, to form water. In that molecule the protons and hydroxide ions are not available as mobile ions, so this is the reason for the conductivity to stay low. When the equivalent point is reached the conductivity starts to increase because the membrane does not have any more protons to exchange and the conductivity is the result of the hydroxide ions added in excess. The equivalent point in these types of materials represents the total acid capacity (TAC). So the conductometric titration allows an important property of these polymeric materials, the total acid capacity, to be determined, and in a shorter period of time.
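Once the equivalence volume is read from the titration curve, the total acid capacity and equivalent weight follow from standard definitions (equivalents of titrant per gram of dry polymer, and its reciprocal); the numbers below are invented for illustration only, chosen so that the result lands near the nominal equivalent weight of Nafion 117 (~1100 g/eq).

```python
# Illustrative TAC / equivalent-weight arithmetic (made-up values, standard definitions).
C_titrant = 0.01      # mol/L NaOH
V_eq_mL = 18.2        # equivalence volume read from the titration curve, mL
m_dry = 0.2           # dry membrane mass, g

equivalents = C_titrant * V_eq_mL / 1000.0             # mol of exchangeable protons
TAC = 1000.0 * equivalents / m_dry                      # meq per gram of dry polymer
EW = m_dry / equivalents                                # grams of dry polymer per equivalent
print(round(TAC, 3), "meq/g;", round(EW), "g/eq")       # ~0.91 meq/g, ~1099 g/eq
```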
In general, to obtain that value, the polymeric material is exposed to an excess of sodium chloride to exchange all its protons and form hydrochloric acid. Then the hydrochloric acid is titrated with sodium hydroxide using methyl orange as indicator [17]. This technique could take around 24 h if the analyst wants to be sure that the membrane has exchanged all its protons, plus the extra time needed to carry out the titration. The conductometric titration can reduce the operation time to 30 min or less, without the need to wait for system equilibrium.
In a previous work, with the purpose of knowing the rate of the neutralization reaction, a sample of Nafion® 117 PFSA was immersed in a solution of 70 mL of distilled water and a specific amount of sodium or potassium hydroxide of known concentration. Once the membrane is immersed in that solution, the conductivity is registered every minute during an established period of time. Those results were used to plot a graphical representation of conductivity as a function of time, and through this analysis it was possible to determine the order or pseudo-order of the reaction: the neutralization reaction is first order (or pseudo-first order). From that previous work it could also be concluded that, for the reaction between a piece of Nafion® 117 PFSA and sodium hydroxide, a period of approximately 7 h is necessary to complete 99% of the reaction. Therefore, considering that the neutralization reaction of these materials is not such a rapid reaction, the previous study of the kinetics and of the conditions that can affect it becomes essential. These facts lead to the modification of the original conductometric titration technique, so that it can be carried out in reasonable periods of time.
From the graphical representation of the first-order reaction the rate constant can be obtained, and from this value the half-life period [19,20]. The half-life period is defined as the time needed for the reaction to consume half of the reactant concentration, and this is the time used as a parameter to carry out the conductometric titrations. This reduces to a considerable extent the time needed to obtain the value of the total acid capacity.
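For a first-order process these two quantities are linked by t½ = ln(2)/k; taking the ~7 h needed for 99% completion quoted above as an example (so that k = ln(100)/7 h, since 1% of the reactant remains), the half-life comes out close to one hour, as the short calculation below shows.

```python
# First-order bookkeeping using the "7 h for 99% completion" figure quoted above.
import math

t_99 = 7.0                          # hours to reach 99% conversion
k = math.log(100.0) / t_99          # pseudo-first-order rate constant, 1/h
t_half = math.log(2.0) / k
print(round(k, 3), "1/h;", round(t_half, 2), "h half-life")   # ~0.658 1/h, ~1.05 h
```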
Reaction rate
The previous study of the rate of the reaction used in the conductometric titration allows the chemical kinetics, the order or pseudo-order of the reaction and the half-life period to be known. This latter parameter is very important in this particular determination because it allows the time needed to perform the conductometric titration to be reduced. As was mentioned before, to obtain the total acid capacity of a polymeric material by the usual determinations (other than conductometric titration) it is necessary to wait around 24 h to ensure system equilibrium. Knowledge of the half-life period allows the conductometric titration to be carried out without reaching system equilibrium, consequently reducing the operating times. As the reaction rate is affected by changes in agitation and temperature, it is convenient to analyze case by case how the rate, and with it the half-life period, is influenced.
Reaction rate with magnetic stirrer
The technique to establish the rate of the neutralization reaction consists of a Pyrex beaker in which 70 mL of distilled water is added to the sample of membrane under study. The Pyrex beaker is dipped into a big, thermally insulated water container which has reached room temperature. This container was installed on top of a magnetic stirrer (Decalab S.R.L.; 2000 rpm; 280°C), and a magnetic stir bar was placed inside the beaker; throughout the determination the system is kept stirred. Besides, inside the Pyrex beaker, the bench conductivity meter is added. That conductivity meter is a Eutech Instrument Pte. Ltd/Oakton Instrument CON 510. The meter was packaged with a two-ring stainless steel Ultem-body conductivity/TDS electrode, whose cell constant is K = 1.0, with a built-in temperature sensor for automatic temperature compensation and an integral electrode holder. When room temperature is reached in the Pyrex beaker, the magnetic stirrer is switched on and the amount of titrant needed to exchange all the counter-ions of the membrane is added. The titrant is supplied through a burette (IVA, certificate number A-03819, serial number 005-02-08, tolerance ± 0.05 mL, uncertainty ± 0.03 mL, K = 2). The conductivity is measured every minute during long periods of time that can range between 1 and 5 h depending on the case. This procedure will be referred to as reaction rate with magnetic stirrer.
Reaction rate with partial magnetic stirrer
In the second technique applied to determine the reaction rate, 70 mL of distilled water is added to the Pyrex beaker together with the amount of titrant, supplied through a burette, needed to exchange all the exchangeable ions.
This system is stirred with a magnetic stirrer for 1 h to homogenize the solution. After that time, the Pyrex beaker is dipped into a thermostat (HAAKE C and F3 Fisons) for 1 h to reach the working temperature of 19.0 ± 0.1 °C. Then, with the Pyrex beaker in place, the electrode of the bench conductivity meter is added, and when the working conditions are reached the sample of membrane is immersed in the solution (distilled water plus titrant) and the conductivity is measured every minute during an established period of time. This technique will be named reaction rate with partial magnetic stirrer.
Reaction rate with bubble stirring
In the third technique, 70 mL of distilled water is added to the Pyrex beaker together with the amount of titrant supplied through a burette. This system is stirred with a magnetic stirrer for 1 h to homogenize the solution. After that time, the Pyrex beaker is dipped into a thermostat for 1 h to reach the working temperature of 19.0 ± 0.1 °C. Then the electrode of the bench conductivity meter is placed inside the Pyrex beaker together with the stirring system, which consists of a 2 mL plastic pipette connected by a hose to a tank of nitrogen. The flow of nitrogen is controlled by a flow meter (Cole-Parmer Mass Control System 0-500 SCCM, Display and LSPM w/DC-62 cable) at 0.229 g/min (standard cubic centimetres per minute of N2 at 25 °C and 14.6696 PSIA). Finally, the sample under study is added into the Pyrex beaker and the conductivity is measured every minute during an established period of time. This determination will be named reaction rate with bubble stirring.
In all the techniques the results obtained in the experiments are used to plot a graphical representation of the conductivity as a function of the time.
When the graphical representation of conductivity (κ) as a function of time is plotted, the experimental data obtained (experimental conductivity, κ_exp) are divided by the constant of the cell (K_cell), using Eq. (1): κ_r = κ_exp / K_cell. The values plotted (rectified conductivity, κ_r) are the results of the application of this equation.
To obtain the value of the constant of the cell, a solution of potassium chloride was prepared. The concentration of that solution was chosen among those that can be found in the bibliography, because the conductivity of those types of solutions has been determined with accuracy at different temperatures. The expression that defines the constant of the cell (K_cell) relates the conductivity of the solution, the cell constant and the resistance R of the measured solution (2). As the resistance is the inverse of the conductivity, replacing and rearranging leads to (3): K_cell = κ_exp / κ_bibl, where κ_bibl is the value of the conductivity found in the references. 0.7462 g of potassium chloride was dissolved in 1000 g of distilled water; 70 mL of that solution was added into a Pyrex beaker, the beaker was placed in the thermostat to reach the working temperature and the conductivity meter was then placed inside the beaker too. The working temperature selected was 18 °C; the experimental value obtained for the conductivity was 0.00135 S/cm and the value found in the bibliography was 0.00122 S/cm. This result gives a cell constant of 1.11 [18]. In a previous work, the reaction rate was determined and it was concluded that the order or pseudo-order of kinetics for the neutralization reaction is 1. The graphical representation of the first-order kinetics is the natural logarithm of κ as a function of time, where κ is obtained by (4): κ = κ_r − κ_∞, with κ_∞ being the last value of conductivity determined in the experiment.
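As an illustration only, the data treatment just described (Eqs. 1 and 4 as reconstructed above, followed by a straight-line fit and the first-order half-life relation) could be scripted as follows; the function name, the use of NumPy's least-squares polynomial fit and the choice of the last reading as κ∞ are assumptions of this sketch, not part of the original procedure.

```python
import numpy as np

def half_life_from_conductivity(t_min, kappa_exp, k_cell=1.11):
    """Estimate the first-order rate constant and half-life period from a
    conductivity-versus-time run (sketch of Eqs. 1 and 4)."""
    kappa_r = np.asarray(kappa_exp, dtype=float) / k_cell   # Eq. (1): rectified conductivity
    t = np.asarray(t_min, dtype=float)
    kappa_inf = kappa_r[-1]                                  # last reading taken as kappa_infinity
    y = np.log(kappa_r[:-1] - kappa_inf)                     # Eq. (4): ln(kappa_r - kappa_inf)
    slope, _intercept = np.polyfit(t[:-1], y, 1)             # straight line of the first-order plot
    k = -slope                                                # rate constant in 1/min
    t_half = np.log(2.0) / k                                  # half-life period in min
    return k, t_half
```

Applied to a run such as the one shown later in Fig. 6, this kind of treatment yields the half-life estimate (about 30 min) used to set the waiting time between titrant additions.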
Conductometric titration with magnetic stirrer
In the same way that the reaction rate techniques change, the conductometric titration changes too.
In the first conductometric titration proposed, 70 mL of distilled water is added into a Pyrex beaker with the sample of membrane. The Pyrex beaker is placed inside a big water container located on the magnetic stirrer. The stirrer is placed in the Pyrex beaker together with the electrode of the bench conductivity meter, and when room temperature is reached the magnetic stirrer is switched on. After an established period of time a specific amount of titrant is added through a burette into the Pyrex beaker, and the conductivity of the solution is measured and registered [15]. It will be named conductometric titration with magnetic stirrer.
In the conductometric analysis, a graphical representation of conductivity as a function of the volume of titrant added is plotted. The equation applied to the experimental conductivity (κ_exp) obtained in the assay is (5): κ = (κ_exp − κ_exp0) · F, where κ_exp0 is the conductivity of the system (70 mL of distilled water and the sample of membrane) before the addition of titrant and F is the dilution correction factor. This factor is calculated by Eq. (6): F = (V + v)/V, where V is the initial volume of the system and v is the volume of titrant added.
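A minimal sketch of how the baseline subtraction and dilution correction of Eqs. (5) and (6), as reconstructed above, could be applied to each reading; the function and argument names are illustrative assumptions rather than part of the original work.

```python
def corrected_conductivity(kappa_exp, kappa_exp0, v_titrant_ml, v_initial_ml=70.0):
    """Baseline-subtracted, dilution-corrected conductivity for one point of the
    titration curve (sketch of Eqs. 5 and 6)."""
    f = (v_initial_ml + v_titrant_ml) / v_initial_ml   # Eq. (6): dilution correction factor
    return (kappa_exp - kappa_exp0) * f                # Eq. (5): corrected conductivity
```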
Conductometric titration with partial magnetic stirrer
In the second experiment the preparation of the Pyrex beaker is the same, but it is immersed in the thermostat together with the electrode of the bench conductivity meter. One hour later, when the working temperature of 19.0 ± 0.1 °C is reached, a specific amount of titrant is added through a burette, but the established time between additions is divided in two: the first portion is used to stir the system with the magnetic stirrer (outside the thermostat) and the second portion to reach the working temperature inside the thermostat [21]. This technique is named conductometric titration with partial magnetic stirrer. The data processing in this experimental part is the same as the one presented above (conductometric titration with magnetic stirrer). To plot the graphical representation of conductivity as a function of time, the conductivity used is the one that results from applying Eq. (1).
When the first-order kinetics is plotted, the value of conductivity is calculated with Eq. (4), but in this case κ_∞ is the value of conductivity after 24 h.
Conductometric titration with bubble stirring
In the third conductometric titration, 70 mL of distilled water is added into the Pyrex beaker with the membrane. The beaker is placed inside the thermostat and then the electrode of the bench conductivity meter and the stirring system are placed inside the beaker. The stirring system consists of a 2 mL plastic pipette connected to a tank of nitrogen by a hose. The flow of nitrogen is controlled by a flow meter at 0.229 g/min. When all the elements are in place and the working temperature is reached, a specific amount of titrant is added through a burette and, after certain periods of time, the conductivity is measured and registered. This technique will be named conductometric titration with bubble stirring.
For the experiments carried out in this section, the data processing is the same as that described for the conductometric titration with partial magnetic stirrer (with temperature control).
These measurements allow plotting a graphical representation of the conductivity as a function of the volume of titrant added.
Reaction rates
In the first technique, reaction rate with magnetic stirrer, where the temperature is controlled by a big water container, the limitation is that the working temperature is fixed by room temperature. This does not allow comparing the results obtained in a determination, for example by duplication, because the temperature can change during the day and is not the same on the day when the experiment is repeated. As is known, the mobility of most ions increases as temperature rises, by approximately 2% for each degree [5].
In the second technique, reaction rate with partial magnetic stirrer, where a thermostat is used to establish the working temperature but agitation is applied only at the beginning of the determination, two important limitations were found. First, the initial values had to be dismissed because the membrane takes some time to respond to the solution in which it is immersed, so fluctuations appear in the early determinations. Second, the reaction rate is lower.
In the third technique, reaction rate with bubble stirring, where the thermostat is used and bubble stirring is incorporated, the limitations of the previous techniques are overcome. The thermostat allows working at the temperature chosen for the experiments, avoiding fluctuations between determinations, and the stirring system favors the diffusion process between the membrane counter ions and the titrant ions. Figure 1 shows the results obtained in the study of the reaction rate of two solutions of KOH where the time of agitation is different. Figure 1a is the result of stirring the solution throughout the determination with a magnetic stirrer, and Fig. 1b is the result of stirring the solution with a magnetic stirrer for only 1 h before the determination begins.
The magnetic stirrer seems to allow the neutralization reaction to proceed from the beginning of the determination to the end, without abrupt fluctuations in the values obtained along the experiment. Agitation only in the preparation step of the system (1 h) caused fluctuations in the determinations throughout the experiment and a delay in the development of the reaction until the working rate for those experimental conditions is reached. Agitation brings the reactants into closer contact and this benefits the reaction progress. Figure 2 presents the results obtained for the behavior of two systems to which a solution of NaOH is added but which were stirred in two different ways. Figure 2a shows the results obtained using a magnetic stirrer and Fig. 2b using a bubble system.
The magnetic agitation in this case seems to be insufficient to provide adequate boundary conditions to study this type of system when compared with the same system stirred by nitrogen bubbles.
Conductometric titrations
The conductometric titration allows knowing the total acid capacity (TAC) of the membrane under study as well as its equivalent weight. The total acid capacity is obtained by finding the cross-point of the two straight lines of the graphical representation of conductivity as a function of the volume of titrant added after every specific period of time.
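The cross-point itself is not given an explicit formula in the text; a minimal sketch of one way to locate it and convert it into a total acid capacity is shown below. The split between the two branches of the curve, the titrant concentration and the normalization by the dry mass of the membrane sample are assumptions of this sketch.

```python
import numpy as np

def total_acid_capacity(v_ml, kappa, split_idx, c_titrant_eq_per_l, m_dry_g):
    """Fit the two branches of a conductometric titration curve, locate their
    cross-point (equivalence volume) and return the TAC in meq/g."""
    v = np.asarray(v_ml, dtype=float)
    k = np.asarray(kappa, dtype=float)
    m1, b1 = np.polyfit(v[:split_idx], k[:split_idx], 1)   # first straight line (neutralization)
    m2, b2 = np.polyfit(v[split_idx:], k[split_idx:], 1)   # second straight line (excess titrant)
    v_eq = (b2 - b1) / (m1 - m2)                           # cross-point: equivalence volume in mL
    return c_titrant_eq_per_l * v_eq / m_dry_g             # (eq/L) * mL / g = meq/g
```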
DuPont, in the specification sheet of the properties of the Nafion® 117 PFSA membrane, establishes the typical value of the total acid capacity between 0.95 and 1.01 meq/g (which can be expressed as 0.98 ± 0.03 meq/g). The technical sheet states that the value of this property is obtained by base titration to measure the equivalent sulfonic acid in the polymer [22].
The results of the conductometric titrations of samples of Nafion® 117 PFSA membrane are presented in Figs. 3, 4, 5 and 7. The working conditions of the samples of Fig. 3 are presented in Table 1.
Due to the prolonged exposure of the membrane to the alkaline solution (24 h), it can be affirmed that the neutralization reaction is almost totally complete in both cases, making the process nearly independent of the agitation type. This fact is evidenced by the low conductivity values obtained in the first straight line of the two graphical representations (Fig. 3a, b). The values of the calculated TAC are 0.924 and 0.917 meq/g, respectively.
The values obtained in both determinations are very close to each other, and the absolute errors they present with respect to the value specified by DuPont are very similar too. Figure 4 shows the behavior of the Nafion® 117 membrane under conductometric titration using magnetic and bubble stirring. The linear equations of the two straight lines are presented too. The total acid capacity for the first system (with magnetic agitation) is 1.14 meq/g, and for the system with nitrogen bubble agitation it is 1.07 meq/g. The system under bubble agitation yields a value closer to that reported by DuPont, and the lower absolute error confirms this result. Working conditions are presented in Table 2. It can also be observed that the reaction is not complete, owing to the reaction times established in the determinations; this is evident in the first straight line, which is not totally horizontal with respect to the coordinate axis but presents an upward slope. This is the result of the mobilities of the hydroxide ions and cations that have not yet reacted with the membrane. Nevertheless, the TAC can still be calculated accurately. Figure 5 presents the results of two samples, one of which was stirred throughout the determination (Fig. 5a) while the other was stirred for half of the time between additions, in this particular case 14 min of the 29-min wait; during the remaining 15 min the system was immersed again in the thermostat to reach the working temperature. Table 3 establishes the working conditions of those systems and the absolute errors.
These systems behaved like the ones presented in the previous figure (Fig. 4): the straight line that corresponds to the results of the neutralization reaction presents an upward slope. The values obtained for the TAC are 1.108 and 0.96 meq/g, respectively. The use of the thermostat improves the value of the determination, although only slightly, bringing it closer to the value presented in the literature. The absolute errors show that, although the values obtained are small, the absence of a thermostat can double the error.
Finally, Fig. 6 presents the results of the kinetics study of the neutralization reaction of a sample of Nafion® membrane with a solution of potassium hydroxide. Figure 6a is the graphical representation of the data recorded, and Fig. 6b shows the mathematical treatment of these experimental values, taking into account the first-order kinetics. From this graph the slope is obtained, which allows calculating the half-life period of 30 min. Figure 7 shows the results of the first straight line (which represents the conductivity of the system after the neutralization reaction) of the conductometric titrations of different samples of Nafion® 117 membrane, where the only variable changed is the time between additions of titrant. It should also be noted that the determinations were carried out using a thermostat and were stirred throughout the experiment by nitrogen bubbles. Working conditions and absolute errors are presented in Table 4.
The times between additions were: the half-life period (30 min) calculated in the previous experiment, a reduced half-life period (23 min) and 24 h between additions. The results obtained for the TAC are 1.07, 0.967 and 0.9173 meq/g, respectively. All values are close to those proposed in the literature. This allows claiming that the technique can be used for the determination of this property while reducing waiting times in these systems.
The fluctuation of the absolute errors shows that an in-depth study of the bubble system is needed to guarantee the isotropic character of the liquid medium. Two relevant aspects are the position of the bubbler and the flow of nitrogen, with the objective of reaching that medium characteristic and improving the accuracy of the results.
The application of this technique to the determination of the total acid capacity presents two improvements with respect to the bibliography listed in the introduction. The first one is that the sample under study does not need to be dissolved. This reduces the possibility of contaminating the system through the incorporation of ionic species that can alter the readings, for example by interacting with the reaction that is taking place. The second one is that it reduces the time needed to carry out the complete determination because, as was shown, the pre-treatment takes at least 2 h, depending on the case. The technique therefore offers direct determination of the property without pre-treatment of the sample and without the previous analysis of a suitable solvent for the material.
The titration analysis used to obtain the total acid capacity of a weak-acid cation exchanger, for example, requires two pre-treatments: one to convert the material to the standard acid form and another to exchange all the protons into the medium for a back titration with thymolphthalein [17]. This determination can take a long period of time if the analyst wants to ensure the equilibrium of the system in the two pre-treatments. Once again, the technique proposed in this work has two improvements: the sample does not need to be pre-treated and it is not necessary to use an indicator. This has the extra benefits of not incorporating a contaminant into the system and of avoiding the determination error caused by the wide pH transition range that indicators present.
On the other hand, when compared with the original technique, it offers another application with great accuracy and simplicity of operation. It is recommended to study the chemical kinetics in order to reduce the operation times.
Conclusions
In this research work, both experiments developed, reaction kinetics and conductometric titration, showed that the working temperature must be properly maintained with a thermostat and kept stable in order to achieve determinations that can be compared with each other and to avoid fluctuations of the conductivity. The nitrogen bubble system presents good results, not only because it can be operated within the thermostat, but also because it offers the process a liquid medium with features showing a tendency to isotropy. More work has to be done in this respect to guarantee isotropy, studying the flow of gas delivered into the system and the best position of the bubbler.
Suitable agitation throughout the experiment provides the liquid medium with isotropic characteristics. This is a key condition for studying these systems, since it ensures the homogeneity of the medium in which the membrane under study is immersed. If the medium is not isotropic, the values obtained are unreliable for the determination of the total acid capacity or the equivalent weight.
The best operational conditions are working at a constant temperature, maintained by a thermostat, with suitable agitation of the system throughout the determination.
The study of the kinetics allows calculating the half-life period, which drastically reduces the long waiting times needed to determine the value of the TAC with other techniques. The time required to analyze a reaction with a half-life period of 30 min is about 6 h for the whole experiment, in contrast to the 24 h that acid-base titration needs merely to reach reaction equilibrium, to which the time needed to perform the titration itself must still be added.
The sample does not necessarily have to be pre-treated, unlike the cases of the other conductometric titrations mentioned and of acid-base titration, and as a direct consequence the operation time is reduced. The proposed technique thus offers advantages when compared with techniques such as acid-base titration. As the determination does not need the presence of an indicator, it avoids the possible determination error associated with the wide pH transition range that an indicator presents.
The technique designed does not need the sample of polymeric material under study to be dissolved, so it does not incorporate contaminants that can alter the results. The value of the total acid capacity can be obtained avoiding the pre-treatment of the material and, as a consequence, the contamination of the sample with other types of electrolytes that could affect the results of the conductometric titration technique. | 2023-01-29T15:15:16.881Z | 2017-02-17T00:00:00.000 | {
"year": 2017,
"sha1": "cfa6ba2fe92f43f258a9dc6951c888630bbed0a7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40095-017-0230-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "cfa6ba2fe92f43f258a9dc6951c888630bbed0a7",
"s2fieldsofstudy": [
"Engineering",
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": []
} |
159028404 | pes2o/s2orc | v3-fos-license | Bottom-Up Energy Transition Narratives: Linking the Global with the Local? A Comparison of Three German Renewable Co-Ops
Bottom-up transition narratives help to enable the implementation of energy transitions. Yet, scholarship shows that little light has been shed on how bottom-up transition narratives change during the course of transition. By proposing a framework for envisioning bottom-up transition narratives, we analyze narratives of three German bottom-up renewable energy initiatives to address this gap. Relying on semi-structured interviews with innovators and adopters, we show that, during the establishment phase, the analyzed narratives take non-place-bound factors like climate change as a point of contention. At the same time, narratives underscore place-bound factors, for instance civil society's knowledge and participation, as means for an alternative, non-rent-seeking energy system. During the adoption phase, the analyzed narratives travel easily. This represents a paradox because bottom-up energy transition narratives move beyond their local, place-bound origin in order to be reproduced in different spatial settings. By so doing, bottom-up energy transition narratives diverge from their original message. By falling short on the promotion of citizens' participation, they begin to promote sociotechnical systems that differ little from those of competing, rent-seeking energy industries during the innovation adoption pathway. Our comparative approach outlines how bottom-up energy transition narratives adapt to this trade-off during innovation adoption events. We discuss what this means for bottom-up energy transitions and conclude that bottom-up energy transition narratives are faced with a fixity-travel dilemma during the adoption phase.
Introduction
Narrative approaches have loomed large in environmental research [1]. Narratives, sequences of connected imagined or real events, unite qualities of language- and communication-oriented approaches; they draw on notions of issue frames and discourse and influence them. They reproduce stories of the past in order to take positions on issues of the future [2][3][4]. In the social scientific literature, narratives have therefore been described as a means of deliberately pushing for change [5]. Sustainability transition scholarship's interest in narratives has grown with reference to domestic transition pathways [6], sustainability education [7], sustainable architecture, food systems, or mobility [7,8] and domestic, publicly declared energy transitions [9,10]. Spatially oriented transition literature has enriched the understanding of the link between local spaces, global problems, social interactions and innovation narratives [11][12][13]. Actors involved in innovative niches produce transition narratives about sustainable futures, which adopters take up to reconfigure sociotechnical regimes and by so doing push sustainability transitions [14][15][16]. Thereby, sociotechnical transition narratives "can capture complex interactions between agency and changing contexts, time, event sequences, making moves in games, and changing identities" [13] (p. 34).
Whereas the "grand narrative of transition" [12] (p.22) pursues a broader systemic perspective and takes narratives conceptually as a given, literature on bottom-up transition narratives differs by asking how such narratives locally emerge with different "affective attachments" [11] (p. 1620).Bottom-up transition narratives are thus normative and therefore political because they propose what the future ought to be as understood in a given place and time and as conditioned by local identity.According to Feola and Nunes [12] (p.3), they link "spaces, places and scales of transition approaches" and yet help to materialize innovations, which transitions can build on.Scholarship on bottom-up energy transitions has described how local issues provoke innovators to produce such narratives and how "activist-adopters tend to be motivated by local issues" [17] (p.20).Consequently, and according to Brown et al. [11] (p. 1620), transition "does not work without (local) places because those places offer the milieu-and the affective attachments-through which generic senses of responsibility, resilience, and relatedness may be most easily imagined and held together."Thereby, bottom-up transition narratives give adopters a communicative anchor point to easily assess the meaning of a specific bottom-up transition.As highlighted by a variety of scholars, this points to the issue of agency to which transition literature has largely responded with civil society's organizations [11,12,18].In their aim to foster local sustainability, civil society's bottom-up organizations are self-servicing [19] in responsible, value creating ways, as is the case, for instance, for local energy services [12].
During the innovation adoption phase, materializing bottom-up innovations become a successful past on their own and consequently force innovators to step out into wider society [12,16].This may present a challenge [20].Innovators might prefer to remain "disconnected" because of downsides, resulting responsibility challenges or because of "a survival strategy due to their continuous dependence on crowd sourced [ . . .] money to sustain their operation that occupies most of their time when not 'practicing sustainability'" [19] (p.45).There might be, however, a point of no return where such retention becomes impossible and innovations gain momentum in different local settings.Once travelling to different places, the strong link between place and innovation may represent a challenge for bottom-up transition narratives.Contrasting with different place-bound, local issues, expectations [12] and different "affective attachments" [11] (p. 1620), bottom-up transition narratives might be perceived differently.At the same time, sociotechnical transition pathways might change and contrast the original (non-place-bound) problem orientation of bottom-up transition narratives [12].Whereas the change of innovator's roles during transitions is well reported [21], we think, and here we agree with Feola and Butt [17], a more concentrated focus on place-bound and non-place-bound narrative factors and their relation to each other could help to enhance the understanding of changing bottom-up transition narratives.This regards the changing reference to both global (non-place-bound) [13,17] and local (place-bound) factors [11,12], mutually shaping bottom-up transition narratives.
To contribute to the above-mentioned gap, this article looks at the example of 'bottom-up energy transition narratives'. Two questions lead our analysis. What place-bound and non-place-bound factors make up bottom-up energy transition narratives, and how do they relate to each other during the establishment phase of bottom-up energy transitions? How do place-bound and non-place-bound narrative factors of bottom-up energy transition narratives change in their relation to each other during the adoption phase of bottom-up energy transitions? Whereas the first question addresses actors' perception of global pressing problems and their relation to envisioned local sociotechnical futures to achieve bottom-up energy transitions, the second question addresses the reproduction of bottom-up energy transition narratives in spatial sociotechnical contexts that differ from the originating spatial context. To help situate them better, we look at the analyzed transition narratives from a multilevel perspective and relate place-bound narrative factors to the sociotechnical niche level ("Technological niches form the micro-level where radical novelties emerge. These novelties are initially unstable sociotechnical configurations with low performance. Hence, niches act as 'incubation rooms' protecting novelties against mainstream market selection." [15] (p. 400)). Non-place-bound narrative factors are related to the sociotechnical regime level ("The sociotechnical regime concept accommodates this broader community of social groups and their alignment of activities. Sociotechnical regimes stabilize existing trajectories in many ways: cognitive routines that blind engineers to developments outside their focus." [15] (p. 400)) and the sociotechnical landscape level ("The sociotechnical landscape forms an exogenous environment beyond the direct influence of niche and regime actors (macro-economics, deep cultural patterns, macro-political developments). Changes at the landscape level usually take place slowly (decades)." [15] (p. 400)) [13][14][15].
To contribute to the understanding of bottom-up transitions and their relation to narratives, the article proposes a case study design that compares the narratives of three different German renewable bottom-up energy initiatives that emerged during the last two decades and are still undergoing processes of growth and change. We refer to the three renewable energy co-ops Elektrizitätswerke Schönau eG (EWS) (online: ews-schoenau.de), Solarcomplex AG (online: solarcomplex.de) and Berchumer Initiative für Solare Energien e.V. (BINSE) (online: binse.org). By analyzing multiple sources of data, a case study approach enables us to adopt comparative perspectives on what we call bottom-up energy transition narratives. What the three initiatives have in common is that each consisted of a civil society group in intense interaction with its social life worlds. This core group discusses the initiative's ideas and standpoints as well as strategic decisions, which were taken in close circles at the beginning.
The remainder of this article is organized as follows.Based on the understanding that narratives evolve in steady communication between innovators and adopters [22], the next section lays out the framework we propose to use to analyze the narratives in question.Here we assume that, in the process, narratives are contextualized in spatial socio-cultural contexts that allow them to unleash their effects and favor innovation.The section after the framework provides insights on the sources and methods used for analysis.Drawing on different sources, Section 4 displays the narratives in their spatial relation to sociotechnical innovations.Section 5 discusses the narratives and Section 6 concludes.
Innovators
As a starting point, energy transition narratives originate from pressure groups (innovators) and are communicated with adopters [22,23].Narratives can be understood as a product of direct or indirect communication, like third-party communication or even reproductions in interviews, or websites and brochures, protest banners, symbols, or brands [5,22,24,25].Bottom-up innovators, or civil society pressure-groups as looked at in this article, might develop counter-narratives and challenge the beliefs and actions of those who contest the idea of local energy transitions [26][27][28].This means that, on the one hand, energy transition narratives transport stories about (key) actors supporting the idea of energy transitions [5], as for instance about heroes [25].However, on the other hand, innovators also develop counter-narratives, directed against the beliefs and actions of actors who contest the idea of energy transitions [26][27][28][29].
Adopters
The adopters looked at in this article form part of the civil society.Adoption means that the initiatives which embed actor groups-the early and late majority to use the terminology of Rogers [22]-begin to adopt the newly evolving sociotechnical innovations.This leads to a spreading of newly evolving sociotechnical renewable energy landscapes on the local (BINSE), regional (Solarcomplex), or even domestic levels (EWS).It appears that successful sociotechnical innovations comprising both the dimensions of materiality and social action [30] as basic to sustainability transitions are probably the strongest narratives themselves, given that they are brought into existence due to efforts by pioneers and adopters.This means that the energy transition narratives looked at in this article always tell a story about a community living in a city or a district.Their representative function paves the way to adoption in cognitive terms [5].
Non Place-Bound Factors
Global factors are out of reach for initiatives and encompass factors that determine the sociotechnical system (regime) or the global (landscape) level [13][14][15][16].Energy transition narratives represent actors' reflection of how global issues affect places like cities or villages [25,31,32].In this respect, energy transition narratives can be understood in two ways.First, they represent "planning techniques" for innovators, visualizing "through attention to and projections of the condition of future social spaces should current trends continue."[11] (p. 1608).
Second, global pressures symbolize compulsory, easy-to-assess metaphors [33,34] for adopters (civil society), describing global problems from a local perspective.Both understandings encompass global threats like climate or environmental changes or catastrophes, telling that the "(risk laden) future is pressing upon the present" [11] (p. 1619).
Second, bottom-up energy transition narratives ontologically refer to the successfully established sociotechnical innovations [24,31,36,38].Those are perceived by adopter groups in different spatial contexts who begin to highlight the meaning of an innovation and its successful establishment as a means for change [28,[39][40][41].Therefore, stories about successfully implemented local energy transitions create narratives about possible local energy futures and by so doing promote the actions and practices [42] of certain individuals [26][27][28]32].
Interdiscursivity
In later stages of development, bottom-up energy transition narratives undergo changes in regard to the relation between place-bound and non-place-bound factors [43].Hence, bottom-up energy transition narratives begin to diverge from their original place-bound context [44].Adoption pathways change in unforeseen ways and groups in different spatial contexts begin to adopt innovations and highlight different meanings of their innovation and its successful establishment [22].Energy transition narratives are therefore interdiscursive [40] because they interconnect with new place-bound and non-place-bound discourses and other narratives [5].Notwithstanding, this can cause controversy.
Sources
To analyze energy transition narratives, we mainly rely on 45 semi-structured interviews (for the complete interview list, please see Appendix A) with innovators, adopters (e.g., early supporters, clients), observers and experts in the SPREAD project [38] (Scenarios of Perception and Reaction to Adaptation, SPREAD. The project addressed two overarching questions, the first being under what conditions small sustainability innovation projects (like renewable energy co-ops aiming at sustainable energy production, distribution, and consumption) gain traction and adopt their ideas to foster energy transitions and, if so, how such diffusion processes can be speeded up. Both authors conducted research in this project. For more information on the project, please visit the following website: www.kulturwissenschaften.de/en/home/project-70.html). Interviews were gathered between 2011 and 2014 in three steps. First, from 2011 to 2012, initiators were interviewed face-to-face and asked how activities started. Second, in 2012, experts who had consulted the initiatives in the past were interviewed face-to-face, reflecting on developments. Fellow campaigners of the initiator circles as well as clients and investors of the initiatives were, except for four semi-structured face-to-face interviews, interviewed by telephone between 2012 and 2014. This enabled us to better understand the reproduction of narratives in different spatial contexts. The interviews are informative on how narratives are constructed with regard to global and place-bound factors which shape bottom-up energy transitions [38], as analyzed here in this article. Additionally, we draw on secondary sources, such as initiative websites, newspapers and academic publications.
Our interview interpretation is based on a so-called secondary analysis.Glaser [45] (p.11) describes secondary analyses as studies "of specific problems through analysis of existing data which were originally collected for other purposes."This can enable rich process narratives to emerge, since the authors know the social-cultural context of the gathered data [46,47].Furthermore, the analysis utilizes two basic comparative case study techniques.One is a within-case comparison, which "analyse[s] data to offer insight into the characteristics and causal processes of particular cases."[48] This includes a comparison of narratives with interviews conducted with adopters (late clients), living distanced to the original place of innovation, to carve out changes of narratives travelling through space and during the course of time.The other technique is the cross-case comparison of the three different cases to draw on general characteristics of the narratives analyzed here [49].
Procedure
Our analysis of energy transition narratives builds on two main steps.First, as innovation narratives [5], we propose that bottom-up energy transformation narratives spring from concrete and detectable events that take place in specific locations at specific times in the past.In this article, this refers to developing sociotechnical innovations in specific places.They emerge by being contrasted with possible futures that become manifest in the meaning of innovations and entail transitions, but also contrast to the status quo of the current energy regime.Therefore, section four starts by reflecting on the social-cultural context of narratives during the innovation establishment phase; this concerns actors and their problem orientation.Even though we highlight the interactionist perspective on the origins of narratives (namely that narratives are never a product of isolated social action), it seems that specific actors are required to articulate and communicate them in the first place.Based on the analysis of innovator interviews, local futures narratives are then displayed and then compared to the developing innovation establishment phase.
Narratives link places like cities, regions or districts; they also connect individuals and groups linked to those spaces, such as communities or families [43].Literature has highlighted the ability of narratives to transmit memories of events by referring to specific places or infrastructures, for instance, of cities [44].In this article, this concerns the specific sites of renewable energy producing techniques, such as biomass (e.g., wood, biogas) heating systems in houses, or photovoltaics on roof tops.It also includes wind parks or electricity grids.On account of this, second, we focus at the changes which energy transition narratives seem to undergo when reproduced during the adoption phase in spatial sociotechnical contexts which differ from the originating spatial context (Section 4.2).We thus look at protagonists, but also account for adopters' interpretations and perceptions of narratives that could influence the story told [41].We further compare the narratives to the innovation adoption path to highlight potential inconsistencies of narratives and critically reflect on them instead of taking them as consistent and coherent documentations of the past [13,42].For this reason, the concepts underlying this analysis is (a) a within-case comparison and (b) a cross-case comparison of three energy transition narratives during the innovation establishment phase and the innovation adoption phase and are displayed in Table 1.This comparison is also the basis for the discussion in Section 5 and is concluded in Section 6.The next Section 4 turns to the compared case studies.The first Section 4.1 deals with the origin of the narratives during the establishment phase, while Section 4.2, examines changes in narratives during what the article calls the adoption phase of the three researched renewable energy initiatives.
• EWS. In 1988 the association Parents for a Nuclear-Free Future (EfaZ; German: Eltern für atomfreie Zukunft) [50] was founded in Schönau, a city located in the South German Black Forest region in the state of Baden-Württemberg. In 1990, after a locally initiated energy saving campaign, EfaZ began articulating ambitions in the local council to repurchase the local electricity grid from the local electricity utility, since the utility's contract with Schönau was about to expire and since this would allow them to later design tariffs (interviews 16, 38). A local debate evolved over the question of whether a citizens' initiative or a professionalized firm should operate the Schönau grid (interview 17). After campaigning and two local referendums (1991 and 1996) [38], the people of Schönau voted for the experiment with the new energy initiative. The newly founded company, Elektrizitätswerke Schönau Ltd., began organizing the takeover of the grid in 1997 after a nationwide crowdfunding campaign. This ultimately brought EWS to a larger audience in Germany, spreading the success story of Schönau [38].
• Solarcomplex. Located within the Konstanz district, close to Lake Constance in the state of Baden-Württemberg, a group of sustainability-oriented people met every week in the Singen Workshops (German: Singener Werkstätten) to debate issues of sustainability in light of the Agenda 21 process that was underway at that time [32] (Agenda 21 is an action program designed by the United Nations to foster sustainable development in the 21st century. The program was agreed upon in Rio de Janeiro in 1992 and also aims at community action). Soon a firm was founded which operated less confrontationally than EWS and started as a crowd-funding initiative that promoted solar roof panels. From the beginning, Solarcomplex launched public campaigns using the narrative of citizens' participation and professionally printed brochures to inform citizens of the Western Lake Constance region about its idea to turn the region into a 100-percent renewable energy region. Further, Solarcomplex incorporated clear benchmarking goals, and internal hands-on learning processes helped to adapt the adoption pathways of renewable energy technologies as needed. This included photovoltaic systems, hydro power and bioenergy generation for heating (wood and biogas) [51].
• BINSE. Initially, the motivation to set up BINSE in Berchum was its founder's enthusiasm for photovoltaic technologies. Berchum is a district of Hagen, a city located in the state of North Rhine-Westphalia. In 2002, the installation of a community photovoltaic unit on the rooftop of the youth education center of the local Protestant church was successfully realized [38]. This marked the start of a close cooperation with local churches that would soon become a successful operational model. The project's success was not only due to the cooperation with the church; it was also due to the cooperation of citizens who gave loans for investments (interview 3).
Non-Place-Bound Narrative Factors
EWS emerged as a result of the nuclear accident in Chernobyl in 1986 [52]. Influenced by local anti-nuclear protest groups, EWS innovators articulated their strong opposition to nuclear energy production and consumption as a result of a materializing nuclear threat. Activists and associations stressed the story of parents saving the future of their children. This motive was also behind the later founding of the firm EWS. As underscored in interviews 1, 9, 10 and 27, the parents were influenced in their activism by local church groups and by contact with local forest rangers, who were directly confronted with the effects of Chernobyl on the natural environment [50].
Inspired by the artistic work of one co-founder of Solarcomplex, the main problem orientation was climate change (interview 4). Group members already knew each other from school, from the local Boy Scouts association, and from business (interviews 2, 4, 6, 7). From 1997, after failing to set up a barter exchange network in the city of Singen, the group turned towards local economic solutions and blamed the voluntary character of the barter exchange ring for its unproductiveness (interviews 6, 36). This can be regarded as a change towards regime integration.
Initially looking for a photovoltaic solution for lighting in the basement of his own house, an interviewee (interviews 3, 19, 33) from BINSE reported that he soon learned about renewable energy producing technologies with a colleague from the University of Hagen who was considered a technology pioneer and experimented with photovoltaics and e-mobility.Several of his projects were presented in Berchum (interviews 18,34,35).Highlighting the principle of the integrity of creation (sustainability) as emphasized by the church, a group started their first community campaign prior to the foundation of BINSE (interviews 3, 18, 19).
Place-Bound Narrative Factors
Local church groups and contact with local forest rangers, who were directly confronted with the effects of Chernobyl on the natural environment, fundamentally changed the discussion of nuclear energy in Schönau and provoked strong sentiments against it.When EWS tried but failed to convince the local electricity distributer and domestic policy makers to stop distributing nuclear energy in Germany, the group decided to focus more narrowly on local activity as for instance energy saving [53].According to EWS co-founders (interviews 1, 42, 43), there was a decisive moment after one year in which attention shifted from opposing energy policy towards the option of locally supporting an energy transition: the idea of at least doing something against nuclear electricity industries by promoting more efficient citizen-funded ways of energy production, as for instance small block heat and power plants [38] (interviews 1,14,15).
The desire to act instead of just talking also gained traction in the group in Singen [54,55].Founded in 2000, the firm operated in a rather business-like manner.Solarcomplex's vision was to transform the regional electricity and heat production regime into a 100-percent renewable energy region by 2030 [56].Since there are no fossil fuel deposits or oil, gas, or coal and no nuclear or coal power plants in the region, it imports energy and sends capital to other regions.Solarcomplex therefore linked the regional welfare and sustainability debates.The narrative promoted at the beginning picked up on the climate-change discourse and was linked to the possibility to use citizens' capital to bring about change (interviews 2, 36).
BINSE sought to install a citizen-funded photovoltaic facility on top of the roof of the Protestant Parish Hall in Berchum.Soon the local protestant church started a sustainability education program (interview 25).The idea of the integrity of creation allowed for a framing of innovative renewable energy technologies as sustainable and desirable in rather conservative circles [57].Unlike those involved in EWS or Solarcomplex, the BINSE-actors were never part of the anti-nuclear movement or the environmental movement that had evolved in Germany since the 1970s.By envisioning a local energy transition, these actors drew rather on the idea of local resistance and strove to distinguish themselves from the rest of the city of Hagen, which Berchum was involuntarily merged with in 2002 (interviews 3, 18, 19).
Intermediate Results
Perceiving global pressures, the three initiatives felt the need to envision local sociotechnical solutions.All three initiatives experienced a paradigm shift from talking to acting.Consequently, as a main ingredient of their promoted bottom-up energy transition narratives, the initiatives stood not against, but for something: a local energy vision into the future.Whereas EWS started as a protest group and later offered a grid repurchasing option due to a window of opportunity, Solarcomplex and BINSE started to promote concrete sociotechnical portfolio ideas right from the beginning.The EWS case therefore strongly underscores how place-bound socio-cultural dynamics can reshape the routes of narratives.
Table 2 provides an overview of narrative factors during the establishment phase.In all of the three cases, the core group shared long-standing, strong relational bonds and used private social networks.To promote their local futures ideas, innovators of all three initiatives talked to friends, colleagues and neighbors and engaged in direct local campaigns and distributed flyers and brochures [38,58].Using citizens' capital to finance their initiatives was seen as the central mechanism to push for local energy transitions.This linked civil society to the narratives as a cornerstone for putting more sustainable local futures into practice.
Innovators pointed to a local alternative, offering participative pathways to change local energy futures with citizens' capital that was invested in local and renewable options that symbolized a local energy transition towards sustainable futures.Almost inevitably, the initiatives became counter-narratives in their own right that stood against Germany's centralistic and top-down-organized energy policy.
At their heart, the initiatives set the conditions under which they, driven by narratives, appeal to civil society's capacities (time, funding) to democratically participate in public decision making on local energy futures.Civil society translated the global to the local.Furthermore, EWS is a good example of how civil society defends transition narratives by appealing to democratic institutions.This is a powerfully mobilizing message in itself.
Moreover, the three early initiatives have in common that they first attached themselves to public institutions.Again, this underlines the importance of place-bound social capital.This strategy aimed to help the initiatives to attract attention and to gain trust, since local public bodies enjoyed prominence but underscores their place-bound dependence during their establishment stage.
• EWS. From 1999, after the liberalization of the German electricity market (due to EC directive 96/92), EWS's next step was to distribute electricity not only in Schönau but all over Germany. EWS also started promoting the idea of local bottom-up energy transitions and the extension of its grid and its organization to neighboring communities and cities like Titisee-Neustadt (further natural gas and electricity concessions started in small communities: on 1 January 2011 in Fröhnd, Schönenberg, Tunau, and Wembach; further concessions followed on 1 January 2012; on 1 January 2013 followed Wieden, Aitern and Utzenfeld, as well as Schönau-Aiterfeld as part of the city of Schönau [53]). Citizens from neighboring communities were given a say in the city council and participated in the capitalization of the operated electricity grid, which they, like EWS, bought from the former operating utility.
• Solarcomplex. Solarcomplex continued to extend its regional project portfolio [59]. This concerns also the collaboration with small neighboring towns like Engen, where passive-energy house settlements relying on photovoltaics and wood pellet heating were built [60]. It makes particular use of photovoltaics and bioenergy heating systems, as professionalized during the establishment phase, but also of a biogas plant and waste heat utilization (2018). In August 2012, an industry representative group, IG Hegauwind, was founded to escape discursive deadlocks in the regional wind sector. Since existing regional wind zoning rules were not sufficient, their own wind measurements started in 2013 [61] (founding members of IG Hegauwind were the Bürger-Energie Bodensee e.G., EKS AG, Gemeindewerke Steißlingen, Solarcomplex, Städtische Werke Schaffhausen/Neuhausen, Stadtwerke Engen, Stadtwerke Konstanz, Stadtwerke Radolfzell, Stadtwerke Singen, Stadtwerke Stockach, Stadtwerke Tuttlingen and the Thüga Energie [61]).
• BINSE. In 2004, BINSE participated for the first time in the national solar and photovoltaic competition (German: Solarbundesliga), which aims to honor innovative communities. This platform helped BINSE to share experiences and refine its future vision [38] (between 2002 and 2006, BINSE realized 50 photovoltaic and solar heat projects [62]; later, in 2009, BINSE realized a 10 kWp photovoltaic project in the neighboring district of Halden-Hagen on the rooftop of the Protestant-Lutheran Peace Church Halden-Hagen [63]; in 2010, a solar recharging station for e-mobility was installed; furthermore, since 2012, BINSE has installed 20 wood-pellet-based heating systems and organized a platform to gather pellets). For instance, in 2012, a pioneering wood-pellet-based heating facility was constructed in the parish house of the local Berchum Protestant church [64], which obviously became BINSE's experimental incubation room (interviews 5, 34). Soon BINSE started other projects that did not rely on cooperation with local churches.
Non-Place-Bound Narrative Factors
EWS interviewees underscored that the (regime-level) liberalization of the German electricity market in 1998 allowed the feed-in of small quantities of electricity into the grid and free choice of utilities [53]. It potentially meant that clients who had voted against EWS during the local referendums in 1991 and 1996 could decide to change their electricity distributor, which would have caused EWS to go bankrupt and lose the grid (interviews 1, 10, 20). To protect its electricity grid, the initiative's next step was to promote the distribution of electricity not only in Schönau, but all over Germany, to compensate for potential losses. Interviewees report that, in 2011, Fukushima had a strong catalyzing effect and EWS won more clients after this (landscape-level) catastrophe (interviews 1, 16, 17).
From 2002, Solarcomplex has actively pushed the narrative of wind energy as the only option for Baden-Württemberg to realize its energy transition [59], since wind was seen as the most efficient and effective way to produce renewable electricity [56] (interview 6) (in 2017, three wind turbines which had operated since 1996 were bought in Renquishausen, and a small wind park (six turbines) was inaugurated in Verenafohren; projects are also planned in Bonndorf and Linach [59]). Wind energy projection remains a serious barrier, since it still faces public, NGO, and political resistance from actors aiming to protect the natural landscape in its current form, as revealed in interviews. A (landscape-level) argument used by Solarcomplex is that Baden-Württemberg is the German state most affected by the nuclear phase-out, since it had the highest shares of nuclear energy before Fukushima [56]. A (regime-level) change of governing party, from Christian Democratic to Green, was necessary to push wind energy in the region; this occurred in 2010 (interviews 4, 5, 7, 12).
Unlike those involved in EWS, the BINSE-actors were never part of the anti-nuclear movement or the environmental movement that had evolved in Germany since the 1970s.In 2006, the directors of BINSE declared their renewable energy vision for 2050.Fossil and nuclear energy production, so they said, had led to climate, environmental, and political crisis, while solar energy is free and always available, even in Berchum [65].It seems the narrative shifted ex-post towards an anti-nuclear argument (interviews 19, 44) after Fukushima.
Place-Bound Narrative Factors
EWS's green energy distribution strategy broke with the narrative of the local community as a center of transition, which had been promoted during the establishment phase and was framed as the symbolic energy politics of grassroots organizations (interview 1) (As it turned out, nobody left EWS after the liberalization of the German energy market and the distribution business grew [66]).As a reaction to the criticism over the surprising diffusion pathway, EWS started to export its participative community model to neighboring communities like Titisee-Neustadt since 2011 (interview 17) (In 2015, the city of Titisee-Neustadt faced a lawsuit from the Federal Cartel Office, which claimed it ignored other regional applicants for the grid concession.In 2017 the city of Titisee-Neustadt won the case because the then consulting law firm made mistakes [67]).Further, a support program for small scale electricity production (photovoltaics) was set up by EWS to sustain private, efficient energy production; this was financed by electricity tariffs, voluntarily payed by EWS electricity customers [52].In so doing, the company tried to stick to its original narrative to push for bottom-up transitions, financed by citizens.
Contrary to the Solarcomplex 2002 study on the potential of regional renewable energy production, the company soon hit the limits of rooftop energy generation (interviews 4, 6, 7). As a consequence, ground-mounted photovoltaic systems with much higher energy production rates were discussed; they have been used since 2006 (interviews 11, 12). A discussion about Solarcomplex's ground-mounted photovoltaic systems led to a food versus fuel controversy with regional farmers because of the excessive land use of these systems. As a result, Solarcomplex avoided siting on farmland so as not to contradict the narrative of local value creation [38,58] (interview 26) (a similar discussion about Solarcomplex's bioenergy systems led to their stagnating regional adoption since they became too wood- and biomass-consuming (interviews 2, 4, 6)). The bioenergy village narrative faced the same barrier. As explained by an interviewee (interview 6), Solarcomplex began to forge alliances to gain support for wind energy in the Hegau region. This strategy, illustrating the narrative 'we face resistance, but together we will protect our idea', did prove effective (interviews 2, 36, 37). Nevertheless, the cooperation with IG Hegauwind represented an admission of sorts that it would be impossible for Solarcomplex to pursue the goal of a renewable energy region alone and that cooperation with experienced municipal utilities was necessary [63,68].
Networks helped to establish contacts that later turned into projects (interviews 3, 18, 19, 35) and helped BINSE to discover new networks [57]. By 2003, Berchum was already publicly called a solar village [65]. Even the mayor of Hagen and a member of the Bundestag paid BINSE projects frequent visits. This shows how the city district gained popularity. The decisive organizational change in the BINSE diffusion pathway, compared to earlier, occurred when the initiative was opened up to citizens, who furthered adoption [38]. Other German pioneer projects had demonstrated that change towards sustainable energy production is possible; the Berchum strategy was therefore to save energy and simultaneously increase energy production with renewables [65]. This Berchum narrative builds on locally gathered experiences showing that such a transition is possible with citizens' funding.
Adopters and Bottom-Up Energy Transition Narratives
Whereas narratives mostly circulated among the core group of innovators at the start, the narrative distribution pattern changed in the adoption phase and began to involve a broader majority of adopters. EWS clients living far away from the small city of Schönau, in bigger cities like Essen, Hamburg or Cologne, describe EWS as a small community successfully rebelling against nuclear monopolies and changing the German electricity market by pushing for local change. They further stress that buying EWS electricity contributes to the local and democratic approach of this company. This is remarkable, since EWS started green electricity distribution in 1999 by buying green electricity in Norway and distributing it in Germany. This is an activity which must be considered spatially detached from Schönau and which does not differ from the operational models of other utilities distributing green electricity.
Also, clients of Solarcomplex (interviews 13,22,29,30,31,37) highlight the importance of citizen-funded regional economic circles and local value adding and local sustainable change.However, in contrast to interviewees of the other initiatives, the Solarcomplex clients strongly emphasize the benefit of green and secure investments.Therefore, these narrative reproductions shift away from a sole focus on sustainability towards green economy rhetoric via local change.
In contrast to the above example, economic motives play only a partial role in the BINSE narrative, even though one interviewee (interview 34) highlights economic motives. Another interviewee (interview 33) remembers the oil crises of the late 1970s/early 1980s and stresses the need for more ecologically and technologically sound solutions. This strong technology focus differs from the narratives reproduced by interviewees of the other initiatives. However, the reproduced narrative of citizens' contribution to the community seems to dominate in interviews with clients of BINSE (interviews 33, 34), which also included the local church community (interview 33).
Intermediate Results
What the three initiatives share is a moment in which their business models began to work and innovations gained momentum, so that the envisioned energy transitions accelerated and evolved in spatial patterns.During this process, the three initiatives share topoi of protection by adoption.Narratives have in common that they argue for the protection of their initial success with processes of innovation adoption (EWS-grid, Solarcomplex's wind energy, BINSE's new benchmarks).Finding ways to overcome material, natural, technical and bureaucratic barriers is therefore the narratives' climax stressed by innovators.
Fukushima represented a non-place-bound argument which catalyzed transitions.For EWS this catastrophe was, again, proof for its anti-nuclear strategy.Both Solarcomplex and BINSE used Fukushima as an argument to justify their newly pushed adoption pathways, whereas place-bound arguments against barriers are represented by the figure of cooperation and social networks as means to overcome barriers.
Table 3 provides an overview of place-bound and non-place-bound narrative factors during the adoption phase on different levels.
Bigger projects made new financing schemes (and bank loans) necessary. Instead of relying solely on symbolic, local, and voluntary community action like the early initiatives, the initiatives' narratives transformed. They now transmitted a vision of steadily growing and professionalizing firms that offered citizens an option to democratically contribute to the German energy transition. Citizens were offered the opportunity to invest in shares, stocks or electricity products as a way to democratically choose energy visions [38,57,58].
Interviews with adopters reveal that contact with, trust in, and a strong bond and identification with initiators are important factors in whether the narratives of initiators are reproduced by clients. This also holds true for interviewed clients of BINSE and Solarcomplex. These findings suggest that trans-locally reproduced narratives refer to key innovators. Furthermore, all interviewees of the initiatives refer to the initiatives' democratic bottom-up character, supported by citizens' engagement. The framings of the citizens' role in local energy transitions described in interviews vary, however, due to the initiatives' organizational differences and spatially differing patterns of adoption. At the same time, there seem to be individual differences in motives. However, the achievements themselves were crafted into a narrative of success, showing citizens that change is indeed possible [38,57,58].
Points of Contention on Landscape Level during the Establishment Phase
As highlighted elsewhere, non-place-bound narrative factors represent civil society's points of contention to engage in bottom-up energy transitions [17][18][19][20].In this article, we identified the nuclear disaster Chernobyl and dangerous climate change as basic elements of the compared bottom-up energy transition narratives.As expected and as asserted by the cited environmental narrative literature, all interviewees picked up on such points of contention as highlighted in Section 4.1.2.
It could be assumed that it is the nature of bottom-up transition initiatives on niche level to relate to regime reconfigurations as non-place-bound factors.The case of the EWS narrative is representative for such a perception: EWS heralded its strong opposition to regime actors (government and industries) and thereby became a counter-narrative.In this article it was shown that bottom-up energy transition narratives oppose those of local traditional utilities that have been established for a considerable time period.They can therefore be understood as counterparts of the narrative of successful industrial revolution [29], even though it should not be forgotten that the growth of a centralized "energy-production systems is actually in and of itself a sociotechnical success story" [30] (p.519).However, even though EWS employees contrast EWS to other domestic renewable electricity distributors (interviews 9, 10), there is little difference in their core activities.Identical arguments-against centralistic energy systems-were put forward by interviewees of BINSE and Solarcomplex (interviews 1-3, 7, 8, 11, 12, 36).
Nevertheless, it should be recalled that regime reconfigurations were the very reason for establishing Solarcomplex and BINSE.Only the introduction of the German feed-in tariff (on regime level) made their business model possible.We therefore assume that stressing the counter-narrative forms part of the strategic repertoire of bottom-up initiatives, whether integrated at regime level or not.Consequently, it could be argued that it is the essence of bottom-up energy transition narratives to artificially highlight the tension with the sociotechnical regime and landscape level.
Readjusting the Catastrophe during the Adoption Phase
We identified readjustments in terms of how the analyzed narratives frame global pressures.In regard to the adoption phase, our findings contrast with the argument that transition narratives are "tied together by a central theme" [69] (p.357) for especially two reasons.First, problem orientations are not fixed.They can be adopted ex-post as arguments for local action during the adoption phase.All of a sudden, the anti-nuclear position became an argument for BINSE and Solarcomplex, as climate change became for EWS.Second, problem orientations can become discarded as was shown for the case of BINSE: the narrative shifted ex-post towards an anti-nuclear argument and the idea of integrity of creation lost ground.The same was reported by EWS interviewees for the establishment phase: the anti-nuclear idea that was basic to this early community became less prominent in attempts to make the idea attractive to other, less political Schönau citizens.This shows that bottom-up energy transition narratives can diverge from their original global problem orientation.Consequently, bottom-up energy transition narratives can be tied together by different (non-place-bound) themes at a given time.
The quality of the nuclear disasters Chernobyl and Fukushima should nevertheless be considered. Unlike climate change, Chernobyl and Fukushima did not require planning techniques to predict the disasters' local effects. They immediately became real in places and affected people's daily lives. This might explain why BINSE and Solarcomplex began picking up on the anti-nuclear argument. In the case of EWS, Fukushima served as a consolidation of the firm's original narrative, which had been built on the Chernobyl disaster.
However, methodologically it is difficult to assign narrative factors as either non-place-bound or place-bound. According to interviews, adopters also picked up on Fukushima. Local narrative factors (adopters, people living in places) therefore influence the social construction of global pressures and must be understood as perceiving and articulating links between the global and the local.
Establishing Place-Bound Stories of Civil Society's Success
Bottom-up energy transition narratives metabolize pressing global problems into local, sociotechnical futures through civil society's actions [11,12,[17][18][19][20]. Four central place-bound aspects were identified when comparing bottom-up energy transition narratives. First, the analyzed narratives have in common that they underscore their locally materialized success. The idea of centralized electricity distribution was opposed with more sustainable, local forms of energy production and consumption. Consequently, bottom-up sociotechnical innovations became sociotechnical narratives of success in their own right. This soon drew increasing numbers of experts, celebrities, and politicians into the renewable energy communities, as interest in the initiatives continued to grow. Highlighting sociotechnical (place-bound) achievements is therefore a central strategy of bottom-up energy transition narratives.
Second, and this is related to the assertion made above, the analyzed narratives also politicize the discursive construction of sociotechnical energy transition places. Renewable energy technologies became symbolic for bottom-up transitions and developed into spatial brands of possible energy futures (e.g., solar city, renewable energy region). Such brands transmit specific meanings, politically colonizing local discourses and becoming strong narratives. For instance, Schönau was called a 'solar city' (referring to the solar roof of the local Lutheran church in Schönau); Berchum was soon described as a 'solar village'. Solarcomplex promoted a '100-percent renewable energy region' [38,56]. This shows that, even though Schönau and Berchum do not generate much solar energy in relative terms, and even though the Western Lake Constance region is not yet a 100-percent renewable energy region, these framings already point to what ought to be (a tradition) in the future. This politicization effect also holds true for sites (e.g., wind parks, photovoltaic roofs), spatially indicating new ways of energy production.
Third, representing civil society and actively highlighting its role in bottom-up energy transitions, the three compared narratives have in common that they frame civil society as the main resource of success. This is in line with literature on actor roles in transitions [21]. Moreover, and as likewise underscored by Frantzeskaki et al. [19], interviews highlighted social capital such as knowledge and ideas as central means for successful bottom-up energy transitions. This is further confirmed by interviewees' references to social networks as seedbeds for local change.
Fourth, interviewed innovators often stressed voluntary action.For instance, BINSE owes its success to the steady and voluntary engagement of pensioners, who used all their available time to push the adoption of many projects (interview 3).The same voluntary character assigned to the analyzed narratives was reported by interviewees of EWS and Solarcomplex (interviews 1, 2) (moments of deprivation were elsewhere highlighted [38,57,58]).This stressed the narrative of bottom-up energy transitions as a public, voluntary affair without economic aims, instead of a privately mounted, rent-seeking project.We therefore see a storyline suggesting that bottom-up energy transition narratives promote niche protection by activating all available resources.Reported crowdfunding strategies speak for this.This role of protection is completely assigned to civil society in its struggle for just and democratic energy systems, as highlighted in literature [70][71][72][73][74].
Adopting Local Success Stories and the Fixity-Travel Dilemma
During the adoption phase, narrative plots can bifurcate from local contexts because energy transitions materialize in new emerging transition pathways, caused by newly developing and mostly unforeseen dynamics.Third parties confirm that this holds true for EWS and Solarcomplex.In an interview (interview 37), a regional energy expert and Solarcomplex activist interestingly compared Solarcomplex with EWS in this regard.He clearly states that the initial idea was to regionally gather capital, but, confronted with the need to keep up with the adoption benchmark, financial needs made organizational changes necessary.Consequently both companies gave up their original argument of locality of change (Figure 1).Comparing the adoption phase with the initial idea of bottom-up energy transitions as promoted in the analyzed narratives, a couple of controversies can be identified.First, by falling short on the promotion of citizen's participation, energy transition narratives risk promoting sociotechnical systems that differ little from the sociotechnical systems from competing, rent-seeking energy industries during the innovation adoption pathway.Therefore, by disarticulating citizen's participation, bottom-up energy transition narratives advocate de-localization: initiatives promote (place-detached) green electricity products or shares as a citizens' democratic voice for sustainable energy systems and yet a voice against the status quo of energy industries.Paradoxically, the reproduction of bottom-up energy transition narratives by adopters contrasts with these findings.All interviewees refer to the initiatives' participative character, supported by citizens' engagement.Innovators enjoy trust and status in their groups [63], identification with the pressure group seems therefore an important factor for successful adoption of bottom-up energy transitions, whether or not they are diverging from original narrative contents.
Nevertheless, BINSE should be understood as a model for enabling a local but deep energy transition, aiming at neighbor cultures but also seeking to attract regional and national attention for Berchum.Seen from this perspective, BINSE truly sticks to its original narrative of local change.
Second, the analyzed narratives were iteratively and deliberatively changed when innovators began reflecting upon barriers and specific side effects of ongoing energy transition pathways. A good example is the cooperation of EWS with the city of Titisee-Neustadt in order to balance out the distributive activity with more local engagement, as originally stated. EWS interviewees point to this controversy, expressed in company members' discomfort with the new economy of scale of electricity distribution. EWS, they claimed, only invested in Schönau but not in the surrounding region due to the small returns (interviews 9, 16, 17). EWS's support for electricity distribution, so it seems, went hand in hand with a trade-off between citizens' engagement and a locally crowd-funded financing scheme. Another example is provided by Solarcomplex's adjustments of its biomass adoption path. These examples show how reflection influenced the adoption pathways. Even though controversies are a less reported issue in BINSE interviews, interviewees describe that projects were realized in the beginning which could not be opposed by the city of Hagen or the church (interviews 3, 33, 34). We relate this to cultures of learning which become institutionalized in social contexts. Nearly all interviewees related this to interactions with adopters in social networks (interviews 1-19, 36).
To sum up, bottom-up energy transition narratives constantly face what we call a fixity-travel dilemma. Adoption pathways transform into new sociotechnical narratives. If they diverge strongly from their original spatial context and disarticulate their initial message, they ultimately run the danger of becoming stories of controversy. Innovators find themselves in a steady reflection upon the contradiction between what is and what ought to be. This forces them to balance reasons. Innovators have to decide whether or not to stick to a specific adoption path and employ strategies accordingly. Protecting their local source of identity (the initial energy transition success) by promoting adoption is one way to deal with changing pathways. Another way of maneuvering nimbly is re-designing transition pathways so that they become more suitable and compatible with the employed narratives. This means spatially non-emerging bottom-up transitions materialize in less-conflicting narratives of their own, as is the case with BINSE.
Conclusions
The qualitative method proposed here, which combined within-case and cross-case comparisons, helped to understand what place-bound and non-place-bound factors make up bottom-up energy transition narratives and how those factors relate to each other during the establishment phase of bottom-up energy transitions. It was shown how place-bound and non-place-bound narrative factors change in their relation to each other during the adoption phase of bottom-up energy transitions.
We highlighted the character of energy transition narratives in several respects. Generally, bottom-up energy transition narratives help to establish local sociotechnical innovations. During the establishment phase, non-place-bound narrative factors represent civil society's argument to engage. At the same time, narratives suggest social proximity and underscore place-bound factors which enhance transitions. They promote participative local, renewable energy futures and highlight civil society's role in "holding the future together" [11] (p. 1613). Sociotechnical artefacts become an object of opposition; material references therefore become an important ingredient of bottom-up energy transition narratives and a steady reference point of success and confirmation that change is indeed possible.
When bottom-up energy transitions gain momentum during the adoption phase, narratives travel easily beyond their local origin in order to be reproduced in different local settings.They can be understood as entrance cards for energy transitions when travelling from place to place and when attracting adopters.This makes bottom-up energy transition narratives an important link between districts, towns and areas.However, bottom-up energy transition narratives can cause path dependencies when they force innovators to follow the envisioned beaten track.At the same time, they become products of quick materializing changes and disarticulate citizens' participation.In this regard, non-place-bound factors play an important role in enhancing transitions.
During the adoption phase, bottom-up energy transition narratives might begin to discard their place-bound origins (e.g., citizens' focus vs. business focus) and indirectly begin to promote sociotechnical systems that differ little from place-detached energy systems of the status quo.This is when energy transitions step out of a niche.At this point, bottom-up energy transition narratives face a travel-fixity dilemma which forces innovators to take uneasy either/or decisions and to compromise.This can also be regarded an expression of niche-regime level tensions [13][14][15].
In addition, even though energy transition narratives present themselves as being for something, they must be understood as counter-narratives, opposing those of local traditional utilities that have been established for a considerable time period. Energy transition narratives can therefore run the danger of becoming polemic and polarized: here the good, there the evil. Furthermore, even though energy transition paths disarticulate original messages, adopters, when reproducing energy transition narratives, strongly refer to the original core message of the narratives: citizens' participation. In this article, we related this to adopters' strong identification with innovator pressure groups.
Nevertheless, and in our view this is paradoxical, the detachment from the promoted political message of citizens' participation in energy questions makes narratives strong: becoming increasingly non-place-bound, bottom-up energy transition narratives become assigned to an increasing number of adopters from different places and thereby begin to "stretch policy spaces" for emerging bottom-up energy transitions [10] (p.1026).This stabilizes domestic energy transitions which build on bottom-up energy transitions.
Figure 1. Place-bound and non-place-bound factors during the establishment phase and the adoption phase according to the sociotechnical landscape-, regime- and niche-level.
Table 1. Cross-case and within-case comparison scheme.
Table 2. Overview of narrative factors during the establishment phase.
Table 3. Overview of place-bound and non-place-bound narrative factors during the adoption phase on different levels.
"year": 2018,
"sha1": "c4e55da6e9e697226806b2850e054cabe8dee6bb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/10/4/924/pdf?version=1525344784",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "70e65873ffaa34ac88bef3231764cda6fa63ff6d",
"s2fieldsofstudy": [
"Environmental Science",
"Sociology"
],
"extfieldsofstudy": [
"Economics"
]
} |
Incorporating Acupuncture Into American Healthcare: Initiating a Discussion on Implementation Science, the Status of the Field, and Stakeholder Considerations
Introduction: The field of implementation science is the study of methods that promote the uptake of evidence-based interventions into healthcare policy and practice. While acupuncture has gained significant traction in the American healthcare landscape, its journey has been somewhat haphazard and non-linear.
Methods: In June 2019, a group of thirty diverse stakeholders was convened by the Society for Acupuncture Research with the support of a Patient-Centered Outcomes Research Institute Eugene Washington Engagement Award. This group of stakeholders represented a diverse mix of patients, providers, academicians, researchers, funders, allied health professionals, insurers, association leaders, certification experts, and military program developers. The collective engaged in discussion that explored acupuncture's status in healthcare, including reflections on its safety, effectiveness, best practices, and the actual implementation of acupuncture as seen from diverse stakeholder viewpoints.
Objectives: A primary goal was to consider how to utilize knowledge from the field of implementation science more systematically and intentionally to disseminate information about acupuncture and its research base, through application of methods known to implementation science. The group also considered novel challenges that acupuncture may present to known implementation processes.
Findings: This article summarizes the initial findings of this in-person meeting of stakeholders and the ongoing discussion among the subject matter experts who authored this report. The goal of this report is to catalyze greater conversation about how the field of implementation science might intersect with practice, access, research, and policymaking pertaining to acupuncture. Core concepts of implementation science and its relationship to acupuncture are introduced, and the case for acupuncture as an Evidence-Based Practice (EBP) is established. The status of the field and current environment of acupuncture is examined, and the perspectives of four stakeholder groups (patients, two types of professional practitioners, and researchers) are explored in more detail.
Context of This Discussion
In June 2019, under the auspices of the Society for Acupuncture Research (SAR) and with Patient-Centered Outcomes Research Institute (PCORI) grant funding, a diverse group of stakeholders met to initiate conversation about the dissemination and implementation of acupuncture within American healthcare. The selected stakeholders represented a cross-section of the professional field of acupuncture, including patients, clinicians, researchers, payers, philanthropists, and representatives of local, state, and federal governments.
In this article, the perspectives of four stakeholder groups are explored in more detail, namely patients, two types of professional practitioners, and researchers. This subset of the full collection of stakeholders was chosen because of the authors' knowledge bases, and because a thorough exploration of these and the other categories is precisely the goal for future works. The authors of this paper continued to meet and expand on the discussion for nearly a year after the initial meeting. This work represents an attempt not just to describe the meeting and initial findings, but also to demonstrate how the concepts outlined at the in-person meeting can inform future work that can further connect the fields of acupuncture and implementation science. To our knowledge, this was the first gathering of such a broad and diverse stakeholder population with the intent to discuss the intersections of acupuncture and implementation science. It is the hope of all stakeholders that this document serves to catalyze greater conversation about how issues related to practice, access, research, and policy-making in acupuncture can be informed by the field of implementation science.
Introduction to Implementation Science and Its Relationship to Acupuncture
The field of implementation science is the study of methods that promote the uptake of evidence-based interventions into healthcare policy and practice. 1,2 This field can be divided into dissemination research and implementation research. Dissemination research focuses on identifying effective ways to spread knowledge on evidence-based practices (EBPs) to relevant stakeholders. 1,2 Implementation research, in contrast, is concerned with the development of "implementation strategies" that can be used to increase the integration of EBPs within clinical settings. 1,2 Acupuncture is an EBP that has been rigorously tested for a number of clinical applications; however, to date, the integration of acupuncture into modern healthcare systems has not been guided by implementation science. To support and optimize this effort, there is a need for acupuncture researchers, clinicians, and administrators to be more aware of concepts central to the field of implementation science. Collaboration among varied stakeholder groups is needed especially for acupuncture, as its optimal methods of application, training, and scope are yet to be defined.
Effective dissemination or implementation of an evidence-based practice within an established healthcare system may hinge on how the information is packaged, and whether the approach is systematic and comprehensive. Conceptual models and frameworks from the field of implementation science have been developed for various purposes, and are useful in guiding the systematic development of dissemination and implementation strategies in complex real-world settings. 3 Process models are particularly useful when describing or guiding the process of translating research into practice, and outline the phases or stages of the research-to-practice process, from the discovery and production of research-based knowledge to the implementation and use of research in various settings. 3 Other uses of implementation science frameworks include: identifying determinants of successful implementation of an EBP (facilitators and barriers); identifying implementation strategies that optimize or contend with these determinants; or to help evaluate whether dissemination and/or implementation strategies were successful. [4][5][6][7] Conceptual components that influence the determinants include staff familiarity and training needs surrounding the EBP, patient acceptance, and structural factors such as work-flow dynamics, billing, resource allocation, and time management issues. 4 Advocacy within relevant departments, the presence of an expert in the EBP, and adequate funding are also requisite. 8 When, however, the EBP to be incorporated is both unfamiliar and challenges assumptions of the existing paradigm and payment models, the barriers to dissemination of information about, and actual implementation of the EBP become more challenging. Translational research suggests that comparative effectiveness and cost-effectiveness studies conducted in real-world settings are sufficient for the uptake of a new intervention into usual, allopathic care, at least when that intervention is consonant with the existing medical paradigm. [8][9][10][11][12] Providers from the field of complementary and integrative health (CIH) may face additional challenges when entering mainstream clinical care settings. Unique barriers for CIH treatments exist. These include: cost of care provision, payment pathways, reimbursement potential, compliance, staffing of the EBP, patient preferences and individual belief systems, and other variables.
Acupuncture is an example of an EBP that has been rigorously tested and which is regarded as both clinically effective and cost-effective for multiple conditions, but that has not yet seen the incorporation into mainstream healthcare that one might expect. Despite the substantial body of basic science and clinical research in acupuncture, the practice of acupuncture continues to be steeped in a metaphorical language that is culturally foreign to the United States and to the biomedical paradigm. 13 This leads to skepticism regarding the scientific basis of acupuncture and presents an essential challenge for integration into a conventional healthcare setting.
Defining "acupuncture" can be equally challenging, as many practitioners use acupuncture as part of a comprehensive system of health practices, but others use it as an isolated modality. Some practitioners utilize tailored treatment strategies, while others use protocolized treatment strategies. Many styles of acupuncture application are also seen in practice. Acupuncture may therefore be considered a complex, multifactorial set of procedures. 12 Returning to the core frameworks of implementation science, we can see that identification of facilitators and barriers, creation of strategies to optimize or contend with these determinants, and the consideration of evaluation models for quantifying and qualifying "success" demand novel thought.
Distinguishing Underlying Rules for the Dissemination and Implementation of Acupuncture From Specific Examples of Success
Participants of the working group were asked to identify and discuss examples where acupuncture has been successfully implemented within a multidisciplinary medical setting. The group discussed the components of the EBP (acupuncture), which included: the departments or settings into which acupuncture was implemented; the populations to which acupuncture was delivered; the condition(s) for which acupuncture was applied; the way in which the finances were considered; and others.
Distinguishing between the components of the EBP and the implementation strategies is often a fundamental challenge in implementation research. 6 More specific examples of this distinction emerged during the working event. In one instance it was noted that acupuncture had improved uptake in an inner-city, African American community after engaging elders from local churches, treating them, and then having those individuals be ambassadors to the rest of their community.
The underlying rule might be that when engaging community patient groups, practitioners should seek trusted individuals (aka "champions") within that demographic group who can allow entrée into the broader community with which they wish to connect. 14 In another setting, a legislative success for acupuncture was achieved in Washington State by shifting the focus from emphasizing only the "weak to moderate evidence base" for efficacy (due to low sample sizes) to highlighting that the evidence still shows strong clinical significance in the domain of effectiveness. 15 Focusing on the effectiveness of acupuncture (how it performs in the real-life setting) rather than the efficacy (how it performs relative to placebo or control in the research setting) was more useful to legislators and regulators trying to solve real-life problems. This lesson reveals an underlying rule for acupuncture dissemination and implementation: the "slice of the evidence" presented must be most relevant to the decision-maker in question.
In a third example provided after the event, it was noted that in the military, when addressing barriers to the implementation of complementary and integrative medicine (CIM) into the VA and military healthcare systems, collaborators developed a strategy of teaching a variety of providers (medics, corpsmen, nurses and physicians) a protocol-driven, auricular acupuncture technique coined "battlefield acupuncture." Over a 19-month period, 2,712 providers were trained. This was used to expose clinicians to the benefits of acupuncture for pain management, and opened the door to the VA actively engaging medical and licensed acupuncturists in the care of their patients. 16 The underlying rule was that creating familiarity and personal experiences with treatment bridged a gap for this specific group of providers.
Current Status of Acupuncture in Implementation Research
Implementation research is defined by the NIH as "the methods to promote the systematic uptake of clinical research findings and other evidence-based practices into routine practice, and hence improve the quality and effectiveness of health care." 17 While the use of "implementation strategies" to improve access to acupuncture in usual care settings has been done in practice, few have formally evaluated the effectiveness of implementation strategies on implementation outcomes. For example, stakeholder engagement and partnering with "champions" have been used, yet it remains unknown whether these or other strategies result in either improving the perceived acceptability, appropriateness, or feasibility of the implementation strategy in usual care medical settings, or for increasing the adoption of acupuncture specifically in these settings for a given condition.
Implementation strategies typically map directly to important barriers or facilitators, so identifying these determinants of successful implementation is often an important first step in implementation research. Developing and testing implementation strategies in real-world settings is needed, with special focus paid to how the strategy leads to a measurable outcome.
Most of the literature on acupuncture use in usual-care medical settings has been descriptive, for example, providing examples of how acupuncture has been added to a hospital or community health center. [18][19][20][21] The Veterans Health Administration has described the use of acupuncture and identified barriers and facilitators to a wider range of CIH treatments for pain. Through interviews of 149 key stakeholders across eight VA locations, key facilitators (eg, program champions, leadership support, patient attitudes) and barriers (eg, difficulty hiring CIH providers, funding, coding/documenting CIH use, physical space) to successful CIH implementation have emerged. 22 Additional research is needed to elucidate barriers or facilitators that may be unique to acupuncture for pain or other health conditions. Clinical trials evaluating information on implementation and effectiveness outcomes are starting to emerge in the literature. For example, a trial evaluating acupuncture's effectiveness for improving pain and symptom management in hospitalized patients with pain also interviewed stakeholders on barriers and facilitators to implementing acupuncture on busy inpatient hospital floors. 23,24 They found that stakeholders perceived the following as important barriers: lack of setting-specific data (acupuncture for pain in the inpatient setting), provider time constraints, financial barriers including out-of-pocket costs to patients and lack of profit to the hospital, and uncertainty about whether acupuncture would be appropriate or helpful for a particular inpatient (and when during the inpatient stay it should be applied). 24 Facilitators to acupuncture's use for pain and symptom management in a busy inpatient setting included the opportunity for clinicians to observe the benefits of acupuncture among inpatients in their care. They also included witnessing patient demand for acupuncture, clearly seeing the ability of acupuncture to add value, and expanding treatment options for symptom management in the inpatient setting. 24 Lastly, their qualitative findings suggest a number of strategies thought to promote successful implementation (eg, acupuncture program champions, provider education), although the effectiveness of these strategies for implementation outcomes (eg, adoption) should be tested in future prospective clinical trials.
We are unaware of implementation studies that have prospectively compared two or more implementation approaches. A comprehensive list of 73 implementation strategies organized into nine domains (the Expert Recommendations for Implementing Change (ERIC) taxonomy) is an important guide for selecting and combining implementation strategies. 25 Implementation strategies that address multiple barriers at multiple stakeholder levels are thought to be better than single-strategy approaches, yet comprehensive strategies with many components may be too expensive or time-consuming to be feasible. We are unaware of any implementation science trials that have compared different sets of implementation strategies when implementing acupuncture. Such trials may be needed to help determine high-yield implementation strategies that can inform and focus future implementation efforts. For example, should implementation strategies focus more on the education of clinicians, patients, or both? Is the barrier to implementation actually more defined by the electronic medical record in use and its limitations, or are the most profound obstructions to be found in the domains of administration or reimbursement? Is the engagement of policy makers or hospital administrators most essential? The widespread adoption of acupuncture for pain in usual care settings may, in part, hinge on answering these important questions. Future research should evaluate the use of different strategies to increase the adoption of acupuncture into usual care settings.
Status of Acupuncture as an Evidence-Based Practice in the United States
In recent years, acupuncture as practiced in the United States has emerged as an evidence-based therapy in a growing number of multidisciplinary guidelines, predominantly surrounding its use for pain. In 2015, the Joint Commission revised its guidelines for the management of pain to include nonpharmacological strategies, including acupuncture. In 2016, the American Society of Clinical Oncology included acupuncture for chronic pain management as part of its adult cancer survivor care recommendations. Acupuncture was noted to be effective for low back pain by the Agency for Healthcare Research and Quality in 2016; this informed the 2017 clinical practice guidelines of the American College of Physicians, which recommended acupuncture for acute and chronic low back pain. Similar recommendations for low back pain were adopted in the 2017 clinical practice guidelines of the Department of Veterans Affairs. Most recently, in January of 2020, the Centers for Medicare and Medicaid Services approved the use of acupuncture for chronic low back pain.
A recent article by Birch et al. surveyed trends in the inclusion of acupuncture into clinical practice guidelines and positive references to acupuncture between 1991 and 2017. 26 The results included 2189 positive recommendations for acupuncture within 1311 publications, with an increasing frequency trend. Pain was the most common positive reference with 1486 references relating to 107 pain indications. There were 703 recommendations related to 97 non-pain conditions. The groups making these recommendations were noted to show significant diversity, coming from sources such as government health institutions, national guidelines, and medical specialty groups. The data came from around the world, but were especially abundant in Europe, North America, and Australasia.
The Status of the Field and Current Environment of Acupuncture
The rise in acupuncture's inclusion is attributable to many factors, which are intertwined with one another as briefly described below.
Patient Usage Threshold: While still below the loosely anticipated social tipping point of 25% for full self-propagation proposed by some researchers, the increasing trend in the use of acupuncture no doubt contributes to population uptake. Recent surveys suggest between 1% and 10% of U.S. adults have experienced acupuncture treatments. In 2007, 3.1 million Americans used acupuncture, which was significantly greater than in 2002, when 2.1 million adults received acupuncture treatments, [27][28][29] and there has been a steady linear increase in acupuncture usage from 2002 to 2012, 30 which continues to the present day. Studies of Complementary and Alternative Medicine use have shown benefits to patient perceptions of hope, health options, and improved healthcare experience. These secondary treatment benefits may also be driving the steady rise in acupuncture's popularity and expansion. 12
Growth of the Research Base: In the past two decades, we have seen the explosion of the acupuncture literature base, 31 which has grown in multiple domains including basic science, randomized controlled trials (RCTs), systematic reviews, and discussions of implementation. In addition to many clinical trials (PubMed lists 3627 RCTs from 1998 to 2016), recent research has focused on better understanding the mechanisms of acupuncture. A broad range of innovative basic research studies have identified numerous biochemical and physiological correlates of acupuncture. 32 Mechanisms such as changes in neural processing in central and peripheral domains, release of endogenous opioids, changes in pain signaling, vasodilation and blood-flow dynamic effects, and others have led to numerous plausible, biological mechanisms for acupuncture. Nevertheless, key gaps in evidence regarding the biological basis of acupuncture points and meridians remain a barrier to acupuncture's perceived credibility and uptake. Further dissemination and implementation of acupuncture is likely to amplify the positive findings and increasingly inhibit criticism from stakeholders unaware of acupuncture research. 13
Opioid Crisis: Above all other influencers, the opioid crisis has been the primary driver of efforts to find nonpharmacologic solutions to pain control, as the harms of current pain management strategies using opioids are recognized as a monumental cause of morbidity and mortality. It is estimated that, strictly from a cost-estimation standpoint, harms from opioids approach 78 billion dollars per year in the United States alone. 33 The evidence for acupuncture as an EBP to specifically address the opioid crisis was summarized by Tick et al. via the Academic Consortium in Evidence-Based Nonpharmacologic Strategies for Comprehensive Pain Care: The Consortium Pain Task Force White Paper, 34 and by Fan, Miller et al. in the white paper, "Acupuncture's Role in Solving the Opioid Epidemic." 35
Increasing Payer Coverage: There is a trend towards the coverage of acupuncture, especially for pain-related conditions. The inclusion in January 2020 of acupuncture into Medicare specifically for chronic low back pain represents a monumental step forward for acupuncture recognition throughout the U.S. Medicaid programs in a number of states now provide some coverage for acupuncture services, expanding access to demographics who would not generally be able to afford self-pay.
Five states (Arkansas, California, Maryland, New Mexico, and Washington) consider acupuncture to be an essential health benefit. Overall, however, few non-pharmacologic therapies were offered under essential health benefit rules, despite recommendations by the American College of Physicians. 36 Veteran Usage: One of the most striking examples of the increase in acupuncture utilization has been in the Veterans Health Administration (VHA). Acupuncture gained inclusion in the veterans' benefit package following the adoption of the Veteran Health Administration's Directive 1137, the Provision of Complementary and Integrative Health. 37 This provides that acupuncture be available to veterans when it is recommended as part of their care plan. The VHA created a qualification standard for licensed acupuncturists in February 2018, so this professional class is able to be hired to provide acupuncture services in VA medical centers. 38 Although acupuncture has been practiced in the VHA for years by physician acupuncturists, chiropractic acupuncturists, and a small group of dually-licensed individuals, there is an expectation that acupuncture use will grow quickly with the inclusion of licensed acupuncturists providing the care. Tracking of usage and satisfaction by veterans over the coming years will reveal if this prediction is accurate. Following the 2019 MISSION Act, any veteran benefit that cannot be provided in a timely manner in relatively close proximity to the veteran requires the veteran be sent to a community care provider. 39 This includes the use of acupuncture services outside the VA walls through contracts between private, third-party administrators and the VHA.
Department of Defense: The use of acupuncture in the Department of Defense (DoD) system has also been growing. A report published in Medical Acupuncture in 2018 retrospectively reviewed acupuncture usage. 40 It found that in 2014, 15,761 patients in the Military Healthcare System (MHS) database received acupuncture. Pain was the most frequent condition treated per this review. Battlefield acupuncture (a type of auricular acupuncture) was the first technique to be most systematically introduced to the DoD and played a significant role in creating inroads for subsequent, more comprehensive acupuncture strategies. An article in Nursing Outlook discusses the history of attempts to find nonpharmacologic pain options within the DoD and VA systems, and discusses implementation procedures for auricular acupuncture to meet said need. 41 A review of integrative medicine usage in DoD military treatment facilities in general was also published in Medical Acupuncture in 2015. 42 This review showed acupuncture to be outpacing other integrative modalities in trend of use between 2005 and 2009. It should be noted that medical doctors and chiropractors have played the largest roles in moving these therapies forward within the DoD, as licensed acupuncturists have only recently gained a foothold as care providers at DoD healthcare facilities.
A similar article from 2018 notes, "for the estimated 270,000 military service men and women who transition each year to the VHA, acupuncture and other integrative therapies are familiar treatment options for conditions such as chronic pain. 40 Previous research demonstrates that these men and women will actively seek and request these therapies. It can be extrapolated that the normalcy of expectation of integrative care services, especially acupuncture, within the DoD and VA health systems drives both general consumer expectations, and lends a gravitas to legitimacy that only military and veteran endorsement can provide. 40 Professional Delivery Group Development: Two primary groups have emerged as driving forces for advancing the existing dissemination and implementation of acupuncture research: the professional licensure group most broadly known as "licensed acupuncturists" and the body of medical doctors (MDs) and osteopathic physicians (DOs) practicing "Medical Acupuncture." The professional licensure group known predominantly as "licensed acupuncturists" was first established in the early to mid-1980s with the founding of the National Certification Commission for Acupuncture and Oriental Medicine (NCCAOM) in 1983, the Accreditation Commission for Acupuncture and Oriental Medicine (ACAOM) in 1983, and the Council of Colleges for Acupuncture and Oriental medicine (CCAOM) in 1982.
The first professional national association for the field was also established around this time and was known as the American Association of Oriental Medicine (AAOM). This professional licensure group has been in development since the establishment of these core professional component organizations and has now produced approximately 38,000 graduates. 34 The licensure group has diverse titling, including "Licensed Acupuncturist," "Doctor of Oriental Medicine," "Acupuncture Physician," "Registered Acupuncturist," "Traditional East Asian Medical practitioner," "Doctor of Acupuncture," and others as determined by state jurisdiction. These practitioners, however, share common core educational pathways. Critical developments are still underway for this professional group (see below), with three states still having no regulation, and Michigan becoming the 47th state to establish a practice act in December 2019.
In 2018, this professional licensure group was officially recognized by the Bureau of Labor Statistics (BLS) as a trackable profession, thus adding governmental endorsement as a strong legitimizing factor to the licensure group. Acupuncturists as a professional group also became seated with the American Medical Association Health Care Professions Advisory Committee (HCPAC) in 2019, engaging this perspective to the national conversation surrounding the coding and billing of medical procedures. In 2015 the American Society of Acupuncturists was founded and as of 2019 has brought together 34 state level professional associations working in the realm of regulation and professional membership. Through this federation style structure, more than 5000 practicing licensed acupuncturists have been brought into organized medicine.
The practice of "Medical Acupuncture," while lacking a specific content definition, has been an important force in engaging already practicing medical doctors to the professional practice of acupuncture. Soulie de Morant first brought acupuncture to the medical community in France in the 1930s. Dr. Paul Nogier published his famous paper in France on auricular acupuncture in 1957 (Loci Auriculomedicinae). 43 In the United States, physicians have been practicing Medical Acupuncture since the 1980s, and the American Academy of Medical Acupuncture was founded in 1987. Most broadly, the term "Medical Acupuncture" has been used to describe licensed physicians practicing acupuncture, regardless of the style of acupuncture practice. Currently, 300 hours of continuing medical education in acupuncture and possession of an active medical license is the standard for physicians to claim acupuncture as a professional practice.
Medical doctors have been of paramount importance for the inclusion of acupuncture into mainstream medicine, and this continues to grow. This group most directly bridges the gap between acupuncture as a novel EBP and mainstream medical settings, having the certification of the dominant licensure group within their home medical domain. This growing interprofessional interest is likely to be somewhat self-reinforcing in promulgating interest in acupuncture.
We have recently seen a push to incorporate "dry needling" into the practices of physical therapists and athletic trainers. Though claims of a unique difference between dry needling and acupuncture continue to be made, dry needling is widely considered to be a subset of acupuncture practice (aka "orthopedic acupuncture") within mainstream, professional medicine. 44 It should be recognized as well that the chiropractic medicine practitioner community has many professionals who also incorporate acupuncture into practice. The Council of Chiropractic Acupuncture and the benchmark-setting American Board of Chiropractic Acupuncture were established in 2005 and have set standards comparable to the Medical Acupuncture expectations.
Chiropractic acupuncture practice has, however, been more difficult to organize and consolidate as a recognizable force in mainstream medicine, legislation, and research, likely due to a number of factors. These include that the chiropractic licensure group is both still largely external to mainstream medical systems (unlike the MD and DO providers), and that the licensure group is not entirely committed to this specific system as its primary modality of practice (as are the licensed acupuncturists). Nonetheless, the presence of chiropractic providers offering acupuncture, and advertising this service even prior to and despite the establishment of its professional standard-setting bodies, created a familiarity with the term "acupuncture" that softened the soil for its general propagation. The combination of the factors above, from the opioid crisis to military usage, synergistically support one another in expanding acupuncture's need, normalcy, availability, legitimacy, expectation of use, service availability, and affordability within the American healthcare system.
Stakeholder Groups
As acupuncture and its influence have permeated American healthcare and been influenced by the factors above, and as we recognize the numerous interested parties in the outcry of need and the programmatic, professional, and financial populations affected by demand, a more systematic description of stakeholder groups becomes relevant. For implementation to proceed in a more scientific pattern, reviewing the obstacles and opportunities for each stakeholder category may be of primary utility. Defining these categories was therefore one of the central efforts of the SAR/PCORI working group.
Participants at the original meeting identified eight groups of initial influence: individual healthcare consumers, policy makers, institutions, payers, researchers, associations, providers, and educators ( Figure 1). This list of stakeholders was considered sufficiently complete to help create a working model with the understanding that each domain could be broken down into further subdomains, and that stakeholder groups were not truly distinct. One individual might be a member of numerous stakeholder groups, and other permutations of categories might be derived. Further, categories can be subcategorized. For example, given current trends, the "Provider" category could be broken down into MDs/DOs, licensed acupuncturists, chiropractors, nurses, physical therapists, and even athletic trainers. Each of these groups has dramatically different obstacles and opportunities. Patient groups could be subclassified by race, income level, educational background, gender, age, etc.
For the purposes of initiating the exploration of acupuncture's intersection with implementation science, and recognizing that for each discussion of the characteristics, needs, and challenges of each stakeholder domain multiple papers could be generated, this primary paper focuses on an overview level of only four stakeholder groups: patients, licensed acupuncturists, medical acupuncturists, and researchers. Again, the authors and primary working group hope to inspire further and more in-depth explorations of all domains, and the permutations that emerge from that work. The discussions below are intentionally brief and meant to be exemplary rather than comprehensive.
Patient-Related Dissemination and Implementation Issues: A framework for considering challenges to the dissemination of information about acupuncture, and to the implementation of acupuncture as a health practice in the general patient population, can be structured using the framework identified by Herman et al. in her 2019 SAR presentation, 45 and originating in the work of Levesque et al. 46 (Figure 2). This matrix provides an excellent example of studying the underlying challenges various populations have regarding accessing all medical practices. It applies concepts mentioned earlier as to the value of identifying underlying structural rules affecting the environment of implementation and dissemination in general as distinct from any specific EBP.
It can be readily seen that many of these areas become more complex when presented with the challenge of acupuncture. Transparency, Outreach, Information, Screening (Approachability) and Health Literacy, Health Beliefs, Trust and Expectations (Ability to Perceive): This domain may be limited by exposure to the core concepts of Chinese medicine, including expectations for treatment frequency, intensity, duration, experience, and anticipated investment. It may be limited by consumer experience with the primary provider groups, and becomes more complex by virtue of the language variation, novel diagnostic techniques, and overall framework of familiarity that fosters easy connection. Patients may rely on primary care providers to direct their care as well. Referral by an already trusted medical provider to acupuncture as a treatment option also would strongly reinforce public acceptance of this modality.
If the primary provider does not refer for treatment, many patients will not be aware of the option, and hence not perceive its potential. They may also need this type of endorsement to overcome fear of the unknown or skepticism to the practice. Patients and healthcare providers are largely unaware of the educational standards for the various provider groups and cannot therefore direct patients to the most-qualified providers. Patients, as well, are unaware of the educational infrastructures that develop and train acupuncture practitioners.
Professional Values, Norms, Culture, Gender (Acceptability) and Personal and Social Values, Culture, Gender, Autonomy (Ability to Seek): Acupuncture and Chinese medicine present a novel infrastructure with different healthcare values and norms. Some examples include the belief that illness holds valuable lessons and that it is consequent to prior actions in many cases, and that all disharmonies exist in a web of interrelationship. While not in conflict with current biomedical knowledge, in practice it is rare to see implementation of care that is premised on these underlying assumptions.
Care in Chinese medicine is also often non- or differently-hierarchical in nature, with patients expected to play a seminal role in the healing process. Patients may be less inclined to pursue more intensive treatment regimens compared to those involving simple pill consumption, despite more robust potential benefits (for example, a patient may be more willing to take ibuprofen for pain than to engage in regular self-care activity and stretching, despite the more global benefits the latter clearly provide). Geographic Location (Availability and Accommodation) and Living Environment, Mobility, Social Support (Ability to Reach): There is a limited number of acupuncture providers, and access to acupuncture does not consistently extend into many rural areas. The infrastructure of many practices is also limited in terms of ease of scheduling and other accessibility.
Many acupuncture providers will not be able to provide extra support such as childcare during appointments or later hours. There are an estimated 38,000 licensed acupuncturists nationwide (if every graduate of the licensed acupuncturist training pathway were to be working), including 10,000 in California, with a handful of other states being overly represented in the distribution of licensed acupuncturists nationwide. With possibly 5000-10,000 medical acupuncturists trained, access to acupuncture services is extended, but these providers mostly are unable to provide acupuncture full-time, and services may be limited to experimental settings or specific hospital-based situations (eg, surgery and oncology).
Direct and Indirect Costs (Affordability) and Income, Assets (Ability to Pay): Much care remains fee-for-service and cash-based. This limits accessibility considerably, despite models attempting to remove the cost barrier. Insurance companies only inconsistently cover services, and service coverage is often unduly limited to pain conditions.
Technical and Interpersonal Quality (Appropriateness) and Empowerment, Information, Adherence, Caregiver Support (Ability to Engage): This domain is often limited by training standards. The development of professionalism within the field is variable, with some providers notably having more skills and experience with patient engagement than others. Lack of familiarity with available services on the part of the patient may also lead to a lack of encouragement to pursue therapy. For example, family members may be less likely to encourage regular attendance for acupuncture than for physical therapy. Physicians may also be unaware of the evidence base for acupuncture, or more evidence may be needed, limiting formal scientific approval of acupuncture for the presenting condition. This may directly or indirectly inhibit patient engagement. A lack of public education also limits acceptability and exposure, thereby limiting the knowledge to seek the therapy.
Practitioner-Related Dissemination and Implementation Issues
Despite the gains noted above that have contributed to the growth of the licensed acupuncturist field, acupuncture as an EBP and its delivery by this group has hurdles to overcome. These include factors such as a lack of conceptual familiarity and acceptability, limitations in accessibility and adequate supply, patient acceptance, cultural expectations and concerns, reimbursement, and others.
Acupuncture emerges out of a complex, Chinese historical diagnostic and treatment framework, yet it can be applied in a simplified manner and from a westernized viewpoint that is suited to pain relief. This dichotomy alone leads to vast disagreements among those who are fundamentally in support of acupuncture's effectiveness through many frameworks of application. Lack of unity in messaging internally intrinsically hampers dissemination about acupuncture, and thereby directly inhibits implementation.
There also remains a lack of agreement on the fundamental interpretation of classical concepts such as "what is an acupuncture point" and "what is an acupuncture channel." Acupuncture theory, as it emerged from classical Chinese medicine, remains in a state of partial translation, with new texts being translated regularly, thus providing an impressive growth of the literature base. There are also numerous styles of acupuncture in practice, from simplified western models to Japanese style, Korean style, Classical Chinese style, Tan, Tung, Five Element, and other schools of application. As understanding of and access to classical material and modern research expands, an optimal curriculum for entry level practice into the field is continuously in discussion.
Scope of practice is particularly varied in this professional group, with some states allowing diagnostic testing, dietary counselling, herbal medicine, injection therapy, worker's compensation inclusion, and body work, while others restrict practice to acupuncture only, with supervision by an MD either still required or only recently withdrawn. Insurance coverage is also highly variable, with conditions covered being considered variably "evidence-based" or "experimental" even within the same family of companies.
Many providers cannot receive third-party reimbursement for a broad swath of services they may provide, including, at the most basic level, the evaluation and management of patients, including nutritional, lifestyle, and exercise counselling as well as mental and emotional support services. This difficulty in receiving reasonable remuneration for preventive and mental health services is clearly not unique to this licensure type. Despite these being part of standard training and certification, third-party reimbursement systems typically adhere to outdated models of payment, leaving practitioners primarily reliant on reimbursement for acupuncture services alone.
Regional variation in the acceptance of acupuncture in general is also evident, with nearly one third of practitioners in the United States residing in California alone, and higher concentrations of practitioners on the coasts and in larger urban areas. In some areas, acupuncture is part of essential health benefits, but in others, it is excluded from coverage. Because the basic presence of a service and service provider population are key initial factors to introducing that service to a potential patient base, areas with high market penetration advance more quickly into public awareness and into healthcare systems than areas with few practitioners and little public presence. Unlike larger professions such as the medical doctor population with nearly 1 million providers 47 nationwide as of 2016, or the physical therapy population with more than 265,000 48 providers nationwide, the licensed acupuncturist licensure group has at most 38,000 individuals ever trained into the field, with an unidentified percentage actively practicing.
Other challenges to the development of the professional group include poor job prospects postgraduation, with most providers still having to enter into private practice. It is rare that they are able to join a pre-existing group. This hurdle, coupled with the high expense of education for entry, points to a cap on the viability of growth for the licensure group as it stands.
"Gainful employment" rules require graduates of programs that accept federal student loan monies to show the attainment of a certain salary level within a proscribed time period post-graduation. 49 These rules have threatened to lead to the closure of up to half of the accredited U.S. licensed acupuncturist programs, and may again be a threat to programs in the future. These economic risks pose barriers to entry and hinder the attraction of new applicants to the field.
Research-Related Dissemination and Implementation Issues
Researchers have been trying to develop appropriate methods for the study of acupuncture. Prior reliance on placebo-style models for the study of acupuncture is being abandoned due to inherent problems with their design. 12 The utilization of "verum acupuncture" and "sham acupuncture" that underlies this study model has proven problematic due to the considerable treatment noise from the procedure identified as "sham." Sham acupuncture is not biologically inert, in that it provides non-specific treatment effects along with some potentially specific effects of acupuncture, depending on the administration style. Experimentation with this research strategy and efforts to use so-called "sham" treatments have yielded results difficult for decision-makers to interpret.
The participation of the patient in the wellness process (with particular emphasis on emotional regulation, good dietary habits, and the avoidance of excesses) is an essential component of the classical system from which acupuncture emerged. Acupuncture would have been performed only after other methods of healing had been attempted, and with efforts to guide the patient's thinking and behaviors to benefit the therapeutic process. Treatment would continuously adapt, not only based on the presentation of the patient, but also on the time of day the patient was treated, in what season, and in what climate.
Changes in diet, lifestyle, mental state, and overall behavior were classically expected of patients; thus, modern efforts to administer acupuncture as if it were a pill or stand-alone procedure deviate from the very foundations upon which acupuncture is predicated. The therapeutic relationship is also paramount in the treatment process. All of these factors create challenges for research designs that wish to study acupuncture based on its classical roots. 50 These challenges complicate the establishment of clear messaging on the strength of the evidence of acupuncture's effectiveness for a wide variety of clinical conditions. When a working understanding of the acupuncture research base demands nuanced and somewhat deeper appreciation of the research challenges, and when knowing how acupuncture should be contextualized within the studies undertaken (efficacy vs. effectiveness, for example) is critical to deriving clinical recommendations from a research trial, concise messaging becomes difficult. Providers looking for a "gold standard" of research in acupuncture may be put off by the ongoing debates on best research practices, and clinicians looking for quick treatment approaches may be deterred by the need to use acupuncture as a synergistic treatment strategy rather than a more linearly measurable single prescription. Predictably, difficulties in dissemination messaging on best practices, overall, will inhibit implementation clinically.
What is often lost in this discussion is that while many clinical trials report similar clinical outcomes for verum vs. sham acupuncture, both sham and verum treatments often outperform usual care alone. The German health care system chose to adopt payment for acupuncture for low back pain despite this issue with verum and sham, because both methods performed better than usual care. 51 Further confounding the dissemination and implementation of the scientific basis of acupuncture are deficits in training programs for licensed acupuncturists. These programs generally do not emphasize research methodology and fail to generate student-level research experience. There is also a lack of access for this group to the literature base, with schools not having access to the comprehensive literature databases utilized by larger university systems. While licensed acupuncturists are the most highly trained group in the classical applications of acupuncture, very little of the entry-level curriculum, and only parts of the recently created doctoral-level curriculum, include significant training on evaluating literature and designing and carrying out research. Students are generally not exposed to research methodology nor asked to create publishable case studies or contribute otherwise to the literature base.
One solution to this had been the attempt to create an industry-specific, peer reviewed publication for the licensed acupuncturist community. The industry-specific, peer reviewed publication that had been available to the licensed acupuncturist community for more than five years (and for nearly 20 years under other titling) was The Journal of the American Society of Acupuncturists (JASA). The publication was an entry-level journal and encouraged submissions from new authors and investigators. Providing both an example of what research should look like, and an impetus to publish, the journal encouraged the development of research and publication in the field.
Published in-house predominantly by volunteers, however, the journal lacked the resources to move to a professional publishing platform which would have expanded its online access and its indexing in databases. The journal closed in 2021 and publishing priorities transferred to the journal "Medical Acupuncture" which is the journal of the American Academy of Medical Acupuncture.
Closely working with schools that have Doctor of Acupuncture and Oriental Medicine (DAOM) programs to require students to create appropriately structured, clearly written, publishable case studies as part of graduation requirements would enable graduates to gain valuable familiarity and comfort with publishing and help them think about cases in a more structured manner. It may also encourage them to refer to these published resources as they practice in clinic.
The primary journal of the MD practice group, Medical Acupuncture, is the official journal of the American Academy of Medical Acupuncture. Members of the Academy receive a print subscription and online access to Medical Acupuncture as a benefit of their membership. This bi-monthly, peer reviewed journal is written "for physicians, by physicians" and was just recently made available to open access. It is now indexed in PubMed Central as well. This important step will also likely encourage greater interest in publication from a broader group of researchers, expanding the formal, professional conversation. The level of involvement of licensed acupuncturists with this journal is in flux, but it may be able to fulfill some of the prior goals set by JASA in addition to its current publication priorities.
Discussion
Despite considerable challenges, acupuncture continues to advance into mainstream American healthcare and into healthcare systems worldwide. The structured dissemination and implementation of acupuncture, however, remain underdeveloped. Advances are indeed being made, but a central coordinating strategy to optimize uptake and impact is lacking.
As the professional acupuncture community gains sophistication in both acupuncture research and clinical application, and as the field gains momentum and begins to more strongly influence multiple stakeholder groups, more attention to the study of the science behind the dissemination and implementation of acupuncture as an EBP is essential.
Primary challenges to this process include the foreign origin of acupuncture, which limits access to properly translated concepts. This "otherness" also engenders a general lack of recognition of the multiple potential physiologic effects of acupuncture and its other plausible mechanisms of action. Existing medical infrastructures also are challenged to bring this modality into care, due to their struggles to understand into which domains it is appropriate to integrate it, and who the properly qualified individuals to provide the service are. Sustainable payment mechanisms are also lacking, and vast inconsistencies are present in the insurance industry concerning the status of acupuncture as an appropriately reimbursed EBP. Access to providers is also a challenge for numerous reasons, including low total numbers, recognizability, and integration into referral systems.
Strengths of acupuncture and its inclusion as a viable medical option include the rapidly growing evidence base, increasing public demand, the need to find non-pharmacologic options for pain control, and the existence of growing, though challenged, specific professional provider groups. Noteworthy growth of the licensed acupuncturist professional community is reflected in the presence of practice acts in currently 47 states, as well as increased interest in training by medical doctors and osteopathic physicians. Other professional groups such as physical therapists and nurses are expressing strong interest in the ability to provide acupuncture services. This is exemplified by the growth of "dry needling" and the efforts by nurses in Washington State, Arizona, and other locales to incorporate "medical acupuncture" into scope. Patients also increasingly seek acupuncture, reporting it to be both effective and pleasurable, and consumer demand (including among military personnel and veterans) is growing.
In places where acupuncture has been brought into healthcare systems, the identification of a champion for the integration, a knowledgeable provider of the service, reimbursement and funding solutions, and effective integration into the workflow structure each appear to be core elements of success. Effective study of this process, however, is limited, and presents fertile terrain for the field of implementation science.
How to best engage varying stakeholder groups, how to resolve regulatory and legislative obstructions, the means by which to better train providers in understanding and contributing to the scientific literature base, and the best practices in disseminating core information about acupuncture's status as an EBP to each stakeholder group in a way that can be optimally received, all remain areas for growth and development.
Conclusion
Acupuncture stands to benefit considerably from the application of knowledge found within the field of implementation science. If acupuncture is indeed as significant an intervention as it is beginning to appear, even only in the realm of non-pharmacologic pain treatment, its potential impact as an emerging "best practice" in public health could be profound. Those interested in advancing best practices around acupuncture's integration are becoming aware of the field of implementation science, and further dialog between the disciplines is likely to yield valuable insight into how health care systems (including providers, consumers, and structures) adapt to the presence of this novel, valuable, evidence-based practice.
The research resulting from this working group meeting hosted by SAR and sponsored by PCORI has revealed fertile ground for further exploration, and has also revealed a path forward for this much needed effort. Solutions to the problems presented, utilizing specific implementation science-informed strategies, would be of tremendous benefit in advancing this dialog. As the scientific research basis for the support of acupuncture in both basic science and clinical medicine continues to increase, it is time to more intentionally approach the task of developing adequate implementation strategies for the practice of acupuncture within the current medical framework.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 2021-08-27T17:13:06.234Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "12f2bb0d7510f1a71ea67a5d412ba7f6848616da",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/21649561211042574",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e6b76c9c1319c65a511e66811830da76fd7de5ee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268206253 | pes2o/s2orc | v3-fos-license | A prognostic and immune related risk model based on zinc homeostasis in hepatocellular carcinoma
Summary Hepatocellular carcinoma (HCC) is the third leading cause of cancer-related deaths worldwide. Dysfunction of zinc homeostasis contributes to both the early development and the advancing malignancy of HCC. However, the prognostic value of zinc homeostasis in HCC has not yet been clarified. Here, we present a zinc homeostasis-related risk model in HCC. Five signature genes, including ADAMTS5, PLOD2, PTDSS2, KLRB1, and UCK2, were screened out via survival analyses and regression algorithms to construct a nomogram together with clinical characteristics. Experimental studies indicated that UCK2 participates in the progression of HCC. Patients with higher risk scores always had worse outcomes and were more associated with immune suppression according to the analyses of immune-related pathway activation, cell infiltration, and gene expression. Moreover, these patients were likely to exhibit more sensitivity to sorafenib and other antitumor drugs. This study highlights the significant prognostic role of zinc homeostasis and suggests potential treatment strategies in HCC.
INTRODUCTION
The mortality rate of liver cancer has increased in the last decade. 1 Among liver cancers, hepatocellular carcinoma (HCC) is the most prevalent type, accounting for approximately 90% of all cases. 2 The pathogenesis of HCC is critically complex, with a high degree of heterogeneity, calling for more specific molecular clarification. 3 The treatment options for HCC include surgical resection, ablation, and systemic therapies; 4 however, most patients with advanced HCC still have poor outcomes, with a 5-year survival rate of approximately 20% 1 and a recurrence rate of 70% at 5 years after resection. 5 This treatment dilemma is probably due to the limited long-term benefits of and predictive biomarkers for immune therapies, 6 the challenge of drug resistance, 7 and the deficiency of effective drugs targeting the vital mutations of HCC. 3 Therefore, more practicable prediction models for prognosis and drug sensitivity are urgently needed.
Zinc is a necessary and abundant metallic element for the human body, regulating enzymatic activity, protein structure and stability, and gene expression. 8 Intracellular zinc homeostasis is tightly maintained by zinc transport proteins, including the Zrt-, Irt-related protein (ZIP) family and the Zn transporter (ZnT) family. 9,10 Zinc deficiency is associated with various pathological conditions such as growth retardation, impaired wound healing, and immune dysfunction, 11 as well as diseases including Alzheimer's disease, 12,13 sickle cell disease, 14 and cancer. 15 The dysfunction of zinc and of partial zinc transporters affects the tumorigenesis and progression of prostate cancer, 8,16 pancreatic cancer, 17 breast cancer, 18 and HCC. 19,20 Furthermore, a recent study revealed that zinc-related nanoclusters can enhance the production of reactive oxygen species and antitumor immunity against HCC in a mouse model. 21 However, the prognostic role of zinc homeostasis in patients with HCC remains unknown.
In this research, we established a zinc homeostasis-related risk model in HCC based on survival and regression analyses. Five signature genes were defined, and experimental studies were performed to explore the function of UCK2 in HCC. Our risk model had predictive stability and reliability and was significantly associated with the tumor microenvironment (TME) and drug sensitivity. This study emphasizes a prognostic model based on zinc homeostasis and provides potential therapy strategies for patients with HCC.
Identifying zinc homeostasis-related genes for constructing a prognostic risk model
Univariate Cox regression and Kaplan-Meier (KM) analyses were utilized to assess the relationship between the expression level of zinc homeostasis-related genes and the overall survival (OS) of patients with HCC in The Cancer Genome Atlas (TCGA) and GSE14520 cohorts. There were 2,506 genes in the TCGA cohort and 559 genes in the GSE14520 cohort significantly associated with OS, and the number of intersecting genes was 185 (Figure 1A; Table S4). We also evaluated the expression differences of these 185 genes between the tumor and normal samples in the two cohorts (Table S5). These genes were analyzed using least absolute shrinkage and selection operator (LASSO) regression, and the returned 10 genes were further analyzed using multivariate Cox regression to establish an optimal prognostic model (Figures 1B and 1C). Five genes (ADAMTS5, KLRB1, PLOD2, PTDSS2, and UCK2) were revealed as independent prognostic elements in patients with HCC (Figures 1D and 1E). Among these genes, ADAMTS5, PLOD2, PTDSS2, and UCK2 were risk factors, while KLRB1 was a protective factor for patients with HCC (Figure 1F). The risk score was calculated as follows: risk score = (0.147 × ADAMTS5 expression) + (−0.179 × KLRB1 expression) + (0.204 × PLOD2 expression) + (0.334 × PTDSS2 expression) + (0.248 × UCK2 expression).
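For readers who want to reproduce this kind of scoring, the following minimal R sketch illustrates the two-step selection (LASSO Cox followed by multivariate Cox) and the risk score calculation. It is an illustration under stated assumptions rather than the authors' actual code: expr is assumed to be a patient-by-gene expression data frame, time and status are the survival variables, recent glmnet versions are assumed to accept a Surv response, and the coefficients are those quoted in the formula above.

library(glmnet)
library(survival)

# LASSO Cox regression to shrink the 185 candidate genes (x: expression matrix)
cv_fit <- cv.glmnet(as.matrix(expr), Surv(time, status), family = "cox", alpha = 1)
lasso_coef <- coef(cv_fit, s = "lambda.min")
kept_genes <- rownames(lasso_coef)[as.numeric(lasso_coef) != 0]

# Multivariate Cox regression on the retained genes gives the final coefficients
cox_fit <- coxph(Surv(time, status) ~ ., data = data.frame(expr[, kept_genes], time, status))

# Risk score = weighted sum of expression values, using the published coefficients
coefs <- c(ADAMTS5 = 0.147, KLRB1 = -0.179, PLOD2 = 0.204, PTDSS2 = 0.334, UCK2 = 0.248)
risk_score <- drop(as.matrix(expr[, names(coefs)]) %*% coefs)

# Divide patients evenly into low-, intermediate-, and high-risk groups (tertiles)
risk_group <- cut(risk_score,
                  breaks = quantile(risk_score, probs = c(0, 1/3, 2/3, 1)),
                  labels = c("low", "intermediate", "high"),
                  include.lowest = TRUE)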
Validation of signature gene expression and cellular function in HCC
The volcano plots showed that ADAMTS5, KLRB1, PLOD2, and PTDSS2 were stably expressed genes, while UCK2 was upregulated in tumor samples compared with adjacent normal samples in the TCGA and GSE14520 cohorts (Figures 2A and 2B). We then examined the expression level of UCK2 in HCC and found that the mRNA expression of UCK2 increased in 19 HCC samples compared with the normal samples (Figure 2C) and in HCC cell lines compared with the MIHA normal liver cell line (Figure 2D). Additionally, UCK2 also exhibited a higher protein expression level in tumor cell lines (Figure 2E) and in 39 paired HCC tissues (Figure 2F). To further explore the impact of UCK2 on HCC progression, we performed a knockdown assay in HCC cell lines, and the protein expression level of UCK2 was substantially decreased in the experimental group (Figure 3A). The results of the clone formation assay and the cell counting kit-8 (CCK-8) assay showed that knockdown of UCK2 critically inhibited cell proliferation in HCC (Figures 3B and 3C). Additionally, UCK2 deficiency suppressed the migration capability of HCC cells, as confirmed by the wound healing assay and the migration assay (Figures 3D and 3E). These findings suggested that UCK2 may play a vital role in HCC progression.
Prognostic potency of zinc homeostasis-related signature in the training cohort and validation cohort
In this study, we utilized the TCGA cohort as the training cohort and the GSE14520 cohort as the validation cohort to verify the predictive stability and reliability of the prognostic model. Patients were divided evenly into low-, intermediate-, and high-risk groups based on the risk score. Time-dependent receiver operating characteristic (ROC) curves revealed the prediction accuracy of the model. The areas under the ROC curve (AUC) for 1-, 3-, and 5-year OS were 0.81, 0.77, and 0.71 in the TCGA cohort and 0.73, 0.71, and 0.68 in the GSE14520 cohort (Figures 4A and 4B). KM analysis indicated that patients with lower risk scores had better outcomes (Figures 4C and 4D). Risk survival status plots revealed the survival status and the expression levels of the five genes for every patient in the two cohorts (Figures 4E and 4F).
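A compact R sketch of how such evaluations are commonly produced (the survival and survminer packages for KM curves, and the timeROC package for time-dependent AUCs); the data frame clin with time, status, risk_score, and risk_group columns is a placeholder rather than the authors' object names:

library(survival)
library(survminer)
library(timeROC)

# Kaplan-Meier curves stratified by risk group
km_fit <- survfit(Surv(time, status) ~ risk_group, data = clin)
ggsurvplot(km_fit, data = clin, pval = TRUE, risk.table = TRUE)

# Time-dependent ROC curves and AUCs at 1, 3, and 5 years (time measured in days)
roc <- timeROC(T = clin$time, delta = clin$status, marker = clin$risk_score,
               cause = 1, times = c(1, 3, 5) * 365)
roc$AUC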
The nomogram based on zinc homeostasis-related signature and clinical features
As shown in Figure S1, when patients were subgrouped by stage, age, and tumor mutation burden (TMB) level, the notable survival difference according to the risk score remained. The result of univariate Cox analysis demonstrated that risk score, stage, age, and TMB level were negatively related to the OS of patients (Figure 5A). Multivariate Cox analysis indicated that the risk score and grade were connected with OS (Figure 5B). Subsequently, a nomogram was constructed to predict survival status (Figure 5C). The AUCs for 1-, 3-, and 5-year OS were 0.84, 0.80, and 0.71, respectively (Figure 5D). Furthermore, we performed calibration curve and decision curve analysis (DCA) to evaluate the predictive accuracy (Figures 5E and 5F).
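For context, nomograms of this kind are often built with the rms package; the sketch below is illustrative only, assumes days as the time unit, and uses placeholder column names (risk_score, stage, age, tmb) that are not taken from the paper's code:

library(rms)
library(survival)

dd <- datadist(clin)
options(datadist = "dd")

# Cox model combining the risk score with clinical covariates
fit <- cph(Surv(time, status) ~ risk_score + stage + age + tmb,
           data = clin, x = TRUE, y = TRUE, surv = TRUE)

# Map each covariate to points and to predicted 1-/3-/5-year overall survival
surv_fun <- Survival(fit)
nom <- nomogram(fit,
                fun = list(function(x) surv_fun(1 * 365, x),
                           function(x) surv_fun(3 * 365, x),
                           function(x) surv_fun(5 * 365, x)),
                funlabel = c("1-year OS", "3-year OS", "5-year OS"))
plot(nom)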
Functional enrichment analysis of the prognosis signature
Cellular zinc homeostasis is largely maintained by ZIP and ZnT zinc transporters. They are mostly located in the cell membrane, indicating a potential function of absorbing and releasing zinc between cells and the intercellular substance. 22 To explore the potential zinc homeostasis status of the different groups in the TCGA cohort, we examined the corresponding mRNA levels of these transporters. The results showed that most ZIP transporters were upregulated in the groups with higher risk (Figure 6A). Gene set enrichment analysis (GSEA) was performed to assess the molecular mechanisms. In the group of patients with higher risk scores, cellular response to zinc ion and immune-associated pathways were weakened, such as natural killer (NK) cell activation, neutrophil-mediated cytotoxicity, regulation of dendritic cell differentiation, peptide antigen assembly with major histocompatibility complex (MHC) protein complex, CD4-positive alpha-beta T cell lineage commitment, and T-helper 2 cell cytokine production (Figures 6B and S2A-S2F). In contrast, cell proliferation and extracellular matrix (ECM)-related pathways, including cell cycle, DNA replication, ECM-receptor interaction, and glycosphingolipid biosynthesis, were enriched (Figures 6C-6F and S2G-S2I). Moreover, these patients were more relevant to the hippo signaling pathway (Figure 6G). The detailed results of the GSEA analysis are presented in Tables S6 and S7.
Immune features of zinc homeostasis-related prognosis signature
To explore the link between tumor immunity and the risk model, the expression of immunosuppressive genes 23 was examined in the different risk groups in the two cohorts, and we found that most genes were upregulated as the risk increased (Figures 7A and 7B). CXCL8, DNMT1, EZH2, ICAM1, LGALS9, SMC3, and VEGFA were the identical immune characteristics positively related to the risk score (Figures 7C and 7D). We also found that most immune checkpoint genes were overexpressed in the groups with higher risk scores in the TCGA cohort (Figure S3A). Based on the expression level and the zinc homeostasis-related risk score, CD276, NRP1, and TNFSF4 were the top three genes most relevant to the risk score (Figures S3B-S3E). The single sample gene set enrichment analysis (ssGSEA) method was applied to assess the immune cell distribution in the two cohorts (Figures S4A and S4B). The results demonstrated that effector memory CD8+ T cells, eosinophils, type 1 T helper (Th1) cells, NK cells, and activated B cells were decreased in the higher-risk groups. In contrast, activated CD4+ T cells and type 2 T helper (Th2) cells were increased in both cohorts (Figures 7E and 7F).
Higher-risk patients were more sensitive to sorafenib and other antitumor drugs
To better understand the potential clinical application of the prognosis signature, the CellMiner database was applied to analyze the relationship between drug sensitivity and the risk score. Overall, 860 drugs approved by clinical trials or the Food and Drug Administration (FDA) were chosen for analysis (Table S8). The risk score was negatively associated with the sensitivity of the by-product of CUDC-305, ARQ-621, BMS-387032, EMD-534085, AT-7519, TAS-116, actinomycin D, and BP-1-102 (Figure 8A). In addition, we utilized the pRRophetic package to estimate the drug sensitivity difference in both cohorts and observed that sorafenib had a lower IC50 in the higher-risk groups (Figures 8B and 8C). In this prognostic model, patients with higher risk scores were more sensitive to sorafenib and other clinical trial- or FDA-approved drugs. The drug screening data support the use of these drugs in the clinical treatment of higher-risk patients with HCC to improve their poor survival outcomes.
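As a rough sketch of how such an estimate can be generated with the pRRophetic package (the gene-by-sample expression matrix expr, the drug name string, and the argument settings are illustrative assumptions; exact argument defaults may differ between package versions):

library(pRRophetic)

# Impute log(IC50) values for sorafenib from baseline tumor expression,
# training on the cell line data bundled with the package
pred_ic50 <- pRRopheticPredict(testMatrix = as.matrix(expr),
                               drug = "Sorafenib",
                               selection = 1)  # summarize duplicated gene rows

# Compare predicted sensitivity across risk groups
wilcox.test(pred_ic50 ~ risk_group)
boxplot(pred_ic50 ~ risk_group, ylab = "Predicted sorafenib log(IC50)")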
DISCUSSION
Zinc homeostasis is an essential biological event in the human body and is strictly regulated by many proteins that maintain the optimal systemic and cellular zinc level. 9 Among these mediators, ZIP and ZnT transporters help in importing and storing zinc. 24-27 Regarding HCC, previous studies have repeatedly confirmed that the zinc level is significantly diminished in HCC tissues compared with the adjacent normal tissues. 28,29 Zinc transporters affect carcinogenesis in HCC, and zinc deficiency is correlated with early malignancy. 19,20,30 Moreover, zinc homeostasis is essential for the immune cell pathway 31 and contributes to a well-functioning immune defense in the innate and acquired immune systems. 32 The liver has a special and complex immune landscape, since it functions as a central immunological organ, and this feature provides patients with HCC with possibilities of immunotherapy despite the limited objective response rate. 33 Currently, the significance of zinc homeostasis in the tumor immunity of HCC remains unknown. Therefore, we established a zinc homeostasis-related risk model to predict the outcome and explored better therapeutic strategies in HCC.
In this article, we retrieved the zinc homeostasis-associated genes from the GeneCards database and subjected them to KM and univariate Cox regression analyses in the TCGA and GSE14520 cohorts. There remained 185 intersecting genes associated with survival status. Further LASSO regression and multivariate Cox regression analyses were applied to identify five signature genes, including ADAMTS5, KLRB1, PLOD2, PTDSS2, and UCK2, to construct an optimal prognostic model. Among these genes, UCK2 was upregulated in tumor samples in both cohorts. The expression of UCK2 in HCC was confirmed, and the results indicated that UCK2 was overexpressed in the tumor groups compared with the normal cells or tissues. We also found that UCK2 was associated with proliferation and migration in the HCC cell lines. It is reported that UCK2 participates in cancer progression and metastasis 34 by inducing the activation of the EGFR-AKT 35 and STAT3 signaling pathways in HCC. 36 Additionally, PLOD2 has a poor influence on the outcome of patients with HCC and is significantly correlated with tumor size and metastasis. 37 In a cohort study including 48 patients with HCC, the expression level of ADAMTS5 was reported to be reduced in HCC, and ADAMTS5 was found to function as an inhibitory factor of tumor angiogenesis and tube formation. 38 KLRB1, also known as CD161, is overexpressed in the CD8+ T cells in the TME of HCC and induces a phenotype of low cytotoxicity and clonal expansion. 39 The role of PTDSS2 in HCC has not been clarified.
Patients with higher risk scores were likely to have a worse outcome. ROC curves demonstrated that our prognostic model had a great performance in predicting 1-, 3-, and 5-year OS for people with HCC in the TCGA and GSE14520 cohorts. To fully explore the application of the risk signature, we constructed a nomogram consisting of the risk score, tumor stage, age, and TMB level. The ROC, calibration, and DCA curves demonstrated the outstanding predictive capacity of the model.
Furthermore, we investigated the potential mechanisms comprehensively using the GSEA method. Functional analysis revealed that the mitosis and ECM-related pathways were enriched in patients with higher risk scores, demonstrating the connection between zinc homeostasis and tumor progression. Zinc affects mitosis, 40 and zinc deficiency affects the zinc-dependent enzyme catalysis and intracellular signaling systems associated with cell proliferation. 41 Zinc also participates in cancer metastasis by modulating the downstream target protein ZEB1, thereby promoting epithelial-mesenchymal transition (EMT) plasticity in pancreatic 17 and breast cancers. 42 The results of GSEA demonstrated that multiple immune-related signaling pathways were more relevant to patients with lower-risk HCC, consisting of NK cell, neutrophil, dendritic cell, and T cell-associated immune activation and response, indicating that the dysfunction of zinc homeostasis may cause immune defense deficiency in the host. The cellular response to zinc ion was decreased, while we surprisingly found that most ZIP transporters on the cell membrane were overexpressed in the higher-risk group. We hypothesized that this was partially due to the feedback regulation of the human body, considering that ZIP transporters are induced to be expressed more when the cell is deficient in zinc. 9 In addition, the expression level of most immunosuppressive genes was increased in the higher-risk group. Among them, CXCL8, VEGFA, and DNMT1 can stimulate the angiogenesis and tumorigenesis of HCC. 43 ICAM1 participates in efficient extravasation in HCC metastasis. 44 EZH2 modulates the epigenetic modification of PD-L1 in HCC cell lines, 45 and a recent single-cell RNA sequencing analysis revealed that LGALS9 is mainly expressed in tumor-associated macrophages and that the encoded product is involved in immune escape. 46 SMC3 is upregulated and associated with a poor outcome in HCC. 47 We also found that various immune checkpoints exhibited higher expression levels in patients with higher risk scores, implying that these patients may be more sensitive to related immune checkpoint inhibitors, providing a selective treatment for people with HCC. Immune cells are vital elements in the TME, and they participate in antitumor immunity and tumor immune evasion. 48 In this respect, the infiltration of CD8+ T cells, Th1 cells, B cells, NK cells, and eosinophils was reduced, while the proportion of activated CD4+ T cells and Th2 cells increased. Activated CD4+ T cells can differentiate into several T cell phenotypes, including Th1 and Th2 cells, under the stimulation of cytokines. However, Th1 and Th2 cytokines play opposite roles in pro- or anti-inflammatory processes in HCC. Th1 cytokines induce the activation and proliferation of CD8+ T cells and NK cells, thus promoting anti-tumor immunity. Th2 cytokines can suppress Th1 cell differentiation and activate immunosuppressive cells, such as myeloid-derived suppressor cells. 49 This evidence illustrates the critical connection between the risk signature and immune status in HCC, indicating that higher-risk patients may benefit more from tumor immunotherapies.
Furthermore, eight drugs approved by clinical trials or the FDA were filtered. Recently, BMS-387032 (also known as SNS-032) was found to induce apoptosis and suppress cell proliferation and migration by targeting the EGFR-AKT pathway in HCC. In addition, the drug combination of BMS-387032 and sorafenib showed a powerful synergistic effect on inhibiting HCC xenografts in vivo. 50 Another study reported that actinomycin had the selective capacity to inhibit liver cancer stem cells. 51 In addition, the higher-risk patients were more sensitive to sorafenib, a tyrosine kinase inhibitor and the FDA-approved drug for patients with advanced HCC since 2007, 52 although the development of resistance to sorafenib has become a great challenge. 53 Therefore, our prognostic model suggested that these drugs with higher sensitivity may be more effective for the higher-risk patients with HCC and rescue their prognosis in clinical treatment.
In summary, we established a prognostic model by elucidating the specific role of zinc homeostasis in HCC and proposed a potential therapeutic strategy for HCC.
Limitations of the study
As for limitations, the construction and validation of this zinc homeostasis-related risk model were based on two public cohorts of people with HCC. In subsequent work, we will conduct clinical trials for patients with HCC to evaluate the model's ability to predict prognostic outcomes and drug response. Also, simply dividing patients into three groups may not adequately reflect the current clinical grouping of HCC; more analyses should be carried out to make the risk model more precise in future work. In this study, we screened out five signature genes and found that UCK2 participated in the proliferation and migration of HCC. More studies will be carried out to investigate the potential function of the five signature genes in HCC progression. The molecular mechanisms of these genes in regulating the immune microenvironment and drug sensitivity need to be further clarified with experimental support.
Function enrichment analysis and immunocyte infiltration analysis
We applied GSEA to explore the molecular mechanisms in the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways and the Gene Ontology (GO) biological processes using the clusterProfiler R package. Nominal p < 0.05, |normalized enrichment score| > 1, and false discovery rate (FDR) < 0.25 were defined as the screening standard. Immunocyte infiltration degrees regarding 28 immunocyte subtypes in the TME were evaluated by the GSVA package in R. 57

Drug sensitivity based on the prognosis signature

We obtained the NCI-60 drug activity data and the transcriptome data from the CellMiner database (https://discover.nci.nih.gov/cellminer/) to screen potential drugs whose half-maximal inhibitory concentrations (IC50) were negatively associated with the risk score. The pRRophetic package (https://github.com/paulgeeleher/pRRophetic/) was used to predict the drug sensitivity difference between the different groups in the TCGA and GSE14520 cohorts.
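A hedged sketch of the enrichment and infiltration steps described above in R (clusterProfiler for GSEA, GSVA for ssGSEA); the ranked statistic, gene identifiers, and the immune gene set list are placeholders, and newer GSVA releases may require a parameter-object interface instead of the method argument:

library(clusterProfiler)
library(org.Hs.eg.db)
library(GSVA)

# gene_list: named numeric vector (names = Entrez gene IDs), sorted in decreasing order,
# e.g., log2 fold-changes or correlations with the risk score
gene_list <- sort(ranked_stats, decreasing = TRUE)

gsea_go   <- gseGO(geneList = gene_list, OrgDb = org.Hs.eg.db, ont = "BP",
                   pvalueCutoff = 0.05)
gsea_kegg <- gseKEGG(geneList = gene_list, organism = "hsa", pvalueCutoff = 0.05)

# Keep terms meeting the screening standard: nominal p < 0.05, |NES| > 1, FDR < 0.25
sig_go <- subset(as.data.frame(gsea_go), pvalue < 0.05 & abs(NES) > 1 & p.adjust < 0.25)

# ssGSEA scores for 28 immune cell signatures
# (expr: gene-by-sample matrix; immune_sets: named list of gene ID vectors)
ssgsea_scores <- gsva(as.matrix(expr), immune_sets, method = "ssgsea")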
Western blotting
The cell samples were harvested and lysed using radioimmunoprecipitation assay lysis buffer (YEASEN, Cat#20114ES60) at 4°C for 20 min, mixed into 5X loading buffer (Beyotime, Cat#P0015L), and boiled for 15 min. The proteins were separated on sodium dodecyl sulfate-polyacrylamide gel electrophoresis gels and transferred to polyvinylidene fluoride membranes (0.45 μm, Millipore) at 230 mA for 90 min. The membranes were blocked with 5% milk powder and incubated with primary antibodies against GAPDH (Proteintech, Cat#10494-1-AP) and UCK2 (Proteintech, Cat#10511-1-AP) at 4°C overnight, followed by secondary antibodies. The blots were detected with enhanced chemiluminescence substrate (Proteintech, Cat#PK10002) and a detection system (Azure Biosystems). The quantification of protein band intensity was measured using ImageJ software (National Institutes of Health).
Immunohistochemistry
The formalin-fixed, paraffin-embedded tissue sections were dried in an incubator at 59°C for 60 min. The tissues were rehydrated three times for 10 min each in xylene, immersed in absolute ethanol three times, and washed with distilled water. The slides were then coated with the antigen retrieval solution, heated in a microwave oven over medium-high heat twice, and washed in phosphate-buffered saline (PBS) twice for 15 min. In addition, 5% bovine serum albumin was added to the slides for 35 min, and the slides were covered with the primary antibody at a dilution ratio of 1:50 at 4°C overnight, followed by the secondary antibody at 37°C for 35 min. The slides were coated with chromogenic solution, washed in water, stained with hematoxylin, and dehydrated in absolute ethanol and xylene. Finally, we sealed and observed the slides under the microscope. Visiopharm software was used to analyze the staining status, and the histochemistry score (H-score) was used to evaluate the relative staining intensity as follows: H-score = (percentage of weak intensity × 1) + (percentage of moderate intensity × 2) + (percentage of strong intensity × 3).
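For illustration only, the H-score formula above can be written as a small R helper (the input percentages refer to the proportion of cells in each staining intensity class and are assumed to sum to at most 100, with the remainder unstained):

h_score <- function(pct_weak, pct_moderate, pct_strong) {
  # Result ranges from 0 (no staining) to 300 (100% strong staining)
  pct_weak * 1 + pct_moderate * 2 + pct_strong * 3
}

h_score(pct_weak = 30, pct_moderate = 20, pct_strong = 10)  # 30 + 40 + 30 = 100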
Small interfering RNA (siRNA) interference

UCK2 siRNA (RiboBio, Cat#stB0003335A-1-5) and scrambled siRNA were transfected into HCC cell lines with Lipofectamine RNAiMAX (Invitrogen, Cat#13778-150) according to the manufacturer's instructions. After transfection for 48 h, cells were harvested for the validation of knockdown efficiency and subsequent functional experiments.
Clone formation assay and CCK-8 assay
For the clone formation assay, cells were cultured in 6-well plates (Corning, Cat#3516) at a density of 1,000 cells per well and incubated for 2 weeks. The cells were fixed with 4% paraformaldehyde for 20 min at room temperature and finally stained with 0.1% crystal violet. The quantification of clone numbers was measured using ImageJ software (National Institutes of Health). A CCK-8 kit (Beyotime, Cat#C0038) was utilized to evaluate cell viability. Cells were seeded in 96-well plates (Corning, Cat#3599) at a density of 2,000 cells per well, and a BioTek Epoch microplate reader was used to examine the optical density at a wavelength of 450 nm for five consecutive days.
Wound healing assay and cell migration assay
For the wound healing assay, cells were plated into 6-well plates (Corning, Cat#3516) until they reached confluence. The monolayer was then scratched with a pipette tip, washed with PBS, and incubated with serum-free medium. Cells were photographed at 0 h and 24 h post-wounding. The quantification of the closure area was measured by ImageJ software (National Institutes of Health). For the cell migration assay, cells were seeded in the upper chambers with 100 μl serum-free medium, and the lower chamber was filled with 600 μl medium with 10% FBS. After incubation for 24 h, the cells on the bottom surface of the upper chamber were washed with PBS, fixed with 4% paraformaldehyde, stained with 0.1% crystal violet, and observed under the microscope.
QUANTIFICATION AND STATISTICAL ANALYSIS
The Student's t-test or Wilcoxon test was utilized to compare the statistical differences between groups. The Pearson correlation coefficient was applied to evaluate the relationships between various factors. All experiments were repeated independently at least three times (n ≥ 3). Statistical analysis was performed and plotted using R software (version 4.1.0) and GraphPad Prism software (version 8.0). Data are represented as mean ± SEM. A p value < 0.05 was defined as significant (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001).
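As a brief illustration of these comparisons in R (df, value, group, risk_score, and ic50 are placeholder names, not the authors' objects):

# Two-group comparisons
t.test(value ~ group, data = df)        # Student's t-test (parametric)
wilcox.test(value ~ group, data = df)   # Wilcoxon rank-sum test (non-parametric)

# Pearson correlation between two continuous variables, e.g., risk score and drug sensitivity
cor.test(df$risk_score, df$ic50, method = "pearson")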
Figure 1. Identifying zinc homeostasis-related genes for constructing a prognostic risk model. (A) Intersecting genes associated with the OS of patients with HCC in the TCGA and GSE14520 cohorts. (B and C) LASSO coefficient profiles and lambda curves of the 185 intersecting genes in the TCGA cohort. (D and E) Univariate and multivariate Cox regression analyses of the five signature genes to establish the optimal model. (F) KM survival curves revealed survival differences in patients with HCC according to the expression level of the five signature genes.
Figure 2. Validation of signature gene expression in HCC. (A and B) The volcano plots show the expression differences of ADAMTS5, KLRB1, PLOD2, PTDSS2, and UCK2 in tumor samples compared with normal samples in the TCGA cohort and GSE14520 cohort. (C) The mRNA expression levels of UCK2 in 19 paired HCC tissues and adjacent normal tissues. (D) The mRNA expression levels of UCK2 in normal liver cells and HCC cell lines. (E) The protein expression levels of UCK2 in normal liver cells and HCC cell lines. (F) The immunohistochemistry and relative histochemistry scores of 39 matched HCC and normal tissues. (**p < 0.01, ****p < 0.0001).
Figure 3. Validation of the impact of UCK2 on HCC proliferation and migration. (A) Western blot showing the knockdown efficiency of UCK2 in HCC cell lines after transfection for 48 h. (B and C) Clone formation assay and CCK-8 assay for cell proliferation of HCC cells after transfection with UCK2 siRNA or its scramble siRNA. (D and E) Wound healing assay, cell migration assay, and relative quantitative results of HCC cells after knockdown of UCK2. (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001).
Figure 4. Prognostic potency of the zinc homeostasis-related signature in the training cohort and validation cohort. (A and B) Time-dependent ROC curves of the prognosis signature in the TCGA and GSE14520 cohorts. (C and D) KM survival curves demonstrating the prognostic difference of the five-gene signature in the two cohorts. (E and F) Distribution of the survival status of patients with HCC and related expression levels of the five signature genes in the two cohorts.
Figure 5. The nomogram based on the zinc homeostasis-related signature and clinical features. (A and B) Univariate and multivariate Cox regression analyses of the risk score and other clinical characteristics, including stage, age, and TMB level. (C) Nomogram based on the risk score and clinical characteristics. (D) Time-dependent ROC curves for 1, 3, and 5 years and their AUCs. (E and F) The calibration curve and DCA showed the predictive accuracy of the nomogram. (*p < 0.05, ***p < 0.001).
Figure 6. Functional enrichment analysis of the prognosis signature. (A) Expression of zinc homeostasis-related transporters in the TCGA cohort. (B) GSEA showed that the pathway of cellular response to zinc ion was related to the lower-risk group based on the Gene Ontology (GO) biological processes. (C-G) GSEA showed the significant enrichment of cell proliferation, ECM-related pathways, and hippo signaling pathways in the higher-risk group based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. (*p < 0.05, **p < 0.01, ****p < 0.0001).
Figure 7. Immune features of the zinc homeostasis-related prognosis signature. (A and B) Heatmaps revealing the expression of cancer-related immunosuppressive genes in the low-, intermediate-, and high-risk groups in the TCGA and GSE14520 cohorts. (C and D) The expression of CXCL8, DNMT1, EZH2, ICAM1, LGALS9, SMC3, and VEGFA in the different groups in the two cohorts. (E and F) Relative tumor-infiltrating immune cells with a remarkable difference in the low-, intermediate-, and high-risk groups in the two cohorts. (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001).
Figure 8. Higher-risk patients were more sensitive to sorafenib and other antitumor drugs. (A) CUDC-305, ARQ-621, BMS-387032, EMD-534085, AT-7519, TAS-116, actinomycin D, and BP-1-102 were negatively correlated with the risk score based on the CellMiner database. (B and C) Drug sensitivity of sorafenib in the low-, intermediate-, and high-risk groups in the TCGA and GSE14520 cohorts. | 2024-03-04T16:09:22.028Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "9647a21fad9b30b1120bc704d3e68506bbd74c2b",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.isci.2024.109389",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7018f96934fdde41385442ee2229a5f0f43e7f69",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
202095 | pes2o/s2orc | v3-fos-license | Posterior reversible encephalopathy syndrome (PRES) and CT perfusion changes
Posterior reversible encephalopathy syndrome (PRES) can present with focal neurologic deficits, mimicking a stroke and can often represent a diagnostic challenge when presenting atypically. A high degree of suspicion is required in the clinical setting in order to yield the diagnosis. Cerebral CT perfusion (CTP) is utilized in many institutions as the first line in acute stroke imaging. CTP has proved to be a very sensitive measure of cerebral blood flow dynamics, most commonly employed to delineate the infarcted tissue from penumbra (at-risk tissue) in ischemic strokes. But abnormal CTP is also seen in stroke mimics such as seizures, hypoglycemia, tumors, migraines and PRES. In this article we describe a case of PRES in an elderly bone marrow transplant recipient who presented with focal neurological deficits concerning for a cerebrovascular accident. CTP played a pivotal role in the diagnosis and initiation of appropriate management. We also briefly discuss the pathophysiology of PRES.
Background
Posterior reversible encephalopathy syndrome (PRES), as the name suggests, is a constellation of symptoms caused by reversible ischemia, most commonly of the posterior cerebral vasculature, thus affecting the parietal-occipital region. Still, other vascular territories can be affected in PRES (see Table 1). Various terminologies have been used to describe this condition, including "reversible posterior leukoencephalopathy syndrome" and "reversible posterior cerebral edema syndrome" among others [1]. Hypertension (HTN) is the most commonly identified cause of PRES, followed by medications, eclampsia and systemic factors. The pathophysiology of HTN-related PRES is a failure of cerebrovascular autoregulation, which in turn results in vasogenic edema. Non-hypertensive PRES may be due to an autoimmune or immune response to various stimuli [2]. The pathology usually affects the posterior part of the brain (parietal-occipital region), which may be a consequence of reduced sympathetic innervation in this area. Usually it is a reversible phenomenon, as indicated by the name, but if not recognized early and treated appropriately, permanent brain injury may ensue.
Case presentation
A 70-year-old white female presented to the emergency room with symptoms of a cerebrovascular accident. She had a history of multiple myeloma status post-autologous bone marrow transplant (BMT) with a conditioning regimen of high-dose melphalan 2 weeks prior to presentation. She woke up the morning of presentation and was found to be confused for a few minutes, followed by a gradual improvement in mental status. About an hour later, she started to experience a severe headache associated with blurry vision, and shortly thereafter she became disoriented again. Paramedics identified agitation, right-side neglect, left gaze deviation and right side weakness. On arrival in the emergency department, the patient's headache had resolved, but the patient was still agitated and disoriented. The patient's altered mental status (AMS) required that the history be obtained from the patient's husband. There was no history of recent infection, fever, weight loss or trauma. The review of systems was negative for photophobia, seizures or any other neurological issues. Pertinent past medical history was that of recent BMT with melphalan and poorly controlled hypertension. She had had thrombocytopenia since the time of BMT and chemotherapy. Her admission blood pressure was 221/114 with a mean arterial pressure (MAP) of 145 mmHg. Her admission NIH stroke scale score was 7, with problems in orientation, not following commands, not answering questions appropriately, left gaze preference, reduced blink on stimulus from the right and possible right-sided neglect. Her visual acuity was reduced to finger movements and light perception in both eyes. She was moving her extremities equally, bilaterally. Reflexes were brisk throughout with equivocal plantar response. The rest of the neurological exam was limited, as the patient was not following commands consistently. Our differential diagnosis at that time included cerebrovascular accident (CVA), PRES (due to elevated BP, recent chemotherapy and bone marrow transplant), seizures and complicated migraine. Since there was no motor deficit associated with the neglect and eye deviation, we were obligated to consider a broad differential diagnosis, including PRES. After the initial laboratory workup, we obtained a CT head and a CT angiogram of the head and neck with perfusion studies. The CTA of the head and neck failed to identify any major vessel cutoff or any acute hypo/ hyper density, but the CTP demonstrated increased cerebral blood volume (CBV), cerebral blood flow (CBF) and reduced time to peak (TTP) in the posterior cerebral vascular distribution (see Figures 1 and 2). These imaging features were consistent with PRES, and we initiated intravenous anti-hypertensive medications. An MRI brain was obtained, which showed abnormal restriction in the parietal and occipital areas, confirming the diagnosis of PRES (see Figure 3). Reduction of the patient's systolic BP from 220 to 180 was associated with slight improvement in her visual acuity and orientation within a couple of hours. Notable laboratory data revealed a platelet count of 11,000/μl and hemoglobin of 11 g/dl. All other laboratory tests were within the normal limits. There was no evidence of thrombotic thrombocytopenic purpura (one of the causes of PRES), and she was admitted with the diagnosis of PRES. At the time of admission, PRES was considered to be secondary to malignant hypertension complicated by recent chemotherapy and BMT. 
Over the subsequent 48 h, she returned to baseline with improvement of her blood pressure to normal range.
Discussion
PRES commonly presents with seizures (74%), altered sensorium or encephalopathy, headache and visual changes [3]. Other neurological features such as aphasia and sensory changes are also seen. PRES can sometimes present similarly to CVA, such as in our case. In such cases, the patient may inadvertently and inappropriately receive thrombolytic therapy. We again stress the point that when a patient presents with stroke-like symptoms, but with an inconsistent neurological exam, then the stroke mimics such as PRES, seizures, migraine and tumor should be included in the differential diagnosis. The etiology of PRES can be broadly divided into five main etiological groups. In order of clinical frequency, PRES etiologies include HTN (61%), cytotoxic medications (19%), preeclampsia or eclampsia (6%) [3], autoimmune and systemic conditions, including sepsis (see Table 2). Currently, the pathophysiology of PRES is controversial. The most accepted theory of HTN-related PRES is that of a hyperperfusion injury model (see Figure 3). There will be a failure of cerebral autoregulation in relation to the sudden elevation of blood pressure. This sudden increase in MAP can lead to arteriolar dilation, hyperperfusion, endothelial vascular damage and disruption of the blood brain barrier. This leads to vasogenic edema and potentially reversible ischemia affecting both the grey and white matter [4]. Another less accepted theory argues that HTN-related dysautoregulation results in vasoconstriction and hypoperfusion injury. However, the above hypothesis fails to explain non-hypertensive PRES, and so some postulate an autoimmune or immune response theory to various stimuli [2].
We suggest that PRES is the result of various etiological factors that lead to blood brain barrier injury either by hyper- or hypoperfusion, endothelial dysfunction, changes in blood vessel morphology, hypocapnia or immune system activation [2,4,5]. It usually affects the parietal and occipital areas, but other regions can be involved as well [6].
In our patient we feel that elevated MAP led to regional dysautoregulation, consequently causing hyperperfusion, explaining the findings of increased CBF, CBV and reduced TTP. It is important to recognize that this patient had a recent history of bone marrow transplant with exposure to chemotherapy, which may also be a causative factor in the development of PRES, or may have independently contributed to the tissue injury.
We conclude that the mechanism of PRES is individualized in each patient and depends mainly on the causative factor identified in each case. In the setting of HTN, PRES is most likely due to the mechanism described in the previously discussed theories. In the setting of normotensive PRES, the mechanism may be based on endothelial dysfunction, immune system activation and other systemic features. Although initially edema is vasogenic in nature, a failure to reverse the disease etiology will subsequently cause cytotoxic edema and eventually brain infarction, further emphasizing the importance of early disease recognition.
A high degree of suspicion is required to make this diagnosis, as the patient may present atypically. Since many patients with PRES present with an inadequate history and AMS, brain imaging plays an important role in the diagnosis of PRES. Even though MRI (particularly T2-FLAIR) is the imaging modality of choice, CTP can play a significant role by revealing the cerebral hemodynamics associated with this condition. Since there is hyperperfusion, the CBF and CBV will be elevated, and TTP will be reduced. The CTP can also have diametrically opposite findings compared to our case [7] (see Table 3). Brain MRI changes on FLAIR with no changes on diffusion weighted images (DWI) suggest vasogenic edema. Changes in both sequences (FLAIR and DWI) indicate that cytotoxic edema has set in, which may portend an unfavorable outcome. The authors imply that CTP is useful but not superior to MRI in the diagnosis of PRES.
The management of this condition depends on the etiology and should be initiated in a timely manner. Treatment of the underlying cause is typically sufficient to reverse this condition. However, a word of caution: if treatment is delayed or the insult is prolonged, the injury can become irreversible and progress to brain infarction. Additionally, there may be hemorrhagic complications and raised intracranial pressure contributing further to the cerebral damage [8]. The MAP should be reduced quickly but with caution in cases of hypertensive PRES. In cases of non-hypertensive PRES, especially those caused by antineoplastic drugs, the offending agent should be withdrawn quickly to avoid further damage to the blood brain barrier. In the setting of transplant, alternative medications can be substituted to avoid organ rejection [9]. If an autoimmune etiology is suspected, then immunosuppression has a role in the management [10]. Seizures are managed with anti-epileptic medications.
Conclusions
Posterior reversible encephalopathy syndrome is a relatively rare syndrome that sometimes presents as a stroke mimic. As such, it is important for the emergency physician to recognize. Urgent recognition and early initiation of management of this condition are imperative as it directly impacts the neurological outcome. Brain CT perfusion can play an important role in the diagnosis.
Consent
Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal. | 2017-04-01T02:56:27.333Z | 2012-02-29T00:00:00.000 | {
"year": 2012,
"sha1": "c6689e5c37ee891c5ba0575cdfa9f25dd8f0b617",
"oa_license": "CCBY",
"oa_url": "https://intjem.biomedcentral.com/track/pdf/10.1186/1865-1380-5-12",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e75999e85cc2aa8077aeb9e588a355c0e900d9ab",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3586398 | pes2o/s2orc | v3-fos-license | The clinicopathologic relevance and prognostic value of tumor deposits and the applicability of N1c category in rectal cancer with preoperative radiotherapy
The clinicopathologic relevance and prognostic value of tumor deposits in colorectal cancer have been widely demonstrated. However, there are still debates about the prognostic value of tumor deposits and the applicability of the N1c category in rectal cancer with preoperative radiotherapy. In this study, rectal cancers with preoperative radiotherapy followed by resection of the primary tumor registered in the Surveillance, Epidemiology and End Results (SEER) database from 2010-2012 were analyzed. There were 4,813 cases eligible for this study, and tumor deposits were found in 514 (10.7%) cases. The presence of tumor deposits was significantly associated with several aggressive characteristics, including poorer tumor differentiation, more advanced ypT category, ypN category and ypTNM stage, distant metastasis, elevated carcinoembryonic antigen, and higher positive rates of circumferential resection margin and perineural invasion (all P ≤ 0.001). Tumor deposit was also an independent negative prognostic factor for cancer-specific survival in rectal cancer with preoperative radiotherapy (adjusted HR and 95% CI: 2.25 (1.51 – 3.35)). The N1c category had significantly worse survival compared with the N0 category (adjusted HR and 95% CI: 2.41 (1.24 – 4.69)). In conclusion, tumor deposit was a significant and independent prognostic factor, and the N1c category of the 7th edition of the AJCC/TNM staging system was applicable in rectal cancer with preoperative radiotherapy.
INTRODUCTION
Rectal cancer is one of the most common malignancies of the digestive system. Along with colon cancer, it ranks near the top of cancer incidence and cause-specific death worldwide [1][2][3]. Surgery is the only curable treatment for early stage cases [4]. Although some details remain to be explored, preoperative chemoradiotherapy has become the standard treatment strategy for locally advanced rectal cancer [5][6][7][8][9][10]. Adjuvant chemotherapy does not significantly reduce recurrence, improve disease-free survival (DFS) or overall survival (OS) according to recent studies [11,12]. Identification of significant prognostic factors helps to determine subgroups with high risk, who may benefit from subsequent systemic chemotherapy.
The value of tumor deposits (TDs) has been widely explored in colorectal cancer. TDs have been reported to be associated with aggressive tumor features, including vascular invasion [13,14], perineural invasion [13], depth of tumor invasion and regional lymph nodes metastasis [15][16][17]. In addition, many studies have confirmed an inverse association of TDs with survival in colorectal cancer.
However, in rectal cancer with preoperative radiotherapy, only three studies investigated the prognostic value of TDs, and debates existed. The first study specifically evaluating the prognostic value of TDs was conducted by Song JS et al. With a retrospective review of 136 rectal cancers staged as ypT3N0M0 after preoperative chemoradiotherapy, they identified TDs in 16 cases. They found no significant differences in either DFS or OS between TDs-negative and TDs-positive cases [18]. By contrast, Gopal P et al. analyzed 110 rectal cancers with preoperative chemoradiotherapy, and found TDs to be associated with a trend toward a higher local recurrence rate and significantly decreased survival [19]. Afterwards, another retrospective study by Zhang LN et al. with 310 locally advanced rectal cancers receiving preoperative chemoradiotherapy demonstrated TDs to be a significant negative prognostic factor for DFS and OS [20]. The conclusions were inconsistent and all the studies were of small sample sizes. Thus, we conducted this analysis with a large-sized sample based on the Surveillance, Epidemiology and End Results (SEER) database to evaluate the prognostic value of TDs in rectal cancer with preoperative radiotherapy.
The relevance of TDs with clinicopathologic characteristics in rectal cancer with preoperative radiotherapy
In general, there were 14,572 rectal adenocarcinomas identified from the SEER database in this study. After further selection of cases by the information on surgery and radiotherapy, we obtained 5,439 cases who received preoperative radiotherapy followed by resection of the primary rectal cancer. Among them, the information for TDs (Absent / Present) was available in 4,813 (88.5%) cases, and these cases were finally included in this study. TDs were present in 514 (10.7%) cases. The presence of TDs was not associated with gender (Male / Female), age (≤ 59 / > 59 yrs) or tumor size (≤ 4 / > 4 cm). TDs were present in tumors with more aggressive features, including poorer differentiation, distant metastasis, higher carcinoembryonic antigen (CEA) level, and higher rates of circumferential resection margin (CRM) involvement and perineural invasion (all P < 0.001, Table 1). In addition, a sequential elevation of the positive rate of TDs was observed along with the progression of ypT category, ypN category and ypTNM stage (all P < 0.001, Table 1).
The prognostic value of TDs in regional lymph nodes negative rectal cancer with preoperative radiotherapy
In the 7th edition of the AJCC/TNM staging system, TDs were adopted in regional lymph nodes negative colorectal cancer to establish a subclassification of the N1c category. Since we had demonstrated the prognostic value of TDs in rectal cancer with preoperative radiotherapy, it was interesting to analyze the value of the N1c category in rectal cancer with preoperative radiotherapy. There were 3,133 cases classified as regional lymph nodes negative among the 4,813 rectal cancer cases. We conducted univariate and multivariate survival analysis to evaluate the prognostic value of TDs in regional lymph nodes negative rectal cancer with preoperative radiotherapy. The results are shown in Table 3. Univariate analysis identified age (≤ 60 / > 60 yrs, P = 0.01), tumor differentiation (Well differentiated / Moderately differentiated / Poorly differentiated or undifferentiated, P = 0.04), ypT category (Tis+T1 / T2 / T3 / T4, P < 0.001), distant metastasis (No / Yes, P < 0.001), marital status (Widowed / Married / Others, P = 0.01), CRM (Negative / Positive, P = 0.01), perineural invasion (Negative / Positive, P < 0.001) and TDs (Negative / Positive, P < 0.001, Figure 2) to be significant prognostic factors for rectal cancer-specific survival. In multivariate analysis, TDs (P = 0.01, HR and 95% CI: 2.41 (1.24 - 4.69)) remained an independent prognostic factor. In addition, CRM (P = 0.02) and perineural invasion (P = 0.03) were also significant prognostic factors. Older patients (> 60 yrs) were found to have a trend of worse survival than younger patients (P = 0.06). Since regional lymph nodes negative cases with TDs were classified as N1c in the 7th AJCC/TNM staging system, our analysis proved the rationale of the N1c category in rectal cancer with preoperative radiotherapy.
DISCUSSION
We demonstrated the prognostic value of TDs and the rationale of the N1c category in rectal cancer with preoperative radiotherapy using the SEER database registered from 2010-2012 in this study. To our knowledge, this was so far the largest study to investigate the prognostic value of TDs in rectal cancer with preoperative radiotherapy. Firstly mentioned in the 5th edition of the AJCC/TNM staging system, the definition of TDs had evolved along with the release of subsequent editions. During the evolution of the TDs definition, the clinicopathologic relevance and prognostic value of TDs had been widely investigated and confirmed in colorectal cancer [13,15,17,[21][22][23][24]. However, the applicability of TDs in rectal cancer with preoperative chemoradiotherapy had been doubted due to pathological changes induced by chemoradiotherapy. The feature of tumor regression might present with tumor nodules surrounded by fibroinflammatory stroma, which might cause confusion when distinguishing residual microfoci from TDs [25,26]. Thus, the value of TDs in rectal cancer with preoperative chemoradiotherapy needed further assessment.
In rectal cancer receiving preoperative radiotherapy, consistent with the findings by Gopal P et al [19] and Zhang LN et al [20], we demonstrated the relevance of TDs with several aggressive tumor features, including more intensive regional lymph nodes metastasis [19,20], more perineural invasion [19] and higher CEA level [20]. In addition, we also found TDs to be significantly associated with poorer tumor differentiation, more advanced ypT category and ypTNM stage, distant metastasis, as well as higher positive rates of CRM involvement and perineural invasion. It seemed that TDs were not only indicators of more advanced tumor stage, but also associated with intrinsic tumor aggressiveness. Our study also verified the prognostic value of TDs for rectal cancer-specific survival in rectal cancer with preoperative radiotherapy, which was in accordance with the studies by Gopal P et al and Zhang LN et al. Furthermore, by demonstrating the significant and independent prognostic value of TDs in the regional lymph nodes negative group, we also proved the applicability of the N1c category established by the 7th edition of the AJCC/TNM staging system in rectal cancer with preoperative radiotherapy.
Interestingly, in our study, by univariate analysis, although ypT and ypN categories were significant prognostic factors for rectal cancer-specific survival, after adjusting for other prognostic factors, ypT category turned out not to be an independent prognostic factor in rectal cancer with preoperative radiotherapy, and also in cases with negative regional lymph nodes. We further conducted univariate and multivariate analysis for the prognostic value of ypTNM stage (0+I/II/III/IV) in rectal cancer with preoperative radiotherapy (Supplementary Table S1). We found that only stage IV patients had significantly different survival compared with stage 0+I patients. This discovery called for a survey of the literature about the applicability of ypTNM stage in rectal cancer with preoperative radiotherapy or preoperative chemoradiotherapy. As a result, we found that it was actually not profoundly researched. Song JS et al proposed the irrelevance of ypTNM stage to DFS and OS in rectal cancer with preoperative chemoradiotherapy [18]. Although several other studies indicated a strong association of ypTNM stage with survival in rectal cancer with preoperative chemoradiotherapy, their conclusions were based on univariate analysis [20,27] or incomplete multivariate analysis [28,29], for example, adjusted only by age and sex [28]. We did not find any study that included some important postoperative pathological factors in survival analysis, such as TDs and CRM. Since our study identified these postoperative pathological features to have even more important prognostic value compared with ypT category and ypTNM stage, more comprehensive investigations about the applicability of ypTNM stage in rectal cancer with preoperative radiotherapy are further needed, especially for those without distant metastasis.
Several limitations of our study are noteworthy. Because information on chemotherapy was not available in the SEER database, we could only include rectal cancer with preoperative radiotherapy for analysis in this study. Chemoradiotherapy is the standard treatment for rectal cancer, thus this was one of the major limitations of our study. In addition, several important pathologic factors, including tumor regression grade and vascular invasion, were not accessible, thus the conclusions of our study were not adjusted for these important prognostic factors. More importantly, the inter-observer variability in diagnosing TDs was particularly challenging for SEER data due to the various pathologists involved in the data generation. In addition, radiotherapy could increase tissue fibrosis and might cause false-positive diagnoses of TDs. The uniformity of diagnosis and possible false-positive diagnosis were major limitations of our study. These limitations could not be resolved and should be particularly noted in our study with SEER data. Given these limitations, further verification of our conclusions using a population with more complete information is warranted. In conclusion, tumor deposit was a significant and independent prognostic factor, and the N1c category of the 7th edition of the AJCC/TNM staging system was applicable in rectal cancer with preoperative radiotherapy.
Ethics statement
This study was deemed exempt from institutional review board approval by Sun Yat-sen University Cancer Center and informed consent was waived. This study was conducted in accordance with the ethical standards of the World Medical Association Declaration of Helsinki.
SEER database and case selection
The dataset used for analysis in this study was based on the November 2014 data submission "Incidence-SEER 18 Regs Research Data + Hurricane Katrina Impacted Louisiana Cases, Nov 2014 Sub (1973-2012 varying)". According to the International Classification of Diseases for Oncology, third edition (ICD-O-3) topography codes and histology codes, adenocarcinoma (Code 8140-8147, 8210-8211, 8220-8221, 8255, 8260-8263, 8480-8481, 8490 and 8574) of the rectum (Code C20.9) was included in this study. In addition, we restricted eligibility to patients with records of the 7th American Joint Committee on Cancer/tumor node metastasis (AJCC/TNM) category from 2010 to 2012. We also excluded cases without follow-up records (survival time code of 0 months) and patients with primary tumors other than rectal cancer. Further restrictions regarding radiation (Radiation sequence with surgery) and surgery (RX Summ--Surg Prim Site (1998+)) were also applied to define the final study population.
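For illustration, the case selection described above can be scripted; the following sketch assumes a case listing exported from SEER*Stat into a pandas DataFrame, and the column names (HISTOLOGY, PRIMARY_SITE, YEAR_DX, SURVIVAL_MONTHS, RADIATION_SEQ, SURGERY_CODE) are hypothetical placeholders rather than the actual SEER field labels.

import pandas as pd

# ICD-O-3 histology codes for rectal adenocarcinoma listed in the text
ADENO_CODES = (
    list(range(8140, 8148)) + [8210, 8211, 8220, 8221, 8255]
    + list(range(8260, 8264)) + [8480, 8481, 8490, 8574]
)

def select_cohort(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the eligibility criteria of the study to a raw case listing (hypothetical columns)."""
    mask = (
        df["HISTOLOGY"].isin(ADENO_CODES)                        # rectal adenocarcinoma histologies
        & (df["PRIMARY_SITE"] == "C20.9")                        # rectum
        & df["YEAR_DX"].between(2010, 2012)                      # 7th AJCC/TNM era
        & (df["SURVIVAL_MONTHS"] > 0)                            # exclude cases without follow-up
        & (df["RADIATION_SEQ"] == "radiation_before_surgery")    # preoperative radiotherapy
        & df["SURGERY_CODE"].notna()                             # resection of the primary tumor
    )
    return df.loc[mask].copy()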
Information for tumor deposits
Tumor deposits were defined as follows in the 7th edition of the AJCC/TNM staging system: "The deposit should be in the pericolorectal fat or adjacent mesocolic fat, it should be away from the leading edge of the tumor, there should be no evidence of residual lymph node tissue, and finally the tumor deposit should be within the lymph drainage area of the primary carcinoma". This differed from the previous definitions in the 5th and 6th editions of the AJCC/TNM staging system, which placed more importance on the size and shape of tumor nodules. With the implementation of the Collaborative Stage Data Collection System Version 2 (CSv2) in 2010, the information on TDs was recorded in the SEER database as site-specific factor (SSF) 4. The codes and descriptions of the SSFs are available at https://cancerstaging.org/cstage/schema/Pages/version0205.aspx.
Statistical analysis
We performed all the analyses with SPSS for Windows V.13.0 (SPSS Inc., Chicago, IL, USA). The association of TDs with clinicopathologic features was assessed using the chi-square test or the Kruskal-Wallis H test. Rectal cancer-specific survival was calculated as the time interval between the diagnosis of rectal cancer and the death attributed to rectal cancer, or censored at the death from other causes or the last visit. Univariate and multivariate survival analyses were computed for the prognostic value of TDs for rectal cancer-specific survival. Survival curves were plotted by the Kaplan-Meier method and compared using the log-rank test. HR and 95% CI were computed with the Cox proportional hazards model. A two-tailed P value < 0.05 was considered statistically significant. Bonferroni correction was applied in univariate analysis.
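As an illustration only (the study itself used SPSS 13.0), the same pipeline can be reproduced in Python; the column names ("td_present", "time_months", "event") and the 0/1 coding of tumor deposits are hypothetical.

import pandas as pd
from scipy.stats import chi2_contingency
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

def association_with_tds(df, feature):
    """Chi-square test of a clinicopathologic feature against tumor deposit status."""
    table = pd.crosstab(df[feature], df["td_present"])
    _, p, _, _ = chi2_contingency(table)
    return p

def km_and_logrank(df):
    """Kaplan-Meier curves for TD-absent (0) vs. TD-present (1) and the log-rank P value."""
    absent, present = df[df["td_present"] == 0], df[df["td_present"] == 1]
    for label, g in (("Absent", absent), ("Present", present)):
        KaplanMeierFitter().fit(g["time_months"], g["event"], label=label)
    return logrank_test(absent["time_months"], present["time_months"],
                        event_observed_A=absent["event"],
                        event_observed_B=present["event"]).p_value

def cox_model(df, covariates):
    """Multivariate Cox proportional hazards model returning HRs with 95% CIs."""
    cph = CoxPHFitter()
    cph.fit(df[covariates + ["time_months", "event"]],
            duration_col="time_months", event_col="event")
    return cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]]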
Figure 1: The survival curves of tumor deposits (Absent / Present) plotted by the Kaplan-Meier method in rectal cancer with preoperative radiotherapy. Patients with tumor deposits had significantly worse rectal cancer-specific survival compared with those without tumor deposits.
Figure 2: The survival curves of tumor deposits (Absent / Present) plotted by the Kaplan-Meier method in regional lymph nodes negative rectal cancer with preoperative radiotherapy. Patients with tumor deposits (categorized as N1c category) had significantly worse rectal cancer-specific survival compared with those without tumor deposits (N0 category).
Table 1 : The association of tumor deposits with clinicopathologic characteristics in rectal cancer with preoperative radiotherapy
a: Significant P value. Abbreviations: CEA, carcinoembryonic antigen; CRM, circumferential resection margin.
Table 2 : Univariate and multivariate analysis for the prognostic value of tumor deposits in rectal cancer with preoperative radiotherapy
a: Significant P value. b: Significant P value after Bonferroni correction. Abbreviations: CRM, circumferential resection margin; HR, hazard ratio; CI, confidence interval.
Table 3 : Univariate and multivariate analysis for the prognostic value of tumor deposits in regional lymph nodes negative rectal cancer with preoperative radiotherapy
a: Significant P value. b: Significant P value after Bonferroni correction. Abbreviations: CRM, circumferential resection margin; HR, hazard ratio; CI, confidence interval. | 2018-04-03T03:56:15.634Z | 2016-09-16T00:00:00.000 | {
"year": 2016,
"sha1": "473b36cf779332e169a16c7a1019613b6edcf1b8",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=12058&path[]=40910",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "473b36cf779332e169a16c7a1019613b6edcf1b8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248958634 | pes2o/s2orc | v3-fos-license | Enhanced Toxicity of Bisphenols Together with UV Filters in Water: Identification of Synergy and Antagonism in Three-Component Mixtures
Contaminants of emerging concern (CEC) localize in the biome in variable combinations of complex mixtures that are often environmentally persistent, bioaccumulate and biomagnify, prompting a need for extensive monitoring. Many cosmetics include UV filters that are listed as CECs, such as benzophenone derivatives (oxybenzone, OXYB), cinnamates (2-ethylhexyl 4-methoxycinnamate, EMC) and camphor derivatives (4-methylbenzylidene-camphor, 4MBC). Furthermore, in numerous water sources, these UV filters have been detected together with Bisphenols (BPs), which are commonly used in plastics and can be physiologically detrimental. We utilized bioluminescent bacteria (Microtox assay) to monitor these CEC mixtures at environmentally relevant doses, and performed the first systematic study involving three sunscreen components (OXYB, 4MBC and EMC) and three BPs (BPA, BPS or BPF). Moreover, a breast cell line and cell viability assay were employed to determine the possible effect of these mixtures on human cells. Toxicity modeling, with concentration addition (CA) and independent action (IA) approaches, was performed, followed by data interpretation using Model Deviation Ratio (MDR) evaluation. The results show that UV filter sunscreen constituents and BPs interact at environmentally relevant concentrations. Of notable interest, mixtures containing any pair of three BPs (e.g., BPA + BPS, BPA + BPF and BPS + BPF), together with one sunscreen component (OXYB, 4MBC or EMC), showed strong synergy or overadditive effects. On the other hand, mixtures containing two UV filters (any pair of OXYB, 4MBC and EMC) and one BP (BPA, BPS or BPF) had a strong propensity towards concentration dependent underestimation. The three-component mixtures of UV filters (4MBC, EMC and OXYB) acted in an antagonistic manner toward each other, which was confirmed using a human cell line model. This study is one of the most comprehensive involving sunscreen constituents and BPs in complex mixtures, and provides new insights into potentially important interactions between these compounds.
Introduction
For years, the increase in anthropopressure on the natural environment, resulting from economic, industrial and agricultural activity, has caused significant changes to both abiotic and biotic systems. Numerous reports provide information about contaminant facilitated degradation and disintegration of the natural environment. For several years, a key interest has been to understand the environmental and biological effects of compounds generally referred to as CECs (contaminants of emerging concern) [1]. CECs typically exhibit a high level of environmental persistence and are not easily biodegradable, often demonstrating The co-occurrence and the resulting interactions of contaminants make it extremely difficult to foresee the environmental and physiological effects of such exposure. Therefore, it becomes necessary to identify and determine the type of interactions that occur between contaminants (first in model mixtures then in relation to environmental concentration levels). To date, the research carried out in this area has primarily focused on determining the impact of mixtures comprising UV filters [16]. However, it is apparent in the environment that such contaminants co-occur in mixtures with other xenobiotic compounds, which can significantly alter the level and action of their toxicity. Our study is the first systematic attempt to understand the combined impact of these emerging pollutants using bioluminescent bacteria and then testing if these responses can be recapitulated in human cancer cells. In this way, we not only test the effects of various pollutant mixtures, but try to confirm the relevance of the results using human cells.
Results
In the following subsections, the results of mixture toxicity studies with bioluminescent bacteria are given. Underestimation was observed in nine cases in our studies of the impact of EMC on a BPA + OXYB mixture. At the lowest concentrations, this three-component mixture had a trend towards synergy (for CA modeling, refer to Supplementary Table S1 and Figure 1B). This may suggest and justify continuing studies in the direction of low-content mixtures, which more precisely reflect environmentally relevant mixture compositions. Nevertheless, both IA and CA models adequately showed the variation of toxicity with three-component mixtures in our study with bioluminescent bacteria.
BPA + 4MBC + Second Bisphenol Analogue
The results of the impact of a third component on the toxicity of a BPA and 4MBC mixture are presented in Supplementary Table S2 and Figure 1C, in the form of MDR values. Interestingly, a trend was observed when the concentration of a second bisphenol analogue (BPS) was increased, where the mixture components became synergistic in their behavior (as in the case of BPA + OXYB + second bisphenol analogue). In the case of BPF, with CA modeling, this trend was also observable, but its magnitude was weaker. IA modeling resulted in a tendency towards overestimation.
BPA + 4MBC + Second UV Filter
The impact of EMC on the toxicity of a BPA + 4MBC mixture was very well forecast by the IA model (refer to Supplementary Table S2); the CA model predicted several instances of underestimation, and this behavior was observed in only six cases of EMC C1 and C2.
BPA + EMC + Second Bisphenol Analogue
A CA model of a BPA, EMC and BPS mixture showed synergistic effects in all cases and underestimated the impact (Figure 1E and electronic Supplementary Table S3). Some concentration dependence was also visible in mixtures containing the lowest BPS content, as well as with BPA C2 and C3. With increasing EMC concentration, the mixture became more synergistic. Similar behavior was noticeable for mixtures where BPF was present as the third component (Figure 1F), but there was only one confirmed case of synergy. On the other hand, both for BPS and BPF, in almost all cases, the IA model showed no significant interactions (only one case of overestimation was observed-C1 BPS + C1 BPA + C2 EMC). The results for mixtures of BPS and OXYB with a second bisphenol are shown in Supplementary Figure S2A. For all cases with BPS C1, no significant discrepancies between observed and calculated toxicity values were present using either the CA or the IA model. The OXYB and BPF mixture showed underestimation (for C1 BPF) and synergy (C2 and C3 of BPF) with BPS C2, having a clear concentration-dependence trend; interestingly, BPS C3 made all mixtures synergistic.
BPS + OXYB + Second UV Filter
The impacts of 4MBC and EMC on BPS + OXYB toxicity are presented in Supplementary Figure S2B,C, respectively, as well as in Supplementary Table S4. At the lowest BPS concentration studied, both models indicated no interactions between any chemicals found in the mixture. The BPS C2 and C3 concentration dependence trend was similar to the one observed with the BPS + OXYB + BPF cocktail; however, the IA model showed a trend towards overestimation. Interestingly, EMC C2 and C3 exhibited a strong, undeniable synergistic impact on all BPS + OXYB mixtures, while the impact of EMC C1 was underestimated, with a trend towards synergy at the lowest concentrations of all analytes present in the cocktail. The results for BPS + 4MBC mixtures are shown in Supplementary Figure S2D, as well as in Supplementary Table S5. EMC had a tendency to underestimate the impact on the BPS and 4MBC mixture in almost all cases studied. Of note, at environmentally relevant levels, almost all other mixtures confirmed the synergistic impact (with the CA model) of these pollutant cocktails.
BPS + EMC + Second Bisphenol
The CA model of BPS + EMC mixtures with BPF again showed interesting trends (Supplementary Figure S2E and Supplementary Table S6). The lowest BPS concentrations, with increasing concentrations of BPF and EMC, had a trend towards synergy. The magnitude of this trend was not strong but was noticeable. The BPS C2 and C3 concentration levels trended towards a slight weakening of synergy, the only exception being the mixture of BPS C2 + EMC C3 + BPF C2, which showed signs of underestimation. The IA model was again resistant to concentration variations and was not suitable for predicting plausible synergy/antagonism of chemicals acting in a similar manner.
BPF + 4MBC + Second Bisphenol
In the case of CA modeling, BPS had a clear synergistic impact on the BPF + 4MBC toxicological output of the bioluminescent bacteria. Interestingly, a trend of going from underestimation to synergy was clearly correlated with increasing BPS content (Supplementary Figure S3A and Supplementary Table S7). From an aqueous ecosystems point of view, this may demonstrate the detrimental effects of replacing BPA with BPS in everyday products/plastics production and uncontrolled waste disposal.
BPF + 4MBC + Second UV Filter
The impact of EMC and OXYB on the toxicity of BPF + 4MBC is presented in a graphic manner in Supplementary Figure S3B,C, respectively (as well as in Supplementary Table S7). No cases of synergy were present (except one result of the mixture with OXYB), and underestimation was confirmed in all cases of BPF C3.
BPF + OXYB + Second UV Filter
The results of MDR modeling for BPF + OXYB + EMC are presented in Figure S3D (and in Supplementary Table S8). The behavior of these three-component mixtures was similar to the ones shown in Supplementary Figure S2B,C, with many underestimated cases confirmed. Only one case of synergy was indicated.
Mixture of Three UV Filters Studied
The CA model for three compounds theoretically acting with the same MOA (Mode of Action) reflects the impact of such mixtures on bioluminescent bacteria (data presented in Supplementary Table S9). The results of MDR for the IA modeling are shown in Figure 2, where a clear trend towards overestimation was visible, indicating that sunscreen components compete at the concentrations studied.
Human Breast Cell Toxicity with MCF10A-A Non-Tumoral, Epithelial Cell Line
In order to study the possible impact of selected UV filters (EMC and 4MBC) with BPA and DBP (dibutyl phthalate) on human breast cells (with a non-tumoral, epithelial cell line named MCF10A), we first performed dose-response studies on each of the single substances (data not shown), which was followed by experiments using mixtures, performed for 24 and 72 h. The concentrations studied are given in Section 4.6. (for CA and IA models). Both UV filters studied had an antagonistic impact on the BPA and DBP mixture (see Table 1). The IA model was the more reliable one, as the chemicals belonged to different classes. The studies showed a small decrement of magnitude for antagonism with increasing concentrations of DBP in the mixtures with prolonged exposure time, while 4MBC appeared to be a stronger antagonist when compared to EMC.
Discussion
UV filter compounds are commonly used as active ingredients in many personal care products (PCPs) to protect against sunburns, premature aging and skin cancer caused by UV light irradiation [17]. In order to ensure PCPs provide adequate protection, a mixture of individual substances, usually containing three to eight UV filters, is common [3]. Growing concern among consumers about the harmful effects of solar radiation has significantly increased the use of products containing sunscreens, which is directly contributing to their presence in the biome. Currently, there are no regulations inhibiting such combinations [18]. As a consequence, UV filters have been detected in different environmental samples in the ng/L to low µg/L range. For example, OXYB has been detected in seawater samples in the range 13.2 ± 0.4-31.7 ± 0.25 ng/L [19] and <5 to 19.2 µg/L [17]. Moreover, OXYB has been detected in freshwater at levels ranging from 5 to 79 ng/L, as well as in bottom sediments ranging from <7-82.1 ng/g d.w. [19] and <0.03-65.7 ng/kg d.w. [20]. Some organic UV filters (such as 2-hydroxy-4-methoxybenzophenone, 4-methylbenzylidene camphor and isoamyl 4-methoxycinnamate, among others) have been shown to accumulate in mussels, corals, crabs and fish, with concentrations ranging from a few to hundreds of ng/g d.w. (e.g., OXYB and EMC have been detected in cod liver tissue in the range <20-1037 ng/g and <30-36.9 ng/g, respectively [19]) and in the livers of dolphins [21].
Our study shows, for the first time, how given selected mixtures of BPs and sunscreen constituents interact in three-component mixtures, at environmentally relevant concentrations. Interestingly, mixtures containing two BPs (BPA + BPS, BPA + BPF, BPS + BPF) and one sunscreen component (OXYB, 4MBC or EMC), show strong synergy in a prevailing number of tests. Moreover, mixtures containing two UV filters (any pair of OXYB, 4MBC and EMC) and one BP (BPA, BPS or BPF) have a strong underestimation potential, which is clearly concentration dependent. The three-component mixtures of UV filters (4MBC, EMC and OXYB) act in an antagonistic manner for each other. Interestingly, the antagonist effects of BPs and these UV filters are conserved in a model human cell line, indicating that these results may be relevant for mammals. We also show that BPF and BPS have as significant impact on the toxicity of these mixtures, which is in agreement with the effects of BPA.
The results show important synergistic interactions between BPs and the three sunscreen constituents, which belong to different chemical classes. The strongest synergy was observed within three-component mixtures containing two different types of bisphenols and each of the three sunscreens (see Figure 1A,C,E). The results point out that BPS seems to be important for the synergy action for both OXYB and 4MBC, while the highest doses of EMC are important for the synergy action with the two other BPs. The consistent dose dependency driven by both BPS and EMC suggests that further studies are needed to better understand the mechanisms behind these three-component mixture effects. Overall, our new data suggest that closer attention should be paid to the potential of sunscreens to be much more detrimental in environmentally occurring pollutant mixtures, when compared to their individual effects. The synergy of these UV filters with BPs, which are known to leak out of plastic containers, should prompt investigations into the coexistence of these products in several different kinds of situations. This includes when sunscreens are stored in plastic containers for a long time, sometimes at relatively high temperatures, and then applied to the human skin in relatively high doses [22]. It is relevant, in this context, that sunscreens have the ability to penetrate deep into skin tissues, where they are likely to enable the transfer of other compounds [23][24][25]. For example, in comprehensive skin tests, EMC was shown to penetrate deep into skin tissue. While the penetration was less significant, similar observations have been reported for OXYB and its metabolites [26]. It was also reported that exposure of human macrophages to EMC led to reduced immunity, increasing the risk of asthma and allergy-related complications, due to elevated excretion of cytokines [27]. Limited studies can be found on the interactions between sunscreen constituents and other pollutants [28]. In the work of Brand et al. [29], scientists confirmed enhanced penetration of selected pesticides (e.g., 2,4-D (2,4-Dichlorophenoxyacetic acid), DEET (N,N-diethyl-m-toluamide), paraquat, parathion and malathion) when hairless mouse skin was co-exposed to titanium and zinc oxides [29]. In the work of Marrot [30], the possible explanation for elevated atopy or eczema during periods of increased pollution exposure (heavy metals or polycyclic aromatic hydrocarbons (PAHs)) includes oxidative stress, inflammation and metabolic impairments correlated to more frequent use of sunscreens. It would be important to investigate if BPs can potentially contribute to such sunscreen-related effects.
The antagonistic actions of these pollutants at environmentally relevant doses are also potentially very important. Our finding that three-component mixtures of UV filters (4MBC, EMC and OXYB) act in an antagonistic manner highlights their potential underestimation for biological consequences. Moreover, we found that such antagonist actions could be replicated in a human breast cell model, suggesting this may also be relevant for mammalian models, something that is worth further experimental consideration. Such antagonist effects can potentially skew conclusions of testing in different biological models. We also highlight the importance of using different experimental models to confirm such findings. Our study highlights the importance of dose dependency for both these types of pollutants, justifying the necessity to perform mixture studies using a wide range of concentrations.
Mixtures of UV filters and BPs have never been studied before with the help of marine organisms, and these results open up a new field for experimental design. Until now, only single pollutants belonging to UV filters have been studied with microorganisms belonging to different trophic levels. OXYB appears to be the most toxic substance among the UV filters, considering studies on microalgae (I. galbana), where OXYB was found to have an EC50 of 13.87 µg/L, followed by EMC (EC50 74.73 µg/L) and 4MBC (EC50 171.5 µg/L) [31]. In another model, employing Mediterranean mussel (M. galloprovincialis), the 4MBC solution (EC50 587 µg/L) was found to be the most toxic among the UV filters studied, while slightly lower toxicities were found for EMC (EC50 3110 µg/L) and OXYB (EC50 3470 µg/L). Studies using sea urchins (P. lividus) showed that this organism was more susceptible to EMC (EC50 284 µg/L) and 4MBC (EC50 854 µg/L), when compared with exposure to OXYB (EC50 3280 µg/L) [32]. Based on these results, it can be suggested that each marine organism responds to BPs and sunscreens components in a different manner. Individual predispositions and environmental conditions may also contribute, but further studies investigating the co-present pollutants are warranted to understand real-world consequences.
4MBC is known to cause detrimental effects in several test models. For example, 4MBC has been extensively investigated in terms of impact on reproductive systems. The Japanese rice fish, also known as the medaka (Oryzias latipes), is a fresh and brackish water fish, making it a good model for upper-tier ecotoxicological battery testing. Exposing adult male medaka to 4MBC resulted in disruptive spermatogenesis at doses of 5 and 50 µg/L, while 5 µg/L solutions greatly impacted estradiol and vitellogenin concentrations in the female plasma [33]. Furthermore, exposure of medaka eggs to a 4MBC solution resulted in prolonged hatching times. Exposure of Japanese mussels (Ruditapes philippinarum) to 4MBC at concentrations from 1-100 µg/L resulted in physiological stress due to elevated levels of antioxidant enzymes (GST) and proteins related to the inhibition of apoptosis (BCL2) and cellular stress (GADD). Sea snail larvae (Sinum vittatum) exposed to 2.57 mg/kg solutions of 4MBC had a decreased harvesting rate, while oxygen stress indicators were not impaired in these test organisms [34]. Moreover, cell viability, oxidative stress and growth impairments as toxicological endpoints were studied with Tetrahymena thermophila protozoans [35] exposed separately to 4MBC and EMC. The EC 50 of 4MBC, after 24 h exposure, reached 5.125 mg/L, while toxicological endpoints of growth inhibition and cell vitality (5 mg/L at 6 h) were correlated with an ability to break down cellular membranes at 15 mg/L after 4 h of exposure. 4MBC has also been studied in mammals, including humans and rats, where plasma concentration levels were measured after application of a sunscreen product (containing 4% of 4MBC) to the skin [36]; the maximum plasma values reached 200 pmol/mL. Only a small portion of 4MBC glucuronide metabolite was detected in human urine samples, confirming the low metabolic capability of humans in 4MBC transformation processes. Bearing in mind all of the above, it becomes evident how important it is to study CEC mixtures in various organisms and cell lines, which belong to different trophic levels, as pollutant cocktails greatly lower effective concentration values in the case of synergy confirmation.
The UV filters and BPs we tested are commonly found together in numerous water bodies [37]. Large studies, performed to determine over 100 CECs, confirmed the co-presence of sunscreen active components and BPs in effluents from wastewater treatment plants [38], although in lower concentrations than studied here. Oxybenzone and BPs were also instrumentally determined in samples collected from the Niger River delta [39], confirming the necessity of considering any environmental sample as a cocktail of numerous unknown ingredients and their metabolites/transformation products. Certainly, more work is needed to learn the mechanism of the enhanced toxicity of the mixtures studied here, but our findings suggest that the actions of UV filters and BPs in a mixture interfere with each other. Mixtures containing any two of the BPs studied and one sunscreen show a clear tendency towards underestimation and synergy, while cocktails containing any two of the UV filters studied and one BP show a slightly weaker effect. Overall, we find that underestimation events seem to be more frequent and there is a clear concentration dependence trend. Our results suggest that more studies looking at the direct interaction of these sunscreen and BP molecules with their potential cellular protein targets are warranted.
Preparation of Model Solutions
Before starting the tests, working solutions (C1 = 5 µM, C2 = 10 µM, C3 = 20 µM) were prepared by diluting the stock solutions with MeOH and ultrapure Milli-Q water; the stock solutions had previously been made by dissolving accurately weighed amounts of analytical standards in HPLC grade MeOH (4 mg/mL). All individual stock solutions were stored in the dark at −20 °C. The working solutions were freshly prepared before each set of analyses. The amount of MeOH was set at a maximum of 2% in these solutions for all toxicity tests and was used as the solvent control medium. In Table 2, the basic information on the studied sunscreens and BPA analogues is given [21,40].
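A worked example of the dilution arithmetic behind these working solutions is given below; the 4 mg/mL stock and the 5/10/20 µM targets come from the text, while the molar mass used here (BPA, ~228.29 g/mol) stands in for the values that would be read from Table 2 for each compound.

def stock_volume_needed(target_um, final_ml, stock_mg_per_ml=4.0, molar_mass=228.29):
    """Volume of stock (in uL) needed to prepare final_ml of a target_um uM working solution."""
    stock_um = stock_mg_per_ml / molar_mass * 1e6   # mg/mL -> umol/L
    return target_um / stock_um * final_ml * 1000   # C1*V1 = C2*V2, result in uL

for c in (5, 10, 20):  # C1, C2 and C3 from the text
    print(f"{c} uM in 10 mL requires {stock_volume_needed(c, 10):.1f} uL of stock")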
Acute Toxicity Determination of the Mixtures
In order to determine the toxicity of the analytes of interest and their ternary mixtures, a standard Microtox® acute assay was performed. To achieve a ready-to-use bacterial suspension, the lyophilized Aliivibrio fischeri bacteria were rehydrated with 1 mL of reconstitution solution (RS). The cell suspension was immediately transferred into a glass cuvette placed in the reagent well of the analyzer maintained at 5.5 ± 1.0 °C. Subsequently, 100 µL of bacterial solution and 900 µL of working solution were added into the test vials. Before starting the test, an osmotic adjustment was performed, using a saline solution to make the sample salinity optimal (above 2%) for Aliivibrio fischeri. The cuvettes were incubated at 15 °C. The toxicity was determined based on the inhibition of the light naturally emitted (at a 490 nm wavelength) by the bacteria after its exposure to a standard solution/mixture sample. The bioluminescence level was detected with a Microtox® Model 500 Analyzer. Measurements of the luminescent output of the bacteria were recorded and compared with the light output of the control sample after 30 min. Each assay was performed in duplicate. For quality control-according to delivery certificates-phenol and copper(II) sulphate were used as positive controls.
The change of bioluminescence after time t was calculated according to Equation (1):

INH_t (%) = [G / (1 + G)] × 100 (1)

where G (gamma correction factor) was calculated with Equation (2):

G = (R_t × I_0 − I_t) / I_t (2)

where I_t is the bioluminescence of the bacteria in the real sample at time t, I_0 is the initial bioluminescence of the bacteria in the real sample and R_t is calculated with Equation (3):

R_t = I_ct / I_c0 (3)

where I_ct is the bioluminescence of the bacteria in the control sample after t time of incubation and I_c0 is the initial bioluminescence of the bacteria in the control.
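A minimal computational sketch of this calculation follows, assuming the standard Microtox relations implied by the variable definitions above; the example signal values are made up.

def microtox_inhibition(i_0, i_t, i_c0, i_ct):
    """Percent inhibition of bioluminescence after t minutes of exposure."""
    r_t = i_ct / i_c0               # Eq. (3): drift of the untreated control
    g = (r_t * i_0 - i_t) / i_t     # Eq. (2): gamma correction factor
    return 100.0 * g / (1.0 + g)    # Eq. (1): change of bioluminescence (%)

# Example: sample signal drops from 95 to 40 while the control drifts from 100 to 90
print(microtox_inhibition(i_0=95, i_t=40, i_c0=100, i_ct=90))  # ~53.2 %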
Modeling and MDRs Calculations
The modeling calculations and MDR evaluation were carried out according to standardized procedures described in detail in [41]. Mathematical modeling was performed with concentration addition (CA) and independent action (IA) approaches, followed by data interpretation with the Model Deviation Ratio (MDR) evaluation, as presented in detail in [42]. Here, the CA was assessed by using Equation (4):

ECx_mix = ( Σ_{i=1..n} p_i / ECx_i )^(−1) (4)

where ECx_mix is the total concentration of the mixture that causes x effect, p_i indicates the proportion of component i in the mixture, n indicates the number of components in the mixture and ECx_i indicates the concentration of component i that would cause x effect. The independent action model is used to test toxicants in a mixture for a dissimilar mode of action, assuming that they act independently; the IA model is a statistical approach to predict the chance that one of multiple events will occur. The total mixture effect is calculated using Equation (5):

E(c_mix) = 1 − Π_{i=1..n} (1 − E(c_i)) (5)

where E(c_mix) is the total effect of the mixture and E(c_i) is the effect expected from component i. The CA model does not account for a possible interaction between different chemicals in the mixture, and the deviations of the tested mixture toxicity from the predicted one could be evidence of synergistic or antagonistic interaction between chemicals. To outline significant deviations (interactions between chemicals), the model deviation ratio (MDR) approach is applied. MDR (unitless) is defined with Equation (6):

MDR = Expected toxicity / Observed toxicity (6)

where Expected toxicity is the effective concentration toxicity for the mixture predicted by the CA/IA model and Observed toxicity is the effective concentration toxicity for the mixture obtained from toxicity testing.
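The two predictions and the MDR screening can be expressed compactly as follows; this is only a sketch, and the effect concentrations and proportions used in the example are illustrative, not study data.

import numpy as np

def ec_mix_ca(proportions, ecx_components):
    """Concentration addition (Eq. 4): predicted ECx of the mixture."""
    p = np.asarray(proportions, dtype=float)
    ecx = np.asarray(ecx_components, dtype=float)
    return 1.0 / np.sum(p / ecx)

def effect_ia(component_effects):
    """Independent action (Eq. 5): combined effect of independently acting components."""
    e = np.asarray(component_effects, dtype=float)
    return 1.0 - np.prod(1.0 - e)

def mdr(expected_toxicity, observed_toxicity):
    """Model deviation ratio (Eq. 6); values well above 1 point to synergy, well below 1 to antagonism."""
    return expected_toxicity / observed_toxicity

# Example: three components in equal proportions with individual EC50s of 5, 10 and 20 uM
predicted_ec50 = ec_mix_ca([1/3, 1/3, 1/3], [5.0, 10.0, 20.0])   # ~8.57 uM
print(predicted_ec50, mdr(predicted_ec50, observed_toxicity=4.0))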
Methodology of Human Breast Cell Line Studies
Human breast epithelial cells (MCF10A) were seeded in 250 µL of complete growth medium within the inner wells of a 96-well plate (Corning Life Science, Amsterdam, the Netherlands) at a density of approximately 2 × 10^4 cells/well. The outer wells were excluded from the experiment and filled with PBS. To permit cell adhesion and adaptation to the novel environment, the cells were placed in a humidified incubator at 37 °C and 5% CO2 for 24 h. After 24 h, the cells were exposed to different combinations of pollutant mixtures (C1 = 1 µM, C2 = 5 µM and C3 = 10 µM), which were dissolved in DMSO (dimethyl sulfoxide, Invitrogen™, Catalog number: D12345). Then, after 24 and 72 h (in different trials), PrestoBlue™ HS Cell Viability Reagent (Invitrogen™, Catalog number: P50200, Pub. No. MAN0018371) was employed to test the cell viability, which reflects cell proliferation. The data were further analyzed in Microsoft Excel and GraphPad Prism (version 9.0.0). Each treatment was conducted with at least three biological replicates.
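For completeness, the normalization underlying the viability readout can be sketched as below; the study analysed the raw plate data in Excel and GraphPad Prism, so this Python version and its well values are purely illustrative.

import numpy as np

def percent_viability(treated_signal, vehicle_signal, blank_signal=0.0):
    """Blank-corrected PrestoBlue signal of treated wells expressed as % of the DMSO control."""
    treated = np.asarray(treated_signal, dtype=float) - blank_signal
    vehicle = np.mean(np.asarray(vehicle_signal, dtype=float) - blank_signal)
    return 100.0 * treated / vehicle

print(percent_viability([5200, 4900, 5100], [6400, 6550, 6300], blank_signal=400))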
Conclusions
The increasing emissions of CECs into the environment prompts scientists to intensify their research related to the determination of interactions between chemicals co-present in complex matrices. Mixtures of CECs are found in practically all industrial and public service branches; from pharmaceutics, health, wastewater treatment, catalyst applications, ecosystem monitoring, life cycle assessment and many others. Thus, it is increasingly important not only to develop new tools to instrumentally determine the content of given CEC mixtures, but also to validate and adjust already known approaches. Further development of advanced mathematical tools to confirm possible interactions that occur between pollutants are warranted, especially for mixtures of higher orders.
This study is one of the most comprehensive on the interactions of compounds in complex mixtures, providing evidence for the notion that concentrations of a given CEC plays a crucial, and sometimes unpredictable, role in exerting cellular or physiological impacts on a living organism. Our mathematical approach and experimental setup enable the collection of rational and reliable data, which prompt conclusions on how to assess the potential overadditive or canceling effects of cocktail components. The findings presented here are an important next step for a better understanding of how to perform complex toxicological studies in a systematic manner, and how to evaluate a model's validity relating to dissimilar groups of pollutants and cells/organisms belonging to different trophic levels.
In our opinion, this work adds new insights into the field of mixture impacts on biota and confirms the necessity of increasing the number of studies on complex mixtures of chemicals affecting biota and of gradually introducing requirements on admissible concentrations of particular chemicals, bearing in mind their toxicological impact on biota when present in mixtures. In such cases, law and policy makers need reliable sources of information to present and suggest realistic and achievable goals in directives aimed at minimizing the impact of versatile pollutant cocktails on humans. | 2022-05-22T15:06:01.733Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "522caa6c2abb96ce85efac099873e18822f722e0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/10/3260/pdf?version=1652957671",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7bf4318a7b3ca2bd91507c10b44c75f0adb47e8",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234531518 | pes2o/s2orc | v3-fos-license | SPECIES-SPECIFIC EQUATIONS: GREATER PRECISION IN COMMERCIAL VOLUME ESTIMATION IN MANAGED FORESTS IN THE AMAZON
The objective of this study was to analyze the performance of species-specific equations (SSEs) relative to generic ones fitted for Annual Production Units (GEAPUs) and for a Forest Management Area (GEFMA) in the Brazilian Amazon. A total of 29,119 trees from 43 species were inventoried, harvested, and measured volumetrically in ten APUs, with 10% of this total being set aside for validation and comparison of the selected equations. After selection and validation, the equations (GEFMA, GEAPUs and SSEs) were compared using precision statistics, by contrasting estimated and observed volumes, and by residual analysis. The precision statistics were clearly lower (i.e., better) for the SSEs. Trend lines near the average observed volume were obtained for the SSEs when the estimates were contrasted with the observations. The residuals generated by the SSEs were smaller and statistically different from those of the GEFMA and GEAPUs in the majority of cases. The most important commercial species (M. huberi) had its volume overestimated by 10.6, 9.3 and 3.0% when the GEFMA, the GEAPUs, and the SSEs were applied, respectively. Among the species that generally have very large trees, H. petraeum had its volume underestimated by 15.7, 16.6 and 4.4% by the GEFMA, GEAPUs and SSEs, respectively. The greater precision of the SSEs is reflected in better forest management planning decisions with respect to operational and economic aspects. These results show that, besides being statistically valid, the SSEs are recommended for obtaining more precise estimates of commercial volume, especially since there is a great demand for reliable estimates for each individual species in forest management areas in the Amazon. 1University of the Midwest of Parana, Irati, Parana State, Brazil, ORCID: 0000-0002-6388-7679, 0000-0001-9965-7851, 0000-0003-4025-9562, 0000-0002-1685-7864. 2University of Western Para, Santarem, Para, Brazil, ORCID: 0000-0002-3629-3437
INTRODUCTION
Forest management requires careful planning, and the estimation of commercial volumetric production is crucial to this process (Ribeiro et al., 2014; Tonini and Borges, 2015). In the Brazilian Amazon, the need for reliable estimates is gaining importance because the estimated volumetric stock is one of the principal pieces of information required by the public agencies responsible for forest management in order to issue the annual Logging Authorization (AUTEX) to forest management companies operating in native forests (Brasil, 2009).
Estimates are primarily made using volume equations, and these can be generic or specific. A generic volumetric equation is constructed using data collected from various tree species, while a species-specific equation refers to an allometric equation developed from observations collected from a single tree species (Kora et al., 2018).
In the Brazilian Amazon generic equations developed based on an entire Forest Management Area (FMA) are commonly used, such as those developed by Rolim et al. (2006), Colpini et al. (2009), Thaines et al. (2010), Barreto et al. (2014), Silva and Santana (2014), Gimenez et al. (2015) and Tonini and Borges (2015). In the Tapajos National Forest (TNF), initiatives promoted by a forest management company have enabled the use of generic equations for Annual Production Units (APUs), and these equations are therefore restricted to a specific area. Although these equations have been compared to a generic equation used in the TNF (Gomes et al., 2018), there is still a need for comparisons with equations that are specific for species.
Gains in precision have been observed from the use of equations developed for smaller areas, possibly as a function of a reduction in environmental variation (Mauya et al., 2014;Vibrans et al., 2015;Kachamba and Eid, 2016;Gomes et al., 2018). However, in natural tropical forests, the great heterogeneity in species' composition and structure, even in small areas, represents an important challenge for the development of volumetric equations (Akindele and LeMay, 2006;Soares et al., 2011). According to the authors cited in the previous sentence, data stratification by species represents one of the principal alternatives to obtain more precise volume estimates.
It is important to consider that, according to the Logging Authorization (AUTEX), the maximum volume authorized for harvesting is specific to each species (Brasil, 2009). Variation in these volumes depends on, among other factors, the estimated production presented by the forest management company to the environmental regulation agency. In this way, precise estimates must be produced for each individual commercial species. The more accurate the estimates for each species, the smaller the discrepancy between the authorized volume and the harvested volume.
Although some studies have developed equations specific for some Amazonian commercial species (Lima et al., 2014; Ribeiro et al., 2014; Cysneiros et al., 2017; Santos et al., 2019), for the majority of species that are currently managed, specific equations have not been tested. Furthermore, in this region species-specific equations have rarely been compared to generic ones, as has been done in other regions and continents (Guendehou et al., 2012; Vibrans et al., 2015; Goussanou et al., 2016; Kora et al., 2018). Therefore, besides developing species-specific equations that are appropriate for Amazonian species, it is necessary to evaluate their performance in relation to generic equations.
In this context, the objective of this study was to analyze the performance of species-specific commercial volume equations in relation to generic ones in a FMA and by APUs in a managed forest in the TNF in the eastern Brazilian Amazon. The hypothesis tested was that species-specific equations are more precise, and therefore more appropriate for use in the Amazon region, than generic ones.
Study area
This study was conducted in the Tapajos National Forest (TNF), which is a federal Conservation Unit (CU) located in the western region of the State of Pará, along the Santarem-Cuiabá (BR-163) highway, and is part of the municipalities of Belterra, Aveiro, Placas and Ruropolis, between geographic coordinates 2°45′ to 4°10′ S and 54°45′ to 55°30′ W (Figure 1). The CU occupies an area of approximately 544,927 ha, of which about 32,000 ha are reserved for a community forest management concession (Forest Management Area - FMA). The vegetation in the CU is classified as Ombrophilous Dense Forest and is characterized by the dominance of large individual trees, palms and epiphytes, with a uniform canopy or with emergent trees (Gonçalves and Santos, 2008).
Data collection
The data used in this study were from 29,119 trees (50.0 cm ≤ DBH ≤ 175.0 cm) of 43 commercial species from 10 APUs (03 to 12) managed from 2008 to 2017 in the FMA of the TNF (Figure 1). The area of the APUs varied from 521 to 1,723 ha, totaling approximately 11,136 managed hectares. The data were collected through 100% forest inventories (census of all trees with DBH ≥ 50 cm) and rigorous volumetric measurements of logs.
In the 100% forest inventories, besides species identification by common regional name, the diameter at 1.3 m above the soil surface (DBH) and a visually estimated commercial height (h c ) were obtained for commercial trees. Volume was obtained through rigorous volumetric measurements using the Smalian method. Initially, the volumes of individual logs were taken so that their sum, discounting the volume of hollows (when present), composed the commercial volume (v c ) of the stem.
Besides DBH, h c (visually estimated during the inventory) was used for the modeling of v c , although h c was also measured through the sum of the logs during the rigorous volumetric measurements. This procedure was justified by the fact that the sum of the logs was significantly different from the h c visually estimated during the inventory (Gomes et al., 2018). Since the h c estimated in the inventory is the measurement used as input for the selected volumetric equation, this measurement was also chosen for the adjustment of the volumetric models.
Data organization
The dataset was separated into three categories to obtain three different types of equations, which differed in their scope: (1) for the FMA, independent of the APUs and species (generic equation for the FMA -GEFMA); (2) by APU, independent of the species (generic equations for the APUs -GEAPUs); and (3) by species (species-specific equations -SSEs). To obtain the GEFMA, all trees were used, which therefore involved all the APUs and all species. To obtain the GEAPUs and SSEs, the data were stratified by APU and by species, respectively.
About 10% of the sample trees were previously selected to compose the validation dataset, and for this reason, these trees were not included in equation adjustment. This selection was done randomly, but proportional to the number of trees in each diameter class (DBH). The selection was done for each species, and for validation of the generic equations the data were compiled into their respective categories (FMA and APUs).
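A minimal sketch of this kind of split is given below: it draws roughly 10% of the trees at random, proportionally to the number of trees in each DBH class. The 10 cm class width, the tuple layout and the example data are assumptions made only for the illustration; in practice the draw would be repeated within each species, as described above.

```python
import random
from collections import defaultdict

# Sketch of the ~10% validation split, drawn randomly but proportionally to
# the number of trees in each DBH class (assumed 10 cm classes).
# 'trees' is a hypothetical list of (tree_id, species, dbh_cm) tuples.

def split_validation(trees, fraction=0.10, class_width=10.0, seed=42):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for tree in trees:
        dbh_class = int(tree[2] // class_width)   # e.g. 50-59.9 cm -> class 5
        by_class[dbh_class].append(tree)

    validation = []
    for dbh_class, members in by_class.items():
        n_val = round(len(members) * fraction)    # proportional allocation
        validation.extend(rng.sample(members, n_val))

    validation_ids = {t[0] for t in validation}
    adjustment = [t for t in trees if t[0] not in validation_ids]
    return adjustment, validation

# Hypothetical usage with fabricated trees (id, species, DBH in cm):
trees = [(i, "M. huberi", 50 + (i % 80)) for i in range(1000)]
fit_set, val_set = split_validation(trees)
print(len(fit_set), len(val_set))
```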
The list of species and the number of trees in each dataset (adjustment and validation) are shown in Table 1.
The data distribution for the adjustment and validation, using the relationship between v c and DBH, are presented in Figure 2 for the 36 species with n> 30, and in Figure 3 for the group "Others" (species with n< 30).
Selection and validation of the equations
Among the volumetric models commonly used for tropical forests (Guendehou et al., 2012; Vibrans et al., 2015; Cysneiros et al., 2017; Gimenez et al., 2017; Tsega et al., 2018), four were tested for equation selection in the three categories (GEFMA, GEAPUs and SSEs): two single-input and two double-input models, namely Model 1 (Kopezky & Gehrhardt), $v_c = b_0 + b_1\,\mathrm{DBH}^2 + \varepsilon_i$; Model 2 (Husch), $\ln v_c = b_0 + b_1 \ln \mathrm{DBH} + \varepsilon_i$; Model 3 (Spurr), $v_c = b_0 + b_1\,\mathrm{DBH}^2 h_c + \varepsilon_i$; and Model 4 (Schumacher & Hall), $\ln v_c = b_0 + b_1 \ln \mathrm{DBH} + b_2 \ln h_c + \varepsilon_i$, where: v c = commercial volume, in m³; DBH = diameter at breast height (measured at 1.30 m above the soil), in cm; h c = commercial height, in m; b 0 , b 1 and b 2 = regression coefficients to be estimated; ln = natural (Napierian) logarithm; and ε i = random error.

FIGURE 2 Data distribution for the adjustment and validation for the 36 species with n > 30.

For the later comparison of the selected equations, the precision of the estimates in relation to the observed volumes was measured using the following percentage statistics (Campos and Leite, 2017): the Mean Squared Root Error, $\mathrm{MSRE\%} = \frac{100}{\bar{y}}\sqrt{\frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{n}}$ (Equation 5); the Bias, $\mathrm{B\%} = \frac{100}{\bar{y}}\,\frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)}{n}$ (Equation 6); and the Average Absolute Difference, $\mathrm{AAD\%} = \frac{100}{\bar{y}}\,\frac{\sum_{i=1}^{n}\lvert y_i - \hat{y}_i \rvert}{n}$ (Equation 7). The nearer these values are to zero, the better the performance of the equation, where: y i = observed commercial volume of the i-th tree, in m³; ŷ i = estimated commercial volume of the i-th tree, in m³; ȳ = average observed commercial volume of the sample trees, in m³; n = number of observations.
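A minimal sketch of these three percentage statistics, written directly from the definitions above, is shown below; the observed and estimated volumes are fabricated numbers used only to exercise the functions.

```python
import math

# Sketch of the percentage precision statistics described above (Equations 5-7).
# y_obs and y_est are hypothetical observed and estimated commercial volumes (m3).

def precision_statistics(y_obs, y_est):
    n = len(y_obs)
    y_bar = sum(y_obs) / n
    residuals = [o - e for o, e in zip(y_obs, y_est)]
    msre = 100.0 / y_bar * math.sqrt(sum(r ** 2 for r in residuals) / n)  # MSRE%
    bias = 100.0 / y_bar * (sum(residuals) / n)                            # B%
    aad = 100.0 / y_bar * (sum(abs(r) for r in residuals) / n)             # AAD%
    return msre, bias, aad

y_obs = [3.2, 5.1, 7.8, 4.4, 6.0]
y_est = [3.5, 4.8, 7.1, 4.9, 6.3]
print(precision_statistics(y_obs, y_est))
```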
FIGURE 3 Data distribution for the adjustment and validation for the group "Others" (set of 7 species with n < 30).
The coefficients of the volumetric models were estimated by the least squares method (LSM), using the 'lm' function of the software R version 3.6.0 (R Core Team, 2019). The significance of the coefficients was evaluated using a t-test (α = 0.05) in the regression.
For evaluation of the adjustments and selection of the best equations, the following were used: the adjusted coefficient of determination (R² aj ), which expresses the proportion of the total variation that is explained by the regression (Campos and Leite, 2017); the standard error of the estimate in absolute terms (S yx , m³) and in percentage (S yx %), which indicates the quality of the adjustment and how much the model errs on average when estimating the dependent variable (Machado et al., 2008); and the graphic dispersion of percent residuals (Res.% = ((v observed − v estimated )/v observed ) × 100), which was used to reveal possible biases in the estimates (Campos and Leite, 2017). The R² aj as well as the S yx were recalculated in arithmetic units in the case of logarithmic models.
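As a rough sketch of this fitting and evaluation step, the code below fits a single-input logarithmic model of the Husch type by ordinary least squares and reports R² aj and S yx recalculated on the arithmetic (m³) scale, as described above; the DBH and volume values are fabricated, and the log-bias correction sometimes applied on back-transformation is deliberately omitted.

```python
import numpy as np

# Sketch of fitting a single-input logarithmic model, ln(vc) = b0 + b1 ln(DBH),
# by least squares and recomputing R2_adj and Syx on the arithmetic scale.
# DBH (cm) and vc (m3) below are fabricated for illustration only.

dbh = np.array([55, 62, 70, 85, 98, 110, 125, 140])
vc = np.array([2.1, 2.9, 3.8, 5.6, 7.9, 9.5, 12.4, 15.8])

X = np.column_stack([np.ones_like(dbh, dtype=float), np.log(dbh)])
coef, *_ = np.linalg.lstsq(X, np.log(vc), rcond=None)   # b0, b1

vc_hat = np.exp(X @ coef)          # back-transform to the volume scale
n, p = len(vc), X.shape[1]
residuals = vc - vc_hat

ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((vc - vc.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)            # adjusted R2
syx = np.sqrt(ss_res / (n - p))                          # standard error (m3)
syx_pct = 100.0 * syx / vc.mean()                        # Syx%

print(f"b0={coef[0]:.3f}, b1={coef[1]:.3f}, R2adj={r2_adj:.3f}, Syx%={syx_pct:.1f}")
```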
The four models were tested separately for the FMA, for each of the APUs, and for each of the 36 species, as well as for the group "Others". After the evaluation of the adjustments, 48 equations were selected and compared.
Before comparing the selected equations in the three categories they were submitted to a process of statistical validation. In this process, after its use in the estimation of volumes in the validation dataset, a paired t-test (α = 0.05) was used to compare estimated volumes with their respective observed values. When logarithmic equations were used the estimates were done using the original volume scale. The null hypothesis tested was that the estimated and observed volumes were statistically equal.
Comparison of the selected equations
To compare the selected equations in the three categories (GEFMA, GEAPUs and SSEs), they were applied to the validation dataset separately for each species, and the precision statistics described above were calculated. Furthermore, direct comparisons of the equations were done for (1) the entire validation dataset, independent of APU and species; (2) the ten species of greatest commercial importance (largest volumes harvested) (A. lecointei, Couratari sp., H. impetiginosus, H. courbaril, H. parvifolia, L. lurida, M. huberi, M. itauba, P. bilocularis and V. maxima); and (3) five species that commonly have high errors in estimates, especially due to the large variability of the data (C. odorata, C. cateniformis, H. petraeum, P. suaveolens and Terminalia sp.).
For the direct comparisons, the volumes estimated by the selected GEFMA, GEAPUs and SSEs were contrasted with the observed volumes. A trend line for the estimates was generated using the linear relationship between the observed and estimated volumes. Additionally, the residuals of the estimates were graphically analyzed and submitted to Analysis of Variance (α = 0.05). Subsequently, the SNK test was applied (Student-Newman-Keuls) for comparison of means. All analyses were performed using R software (R Core Team, 2019).
Selection and validation of the equations
Since a large number of equations was generated, and the principal objective of this study is to compare the best equations, only the equations selected in each of the three categories are shown in Table 2. The complete list of tested equations, with their respective adjustment and precision statistics, is found in the supplementary information (Appendix A). Graphical distributions of the residuals of the selected equations are also presented in a supplementary document (Appendix B).
All the selected equations have just DBH as an independent variable (Table 2). The adjustment and precision statistics, as well as the graphical analysis of the percent residuals, indicate similarity in the performance of these equations relative to the equations that included both DBH and h c . This is possibly because commercial height carried non-sampling error in its measurement, due to the difficulty of accurately measuring this variable, which compromises the precision of the volumetric estimates.
The generic equations (GEFMA and GEAPUs) had values of R 2 aj varying between 0.64 and 0.76, and S yx (%) varying between 29.2 and 33.8%. In general, there were improvements in the statistics for the SSEs, except for a few species. Among the species-specific equations, the equation for the group "Others" had the highest error, and this reflects the large variability of the data grouped for seven species. Without considering the equation from the "Others" group, the R 2 aj varied from 0.30 to 0.79 between species, while the S yx (%) varied between 18.1 and 36.4% (Table 2). The estimated coefficients of all the selected equations were significant (p ≤ 0.05), according to the t-test from the regression.
All the selected equations were submitted to the validation test. The paired t-test (α ≤ 0.05), showed that the equation selected for the FMA was not statistically valid (p=0.0019) for the estimates of species' commercial volumes. With respect to the generic equations by APU, only for APUs 03 and 04 were the observed and estimated volumes not statistically equal (p ≤ 0.05). In contrast, all the species-specific equations were valid for the volumetric estimates (p ≥ 0.05), and therefore considered adequate for use. Despite this result in relation to the generic equations, the comparison of the selected equations in the three different categories of data was conducted. Table 3 shows the statistics for the analysis of estimate precision calculated by species, and consequently serves to compare the performance of selected equations in the three different categories.
Comparison of the selected equations
The species-specific equations (SSEs) show much lower values of MSRE (%), B (%) and AAD (%). This can also be seen in a more summarized manner by observing the weighted averages of the statistics at the end of Table 3. Furthermore, B (%) had large variation in the tendency to under-and overestimate values between species. The values of B (%) varied between -101.9 and 26.4%; -84.2 to 21.2% and -13.7 and 9.9% for GEFMA, GEAPUs and SSEs, respectively. From this perspective it can be concluded that, besides being statistically valid, the species-specific equations can generate more precise estimates when they are used for trees that were not part of the adjustment procedure, which is the principal objective of volumetric modeling.
In the direct comparison of the equations through contrast of the estimates in relation to the values observed for the entire validation dataset the trend lines were near the average (Figure 4 -Left). This occurred due to compensation by over-and underestimation since the number of trees was large. Although the three types of equations presented a trend to overestimate volumes (Figure 4 -Center), the ANOVA showed a significant difference between the residuals (Figure 4 -Right). Consequently, the means test revealed that the residuals generated by the SSEs were, on average, smaller and different than those generated by the other equations (GEFMA and GEAPUs). In the comparison of the equations for the ten most important commercial species, the contrast of the estimated and observed values showed that the SSEs generated more precise estimates, with trend lines nearer to the mean for most species (Figure 5 -Left). The analysis of the distribution of the residuals revealed gains in estimate precision by the SSEs for most species (Figure 5 -Center), although these were slight.
The superiority of the SSEs becomes evident when comparing the residual means. The ANOVA showed a significant difference (p < 0.05) for all ten species except H. impetiginosus and H. parvifolia (Figure 5 - Right). The means comparison test showed that the residuals generated from the estimates made by the SSEs were, on average, different and nearer to zero than the residuals generated by the GEFMA and GEAPUs. Consequently, the trends of over- and underestimation were reduced when the SSEs were used. The species M. huberi had the largest volume harvested in the ten APUs (approximately 24% of the total volume), and had its total volume in the validation dataset overestimated by 10.6, 9.3 and 3.0% when the GEFMA, the GEAPUs and the SSE were used, respectively. Couratari sp., in contrast, with the second largest volume harvested, had its volume underestimated by 10.4 and 10.1% when the GEFMA and the GEAPUs were used, respectively; however, when the SSE was used for this species the volume was underestimated by just 0.8%. As shown in Table 2, even the best equations developed for C. odorata, C. cateniformis, H. petraeum, P. suaveolens and Terminalia sp. had high estimate errors (S yx > 30%). This is a function of the large variability in the data for these species, especially for large trees, which complicates model adjustment. In the comparison of the equations for these five species there are relevant gains in precision when the SSEs are used.
When contrasting the generated estimates with the observed values, the trend lines for the estimates made by the SSEs were nearer to the mean (Figure 6 - Left). The distribution of the residuals showed a reduction in the tendencies to under- and overestimate volumes using the SSEs (Figure 6 - Center). The ANOVA indicated a significant difference between the residuals generated by the three types of equations, except for the species C. cateniformis (Figure 6 - Right). The means comparison test showed that the SSEs were different from the generic equations (GEFMA and GEAPUs), with relevant gains in precision (Figure 6 - Right). Despite the non-significant ANOVA for the residuals of C. cateniformis, the SSE showed greater precision, both in the comparison of estimated volumes and in the evaluation of the residuals.
H. petraeum, one of the commercial species with the largest trees (Mean DBH = 109 cm), commonly has its volume underestimated. However, when the SSE is used for this species the underestimation was just 4.4%, while for the GEFMA and the GEAPUs the underestimation was 15.7 and 16.6%, respectively. In a similar manner C. cateniformis (Mean DBH= 105 cm) had its volume underestimated by 9.4 and 10.7% by the GEFMA and GEAPUs, respectively, while the SSE overestimated the volume of this species by 0.3%.
DISCUSSION
For the selection of the three types of equations (GEFMA, GEAPUs and SSEs), the single-input equations were chosen since these had similar performance to the double-input equations. Although not very common, single input generic equations have been tested and recommended for commercially managed species in diverse regions of the Amazon (Barros and Silva Júnior, 2009;Thaines et al., 2010;Barreto et al., 2014;Tonini and Borges, 2015;Gimenez et al., 2015;Gimenez et al., 2017). Results from these studies have shown that the use of single input generic equations reduced inventory time and cost, and one of the principal advantages is that intrinsic non-sampling error of the visual estimates of h c is eliminated.
In tropical forest ecosystems it is often difficult to measure commercial tree height with precision due to the presence of various strata and a closed canopy. Models that use only DBH as an explanatory variable are therefore useful in this case and have shown good results (Segura and Kanninen, 2005;Goussanou et al., 2016;Gimenez et al., 2017;Kora et al., 2018). When it is possible to rigorously measure commercial tree height in forest inventories, the use of double input equations should be prioritized.
It should be emphasized that the large errors in the estimates, principally for the selected generic equations, are due in large part to heterogeneity in the dendrometric variables of the species. These results were also reported by other studies conducted in the Amazon (Hiramatsu, 2008; Cysneiros, 2016). As in the studies by these authors, the high S yx and low R² aj are normally linked to the use of a large number of sample trees.
The SSEs for certain species, such as C. odorata, C. cateniformis, Couratari sp., H. petraeum, O. costulata, P. suaveolens, Terminalia sp., V. maxima, and the "Others" group, also had high errors (S yx > 30%), similar to those for the generic equations. This is probably due to the fact that these species have large structural variability, a common characteristic of large trees, for which the largest errors in volumetric modeling of tropical species occur (Brandeis et al., 2006).
The validation of the best equations indicated that, although the generic equations had satisfactory performance with respect to the adjustment and precision statistics and residual evaluation, they would be inadequate for use on a new dataset because they can produce biased estimates. The species-specific equations, however, have the advantage of being validated by the t-test, indicating that their estimated volumes were statistically equal to the observed volumes.
Comparing the generic and species-specific equations through precision statistics revealed relevant variation in the generic equations in the tendencies for under-and overestimation between species, indicated by the values of B (%). This is a characteristic of generic equations when they are used to estimate the volume of individual species (Akindele and Lemay, 2006). This could be problematic for forest management planning since each species has specific restrictions, such as authorized harvest volume for each species. Furthermore, when species estimates are imprecise, the prediction for total production of a forest is incorrect.
The direct comparisons through contrasts of estimated and observed volumes for the entire dataset indicated that the errors of under- and overestimation tended to compensate each other due to the large quantity of data. This suggests that the generic equations are as efficient as the specific ones at making volume estimates for a dataset without stratification by APU or by species. However, as previously stated, there is a need to generate species-based estimates for forest management in the Amazon, which increases the importance of using species-specific equations. Furthermore, the evaluation of the residuals through ANOVA and the means comparison indicated lower error from the SSEs.
In this study, the precision statistics also demonstrated a gradual reduction in the estimated errors due to the stratification of data by APU and, principally, by species. The greater precision of the SSEs compared to the GEFMA and the GEAPUs, indicates that the results of individual evaluation using precision statistics are confirmed by direct comparison of equations. Species of greater commercial importance and those with large errors in their volume estimates were measured with greater precision by the SSEs, which could possibly occur for the remainder of the species in this study.
Species that have a high market demand, and that consequently have greater harvest volumes, need precise equations, since systematic errors in estimates for these species represent under- or overestimation of a large quantity of cubic meters of commercial volume. Furthermore, for species that commonly have very large-sized individuals (average DBH > 100 cm), volumetric modeling is particularly challenging (Brandeis et al., 2006; Cysneiros, 2016) and generic equations normally tend to under- or overestimate volumes. However, the results of the current study show that important gains in precision can also be achieved for these species by using SSEs. In forest management areas, this is particularly important because exceptionally large trees represent a large portion of the total commercial volume.
The results confirm the importance of reducing the variation in the data to obtain more efficient equations, as emphasized by Finger (2006). The principal advantage of stratification of the data by species was an increase in the correlation between v c and DBH, which undoubtedly contributed to a better adjustment of the models to the data. In certain cases, dataset stratification by species may be unviable from the point of view of sample representativeness. However, this was not a problem in this study since, after stratification, most species remained well represented, with sample trees from across the species' diameter range.
Despite the greater ease of obtaining and using generic equations, they should be used in commercial tropical forests with caution. In the Amazon, where their use is predominant, an equation based on a form factor of 0.7, recommended by Heinsdijk and Bastos (1963), and generic equations adjusted using data from the TNF have for many years been applied in a generalized manner across a diversity of sites.
During recent years, equations have been adjusted for specific management areas in the Amazon (Rolim et al., 2006; Barros and Silva Júnior, 2009; Thaines et al., 2010; Barreto et al., 2014; Silva and Santana, 2014; Gimenez et al., 2015). However, these equations are generally applied in a generalized manner to all species present in all production units that are managed annually. Furthermore, in the majority of cases, these equations were developed using a small number of trees, measured in specific places within the management areas, which reduces their representativeness.
In the FMA of the TNF, Gomes et al. (2018) reported that an equation adjusted specifically for an APU was more precise than a general equation for the TNF and an equation based on an average form factor. Similar results were found by other studies when equations were developed for smaller areas (Mauya et al., 2014; Vibrans et al., 2015; Kachamba and Eid, 2016). These results indicate that there is a gain in precision when data are stratified by area, possibly due to a reduction in edaphoclimatic variation. Even so, the heterogeneity between species can still be a limiting factor for the generation of adequate equations, and should therefore be taken into consideration. Kora et al. (2018) compared species-specific volume equations with a generic equation in Benin, in west Africa, and found greater precision for the specific equations. The authors reported that the generic equation had difficulty in estimating volume with precision, even though it was developed using data from the same forest ecosystem (the same edaphoclimatic conditions). Similar results were reported by Guendehou et al. (2012) and Goussanou et al. (2016) in the same region.
In the Brazilian Amazon, Cysneiros et al. (2017) tested generic equations for 32 commercial species, and species-specific equations for 12 principal species. Although direct comparisons of the estimates made by the selected equations were not done, the authors found better performance of species-specific equations through adjustment and precision statistics.
In the state of Amazonas, Krainovic et al. (2017) compared an equation specific for Aniba rosaeodora Ducke with a general equation based on the average form factor, which is commonly used in the Brazilian Amazon. These authors found that the general equation overestimated observed volumes by 32.8%, while the specific equation overestimated volume by just 0.15%. The use of a general equation with a single form factor for all situations could explain the elevated error generated by this equation.
Various factors can explain the difficulty of generic models in providing precise estimates, such as biophysiological properties of species and edaphoclimatic conditions (Goussanou et al., 2016). The inter-species variation in form factors of tropical tree stems (Larson, 1963;Silva et al., 1994) can make the generation of efficient generic equations difficult. Therefore, | 2020-12-24T09:13:04.063Z | 2020-11-17T00:00:00.000 | {
"year": 2020,
"sha1": "928b77dc0f294cde770a4a3acd5a56d047a1d136",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/cerne/v26n3/2317-6342-cerne-26-03-315.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "475d825ae9a4d961c6a9fd4bfef2be1025ad1f6c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
247199906 | pes2o/s2orc | v3-fos-license | Melatonin Ameliorates Testes against Forced Treadmill Exercise Training on Spermatogenesis in Rats
Introduction: It is well documented that some forced exercise regimens can have adverse effects on the genital system. Melatonin is a potent antioxidant that is effective in reducing physical stress. Aim: The aim of this study was to evaluate the protective effect of melatonin on the quality of spermatogenesis, including count, motility, morphology, viability, and apoptosis of sperm, following forced treadmill exercise. Materials and methods: A total of 40 adult male Sprague-Dawley rats were used in this experimental study. All rats were divided into five groups: control group, sham M group, melatonin (M) group, forced treadmill exercise group (Ft), and melatonin with forced treadmill exercise (MFt) group. The exercised groups performed one hour of forced treadmill exercise daily, five days weekly, for eight weeks. The sperm quality parameters were then measured after dissection and removal of the epididymis. Spermatogenesis and germ cell apoptosis were evaluated using Miller's and Johnsen's scores and TUNEL staining, respectively. Results: The count, motility, morphology, and viability of sperm in the forced treadmill exercise group administered melatonin were significantly improved by melatonin treatment compared to the treadmill exercise group (p≤0.01). The number of apoptotic germ cells also decreased significantly in the treadmill-exercised, melatonin-administered group compared to the treadmill-exercised group. Conclusions: These results suggest that administration of melatonin can protect the testis against the detrimental effect of forced treadmill exercise in adult male rats.
INTRODUCTION
Infertility is one of the major health and social problems of all human societies in the present age and is a major cause of concern for couples who have begun living together. Exposure to external factors before and after pregnancy, and during the early postnatal stages, can endanger reproductive ability and the health of offspring. 1,2 According to sport studies, researchers attribute some of these reproductive disorders to physical activities that affect the male genital system, such as forced swimming exercise. [1][2][3] Some of the factors affecting male fertility are related to professional sport. Exercise is likely to decrease testosterone levels. 4 This change can impair the function of sperm cells, which can ultimately decrease sperm quality and lead to sterility.

The oxygen consumed by cells is converted to free radicals or reactive oxygen species (ROS) by single-electron reduction in the mitochondria; as exercise increases oxygen consumption ten to twenty times, it promotes ROS production in cells and tissues through the microsomal electron transport system during metabolism. 5 ROS are also generated in cells and tissues during drug metabolism. Consequently, "exercise exerts a severe oxidative stress on cells and tissues". 6,7 Normally, ROS are effectively neutralized by the cell's antioxidant defense system; otherwise, ROS cause lipid peroxidation, increased membrane permeability, and DNA fragmentation in spermatogenic cells, ultimately leading to their apoptosis. Exogenous antioxidants need to be supplied from outside the cell to support the cellular antioxidant defense and prevent cellular damage. 7,8 Antioxidants are molecules that can prevent or slow the oxidation of other molecules. Oxidation is a chemical reaction that transfers electrons from a substance to an oxidizing agent, and the oxidation reaction is capable of producing free radicals. 7,[9][10][11]

Melatonin is a hormone that is made in the body by the pineal gland and a number of other organs, such as the retina, lacrimal glands, intestines, and bone marrow or enterochromaffin cells of the gastrointestinal tract. 4,12 Melatonin is also a potent, readily available antioxidant; it crosses the cell membrane and is active in both the water and fat phases. 2 Melatonin also reduces testicular injury in rats exposed to nandrolone, and the presence of melatonin receptors in the testis has been demonstrated. 13 Given that the levels of testicular antioxidant enzymes are lower than in other tissues, such as the liver and kidney, and that intense exercise likely exerts a great deal of oxidative stress on the testis 2,12 , and considering the high level of specialized exercise among male athletes, for example at the Olympic or professional level, and its wide range of adverse effects, especially sterility 4,12,14 , it seems important to employ a strategy to reduce these complications. Thus, melatonin was used in this design as a potent antioxidant to reduce the physical stress of exercise, thereby balancing the testicular antioxidant system, reducing apoptosis, and improving sperm quality. 4,[15][16][17][18]
AIM
Given that no attention has been paid to the molecular mechanisms underlying the effect of melatonin on testicular tissue affected by running exercise, the main purpose of this study was to identify the mechanisms involved in the effect of melatonin on apoptosis during severe exercise, such as forced treadmill exercise, so that protective strategies may be developed in the future.
Animals and exercise procedure
In this experimental study, the rats were randomly divided into five groups of 8 animals each. All animal care was performed according to the guidelines of the Mazandaran University of Medical Sciences (Sari, Iran).
Group 1 (control group): no injection or exercise protocol. Group 2 (sham M): rats received the solvent of melatonin (1% ethanol) as a vehicle (ip). Group 3 (M): rats received 10 mg/kg of melatonin weekly for eight weeks (ip). Group 4 (Ft): rats performed one hour of forced treadmill exercise per day, five days a week, for eight weeks. Group 5 (MFt): rats received 10 mg/kg/week of melatonin and performed one hour of forced treadmill exercise per day, five days a week, for eight weeks.
Sample collection
Testes, seminal vesicle, prostate, and epididymal weights were measured, along with epididymal sperm count, morphology, viability, and motility. Testes were placed overnight in 10% buffered formaldehyde (37% formaldehyde, Merck, Darmstadt, Germany).
Sperm analysis
The epididymis was minced with scissors in a petri dish containing 5 ml of Ham's F10 medium and incubated at 37°C for 15 minutes to allow the spermatozoa to exit from the tissue. To analyze sperm motility, 10 µl of sperm suspension was placed on a slide and then covered with a coverslip. The percentage of motile sperm was calculated by examining 10 microscopic fields at 400× magnification. 19,20 To determine sperm viability, 10 µl of sperm suspension was blended with an equal volume of eosin-nigrosin dye. The percentages of live sperm (colourless or light pink) and dead sperm (red or dark pink) were calculated by counting 200 sperm on each slide under a light microscope at 1000× magnification. The sperm count was determined by mixing 50 µl of sperm suspension with 200 µl of distilled water. 10 µl of this diluted suspension was transferred to each of the counting chambers of a Neubauer haemocytometer and left to stand for 5 minutes for cell sedimentation. After these 5 minutes, the sperm count was determined with a light microscope at 400× magnification. 3,21 Sperm morphology was evaluated by using eosin Y staining. One drop of sperm suspension was mixed with an equal amount of 1% eosin Y dye. After 30 minutes, smears were prepared, allowed to air-dry, mounted, and then covered with coverslips. Two hundred sperm cells were inspected on each slide to investigate morphological abnormalities at 1000× magnification. 22,23 An unusual structure or morphology of the head or tail of a spermatozoon was considered abnormal sperm morphology.
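The arithmetic behind the haemocytometer count can be illustrated as follows; the 1:5 dilution matches the text (50 µl of suspension in 200 µl of water), while the conversion factor of 10⁴ per large square is the standard value for a Neubauer chamber and is stated here as an assumption rather than a detail taken from this study.

```python
# Sketch of converting a Neubauer haemocytometer count to a sperm concentration.
# Assumes the standard chamber geometry (each large square holds 0.1 uL, i.e. a
# factor of 1e4 per mL); the 1:5 dilution follows the text (50 uL suspension
# plus 200 uL distilled water).

def sperm_concentration_per_ml(cells_counted, squares_counted, dilution_factor=5.0):
    mean_per_square = cells_counted / squares_counted
    return mean_per_square * dilution_factor * 1e4   # cells per mL of the original suspension

# Hypothetical example: 180 sperm counted over 5 large squares.
print(f"{sperm_concentration_per_ml(180, 5):,.0f} sperm/mL")
```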
Evaluation of spermatogenesis
In the current study, Johnsen's and Miller's scores were used to evaluate spermatogenesis. Spermatogenesis was ranked by calculating Johnsen's score (from 1 to 10) and by measuring the number of germinal cell layers in the testes. Ten seminiferous tubules were considered for counting germinal epithelial layers according to Miller's score. The scores of spermatogenesis quality in the seminiferous tubules were assigned according to the maturity of the germ cells. 21
Histopathological analysis
After fixation, testes were embedded in paraffin wax and then 5-μm thick sections were obtained by using a rotary microtome. The prepared slides were stained with haematoxylin and eosin and then observed by a light microscope.
Germ cell apoptosis by TUNEL assay
Germ cell apoptosis was evaluated by TUNEL staining according to the TUNEL [terminal deoxynucleotidyl transferase (TdT) enzyme-mediated dUTP nick end labeling] assay kit protocol. 5-μm thick paraffin-embedded sections were deparaffinised and then rehydrated in graded alcohol. Sections were incubated in blocking solution (3% H 2 O 2 ) for 10 minutes to neutralize endogenous peroxidases. Sections were then washed with PBS and incubated with TdT for 60 minutes at 37°C. After washing the slides with PBS, they were incubated with anti-digoxigenin peroxidase antibodies. DAB substrate was applied for 10 minutes to stain positive apoptotic cells brown. At least 10 seminiferous tubules were selected in each section for counting apoptotic cells under a light microscope.
Johnsen's scores
Evaluation of Johnsen's scores in histopathological samples indicated that forced treadmill running exercise led to a significant decrease of spermatogenesis quality in the Ft group (8.1±0.83) as compared to that in the control (9.9±0.52) and sham (9.8±0.31) groups (p<0.01). Melatonin treatment in forced exercise rats (group MFt) increased the Johnsen's scores (9.0±0.67) as compared to the Ft group (8.1±0.83), but this change was not significant (Table 1, Fig. 3).
DISCUSSION
In the current study, we assessed the protective effect of melatonin treatment against the harmful effects of forced treadmill running exercise in adult rats. Eight weeks of forced exercise led to an increase in apoptotic germ cells in the testis tissue, and to a decrease in the sperm quality parameters, Johnsen's scores, body weight changes, and seminal vesicle weight. We found that melatonin treatment can protect against the harmful effects of forced treadmill running exercise by decreasing apoptotic germ cells, increasing the sperm quality parameters (sperm viability and sperm progressive motility), and preventing body weight reduction.
There is a general consensus that intensive exercise stress can lead to testis tissue dysfunction and decrease spermatogenesis quality. 4 Previous studies have reported that intensive exercise increases germ cell apoptosis in the testes. 13,22,24,25 Oxidative stress in testicular tissue is the main cause of infertility following intensive exercise. Due to the enhanced oxygen consumption during exercise, reactive oxygen species (ROS) are excessively generated and affect normal testicular structure and function. 7,11,24 The testis is susceptible to oxidative stress because of its high level of cell division and its high content of unsaturated fatty acids. 26 Also, during extensive exercise, the blood supply of the testis decreases, testosterone secretion subsequently declines, and hypoxia in the testis leads to apoptosis of germ cells. 27 Several studies in both animals and humans have demonstrated that intensive exercise leads to a reduction in sperm quality parameters and reproductive hormone production, and to an increase in oxidative stress and lipid peroxidation. 7,8,11,18,28,29 Intensive exercise induces oxidative stress in testis tissue by decreasing the antioxidative enzymes (superoxide dismutase and glutathione peroxidase) and increasing the lipid peroxidation marker malondialdehyde. 30 Swimming exercise is one of the best studied animal models of intensive forced exercise. Previously, Moayeri et al. reported that melatonin treatment could hamper the detrimental effects of forced swimming exercise on oxidative stress in testis tissue and on spermatogenesis. 4 Similar to our study, their study shows that forced swimming exercise leads to a reduction in sperm quality parameters and antioxidative enzymes and to an increase in germ cell apoptosis. 4 They reported that melatonin treatment significantly reduced apoptosis of germ cells and increased progressive motility and antioxidative enzymes as compared to non-treated animals.
Beneficial effects of melatonin treatment as an antioxidant agent have been reported in different oxidative stress-related disorders. In accordance with previous studies, we found that forced treadmill exercise decreased prostate and seminal vesicle weights. 4,22 Intensive exercise can change energy metabolism and reduce the secretion of some reproductive hormones, such as testosterone, which decreases the weight of the testes and the accessory sex organs. 31,32 Similar to our findings, a previous study reported that melatonin could increase accessory sex organ weight. 4 The significant reduction in spermatogenesis and sperm parameter quality after intensive exercise may be due to a reduction in testosterone production and an increase in oxidative stress.
Based on the literature data, overproduction of ROS causes cell damage in testicular tissue. 24 Melatonin has been shown to have protective effects against the detrimental effect of ROS overproduction during intensive exercise. 4 Melatonin treatment has shown beneficial effects after ischemia/reperfusion injury in animal models of testicular torsion/detorsion. 33 It has previously been reported that melatonin ameliorates testicular torsion/detorsion injuries and increases spermatogenesis quality, as reflected by an increased Johnsen's score and serum inhibin B. It is of note that melatonin can stimulate antioxidative enzymes, which boosts its antioxidative properties and protects against DNA damage. 12 Melatonin can also protect testis tissue by stimulating testosterone production and through its angiogenic properties. 33 Several studies consistent with ours have demonstrated the protective role of melatonin against germ cell apoptosis. [34][35][36] For instance, one study reported that melatonin treatment decreased the number of apoptotic cells in the testes of Busulfan-treated mice, confirming the protective role of melatonin against chemotherapeutic agents and the enhancement of fertility after cytotoxic therapy. 34 In another study, melatonin showed protective effects against the testicular damage caused by cisplatin, a chemotherapeutic agent. 37 Cisplatin administration led to a significant reduction in the weight of the testes and accessory sex glands; however, melatonin treatment ameliorated these adverse changes. 8,17,37,38 Also, melatonin could counteract the adverse effects of cisplatin on epididymal sperm count, motility, and morphology. Take et al. reported that melatonin could protect testis tissue against the degenerative changes and cell death caused by ionizing irradiation exposure. 25,39 These effects may be due to the free radical scavenging, antioxidant, and anti-apoptotic (caspase-3 inhibition) properties of melatonin.
CONCLUSIONS
These results suggest that administration of melatonin can protect the testis against the detrimental effect of forced treadmill exercise in adult male rats. Forced treadmill running exercise results in testicular oxidative stress and induces testicular injuries. Taken together, melatonin can protect testis tissue and sperm quality parameters. | 2022-03-03T16:21:31.513Z | 2022-02-28T00:00:00.000 | {
"year": 2022,
"sha1": "0f22d6b3459499ecec04658e55786b73cb4c1440",
"oa_license": "CCBY",
"oa_url": "https://foliamedica.bg/article/57544/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "13906247ef39d68ddc59b0c60e1e972624869bdf",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
22427759 | pes2o/s2orc | v3-fos-license | dbQSNP: A Database of SNPs in Human Promoter Regions With Allele Frequency Information Determined by Single-Strand Conformation Polymorphism-Based Methods
We present a database, dbQSNP (http://qsnp.gen.kyushu-u.ac.jp/), that provides sequence and allele frequency information for single-nucleotide polymorphisms (SNPs) located in the promoter regions of human genes, which were defined by the 5′ ends of full-length cDNA clones. We searched for the SNPs in these regions by sequencing or single-strand conformation polymorphism (SSCP) analysis. The allele frequencies of the identified SNPs in two ethnic groups were quantified by SSCP analyses of pooled DNA samples. The accuracy of our estimation is supported by strong correlations between the frequencies in our data and those in other databases for the same ethnic groups. The frequencies vary considerably between the two ethnic groups studied, suggesting the need for population-based collections and allele frequency determination of SNPs, in, e.g., association studies of diseases. We show profiles of SNP densities that are characteristic of transcription start site regions. A fraction of the SNPs revealed a significantly different allele frequency between the groups, suggesting differential selection of the genes involved. Hum Mutat 26(2), 69-77, 2005. © 2005 Wiley-Liss, Inc.
INTRODUCTION
SNPs in transcriptional promoter regions are candidates for cis-acting determinants of the inherited variation in gene expression found among individuals [Pastinen and Hudson, 2004]. A catalog of these SNPs should be useful for studying the regulatory mechanisms of genes with medical importance, such as those that convey disease resistance or affect drug metabolism.
We have developed a database, termed dbQSNP, that accumulates and presents experimental data regarding the sequences and allele frequencies of SNPs in the promoter regions (Fig. 1). The promoter regions studied here were 1.0 kb upstream and 0.2 kb downstream of the transcriptional start regions (TSS), which were defined experimentally as the 5′ ends of full-length cDNAs and were available from the database of transcription start sites (DBTSS: http://dbtss.hgc.jp) [Suzuki et al., 2004].
We conducted experimental SNP discovery and quantification by performing both sequencing and SSCP analyses [Inazuka et al., 1997; Kukita et al., 2002b; Baba et al., 2003], which were coordinated by a laboratory information management system, dbQSNP. In this system, a precise estimation of SNP allele frequency is achieved using a pooled DNA strategy, which has advantages in terms of cost and labor efficiency compared to the counting of alleles after individual genotyping [Sasaki et al., 2001; Baba et al., 2003].
This database is useful as a resource for selecting informative SNPs to study genes that are responsible for various diseases, many of which may be caused by mutations in the regulatory sequences. Some characteristics of the regions with respect to SNP density and spectrum of base changes have been observed. Comparisons of allele frequencies in this database and those in the public databases revealed that frequencies can differ widely between ethnic groups, possibly as a result of differential selection, as in the case of the aldehyde dehydrogenase 2 (ALDH2; MIM# 100650) (Oota et al., 2004).
MATERIALS AND METHODS
Target Regions and Primer Design
The sites to be examined were collected in several batches in the following manner: The 5′ sequences of the cDNA clones used in this study were mapped onto the human reference genome and then clustered according to the reference sequences of transcripts (NM-prefixed RefSeq; www.ncbi.nlm.nih.gov/RefSeq/), as previously described [Suzuki et al., 2004]. RefSeqs with more than 10 summed clones were chosen as those of frequently transcribed genes, and the 5′ positions of the clones were mapped onto the genome sequence. The position with the highest clone number within each NM cluster was defined as the candidate for the major TSS of the RefSeq. The most upstream candidate was defined as the TSS in this study when more than two candidate positions were found. Most of the SNPs that are presently in the dbQSNP public database are in the neighborhood of the frequent 5′ ends of full-length cDNAs collected in DBTSS (version 3.0, 23 May 2003 [Suzuki et al., 2004]), remapped onto the Human Reference Sequence (Release 34; ftp://ftp.ncbi.nih.gov/genomes/H_sapiens/).
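The TSS-selection rule described above can be sketched as follows; clone positions are treated as plus-strand coordinates and the tie-breaking reading (most upstream among equally supported candidates) is our interpretation, so the snippet is only a schematic rendering of the procedure.

```python
from collections import Counter

# Sketch of the TSS-selection rule described above: for each RefSeq cluster,
# take the 5'-end position supported by the largest number of clones; if
# several positions share the top count, use the most upstream one.
# Plus-strand coordinates are assumed for simplicity; real data would also
# require strand handling.

def select_tss(clone_5prime_positions):
    counts = Counter(clone_5prime_positions)
    top = max(counts.values())
    candidates = [pos for pos, c in counts.items() if c == top]
    return min(candidates)   # most upstream on the plus strand

# Hypothetical cluster of clone 5' ends for one RefSeq:
positions = [1_002_340, 1_002_340, 1_002_355, 1_002_340, 1_002_310, 1_002_355, 1_002_355]
print(select_tss(positions))
```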
The target sequences (1.0 kb upstream and 0.2 kb downstream of the TSS) were filtered using RepeatMasker (http://repeatmasker.org/), and repetitive elements were removed. The sequences were then used as input data for Primer3 [Rozen and Skaletsky, 2000] to define the PCR primers and STS. All primers for the PCR (purchased from SIGMA Genosys, Japan; www.genosys.jp) carried extra nucleotides of either 5′-ATT or 5′-GTT for the purpose of post-labeling with fluorescent nucleotides by the exchange reaction, as described previously [Kukita et al., 2002a].
DNA
The DNA used for the SNP searches was extracted from the B-lymphoblast cell lines of anonymous Japanese individuals (the samples were kindly supplied by Takafumi Ishida, Tokyo University). Locally collected DNA served as pools of 100-426 Japanese individuals. Either CEPH parents (78 individuals, www.cephb.fr/cephdb/) or DNA panels from Caucasians (HD100CAU/HD200CAU, CORIELL Cell Repositories, http://locus.umdnj.edu/nigms/) served as a source of the Caucasian DNA pool.
The pools were constructed as follows: The initial concentration of each DNA sample dissolved in 10 mM Tris-HCl (pH 7.5) and 1 mM EDTA (1× TE buffer) was determined using a microplate spectrofluorometer (SPECTRAmax GEMINI XS; Molecular Devices Corp.; www.moleculardevices.com), after staining with PicoGreen dsDNA quantification reagent (Molecular Probes; http://probes.invitrogen.com). The concentration was adjusted to 20 ng/mL by two-step dilution with various volumes of 0.1× TE using a robotic system (Genesis RSP150, TECAN AG; www.tecan.com), and the concentrations of all samples were confirmed to be within 10% of the expected value. The pools were then made by combining equal volumes of each sample, and the concentration was finally adjusted to 16.6 ng/mL.
Sequencing Analysis
The PCR products were sequenced using the ABI BigDye Terminator Cycle Sequencing Chemistry and ABI PRISM 3700 or Applied Biosystems 3730 Sequence Detection Systems (Applied Biosystems; www.appliedbiosystems.com). The sequencing data were interpreted with the use of Phred/Phrap/PolyPhred [Ewing et al., 1998; Nickerson et al., 1997] and PolyBayes [Marth et al., 1999], followed by visual inspection to determine the consensus sequence and identify SNP nucleotides. We developed a Java applet, termed "A-view", to visualize Phrap alignments along with STS sequences (aligned using CLUSTAL W [Thompson et al., 1994]) and to edit the consensus if necessary, instead of using the conventional Consed viewer/editor. Insertion/deletion polymorphisms were characterized by a new algorithm, Ins/del Interpreter, which is based on the idea that the sequence trace of a heterozygote should be the summation of signals from equal amounts of each allele. Thus, subtraction of half the amplitude of the homozygote trace signal from the heterozygote trace signal produced the trace of the residual allele, which was then analyzed by Phred/Phrap.
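The idea behind the Ins/del Interpreter can be illustrated with a toy signal-subtraction example: if a heterozygote trace is modeled as the sum of equal amounts of the two allele traces, subtracting half of the known homozygote trace leaves the signal of the residual allele, which is then base-called. The arrays below are fabricated amplitudes for a single dye channel, not real chromatogram data, and real traces would require alignment and normalization first.

```python
import numpy as np

# Toy illustration of the trace-subtraction idea behind the Ins/del Interpreter:
# a heterozygote trace is modeled as the sum of equal amounts of two allele
# traces, so subtracting half the homozygote signal leaves the residual allele.

allele_a = np.array([0.0, 1.0, 0.1, 0.9, 0.1, 1.0, 0.0])   # known homozygote A
allele_b = np.array([0.0, 1.0, 0.1, 0.1, 0.9, 0.2, 1.0])   # unknown second allele

heterozygote = 0.5 * allele_a + 0.5 * allele_b              # observed het trace

residual = heterozygote - 0.5 * allele_a                    # half-amplitude trace of allele B
recovered_b = 2.0 * residual                                # rescaled for comparison

print(np.allclose(recovered_b, allele_b))                   # True in this toy case
```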
SSCP Analysis
Post-PCR labeling of DNA fragments and analysis using automated capillary electrophoresis under SSCP conditions (PLACE-SSCP analysis) was performed as previously described [Baba et al., 2003], using the ABI PRISM 3100 system. The SSCP data obtained from the capillary electrophoresis machines were processed by the QUISCA software [Higasa et al., 2002] to judge the presence or absence of polymorphisms, and to assign alleles if present. Estimations of the allele frequency of polymorphic STS with well-separated allele peaks were done automatically using the peak heights of pools and heterozygotes. Peaks of heterozygotes were used to correct PCR bias by a previously described algorithm [Sasaki et al., 2001]. When separation of alleles was insufficient, fused peaks were resolved and interpreted semiautomatically by a module, Allele Resolver. With this module, trace data of homozygotes were subtracted from the traces of the heterozygotes to obtain the trace data of the remaining alleles of presumably equal concentrations. The traces of reconstituted pools were then generated from mixtures of the traces of the two alleles at various ratios. The allele frequency of the pool was estimated from the mixing ratio that minimized the difference between the traces of the pool and those of the reconstituted pool.

FIGURE 1. Overview of the dbQSNP system. The dbQSNP system consists of two main servers (dbQSNP Conductor and dbQSNP Public) to which client PCs were connected through a local area network. The major modules of dbQSNP Conductor are Project Manager and Process Manager. Project Manager makes projects that are defined as sets of STS, and the DNA templates for the PCR. Process Manager creates Run files, which serve as protocols to carry out PCR, SSCP analysis (post-labeling reactions), or sequencing reactions. It also creates files to be downloaded to client PCs that operate capillary electrophoresis machines (ABI3100, ABI3700, and ABI 3730) or robots (TECAN Genesis). Process Manager also handles trace data of SSCPs or sequencing, which are uploaded from the capillary machines to the server (see Materials and Methods). The data interpreted in dbQSNP Conductor are reviewed by a curator using Transfer Tool. Satisfactory STS are given the status "registered" and added to the list for update, and records are created to be posted in dbQSNP Public. The registered STS in the list for update are subjected to a BLAST search against the existing dbQSNP Public, dbSNP, and the Reference Human Sequence every day to obtain annotations (e.g., local synonyms, synonymous RefSNPs in dbSNP (dbSNP i.d.), chromosomal location, and neighboring genes). Records of registered STS in dbQSNP Public are replaced by dbQSNP Prepublic on the occasion of version renewal.
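A much-simplified sketch of the quantitative step is given below: the two allele peak heights of a heterozygote, in which the alleles are present 1:1, give a correction factor for unequal amplification or detection, and the corrected pool peak heights then yield the allele frequency. The peak heights are hypothetical, and the published algorithm [Sasaki et al., 2001], as well as the Allele Resolver reconstitution procedure, involves additional steps not shown here.

```python
# Simplified sketch of estimating an allele frequency from pooled-DNA SSCP
# peak heights, using a heterozygote to correct for PCR/detection bias.
# Peak heights are hypothetical; the published algorithm involves further
# normalization and, for fused peaks, the trace-reconstitution step.

def allele_frequency_from_pool(pool_peak_a, pool_peak_b, het_peak_a, het_peak_b):
    # In a heterozygote the two alleles are present 1:1, so the observed
    # ratio het_peak_a / het_peak_b estimates the amplification bias k.
    k = het_peak_a / het_peak_b
    corrected_a = pool_peak_a / k          # rescale allele A onto the allele-B scale
    return corrected_a / (corrected_a + pool_peak_b)

# Hypothetical peak heights (arbitrary fluorescence units):
freq_a = allele_frequency_from_pool(pool_peak_a=5200, pool_peak_b=3100,
                                    het_peak_a=1300, het_peak_b=1000)
print(f"Estimated frequency of allele A in the pool: {freq_a:.2f}")
```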
Laboratory Information Management System
All procedures were controlled by a laboratory information management system, dbQSNP, that consists of two servers: dbQSNP Conductor and dbQSNP Public (Fig. 1). dbQSNP Conductor manages the whole process of experiment and data analysis. dbQSNP Public publicizes the results as records that contain SNP information, with various annotations.
These servers were situated on UNIX machines, and PostgreSQL was used as the relational database management system. Client PCs within a local network communicated with these central machines through Web servers (Apache). The system was implemented in Perl and C, while a Java applet was used for some interactive operations. Data transfers between dbQSNP Conductor and dbQSNP Public, and between dbQSNP Conductor and peripheral instruments (i.e., capillary electrophoresis machines and a dispensing robot), were done by file transfer protocol (FTP).
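As a sketch of the file-based data exchange described above, the snippet below uploads a trace file from a client machine to a server over FTP. The host name, credentials, and file names are placeholders, and the real dbQSNP transfer scripts (written in Perl/C) are not reproduced here; this only illustrates the kind of FTP transfer the text refers to.

from ftplib import FTP

def upload_trace(local_path, remote_name,
                 host="conductor.example.local", user="uploader", password="secret"):
    # Push a raw trace file from a client PC to the Conductor server,
    # mirroring the FTP-based transfers described in the text.
    # All connection details here are placeholders.
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(local_path, "rb") as fh:
            ftp.storbinary(f"STOR {remote_name}", fh)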
dbQSNP Public Web Interface
The dbQSNP record for STS is a primary record and can be viewed on the chromosome list or by a Blast or keyword search. The dbQSNP record for SNP contains additional information for analyzing each SNP (e.g., the primer sequence for micro-STS, if present) and is linked to STS records. In the records of dbQSNP Public, the sequence changes are unequivocally described by the chromosomal position on the latest version of the Reference Human Genome Sequence, as determined by Blast searching of sequences flanking SNPs (>180 bp) against the database (ftp://ftp.ncbi.nih.gov/genomes/H_sapiens/) at the update of dbQSNP Public. Multiple hits to the genome sequence are indicated when found. The adoption of HGVS-approved nomenclature [den Dunnen and Paalman, 2003] is being considered. The SNP data in this study have been submitted to dbSNP (www.ncbi.nlm.nih.gov/SNP) under the handle name KYUGEN.
Strategies for Discovering and Quantifying the SNPs
We first adopted a strategy for finding SNPs (termed ''reg series'') in which the primary screening was done using SSCP analysis. The 1.2 kb target regions without repeat elements were covered by four overlapping sequence-tagged sites (STS; average fragment size=350 bp), and eight individuals were examined by PLACE-SSCP analysis. STS that revealed the same SSCP pattern for all examined individuals were regarded as monomorphic. Polymorphic STS were reamplified from the homozygotes, heterozygotes, and pooled samples using the same primers as in the initial amplification, followed by sequencing to identify SNP nucleotides, and for the second SSCP analysis. The allele frequencies of the SNPs were determined by quantitative SSCP analysis [Sasaki et al., 2001]. Consistencies between the results of SSCP and sequencing were confirmed, and nucleotide sequence polymorphisms were assigned to the SSCP alleles.
With this strategy, approximately 88% of the examined STS (350-400 bp) were interpretable by SSCP. The remaining STS were accounted for by PCR failure or by a complicated SSCP pattern (e.g., multiple peaks per allele). These were probably caused by the extremely high GC content (>75%) or by embedded short tandem repeats in the STS. No further attempts were made to rescue these missed STS. Approximately two-thirds of the analyzable STS were judged to be monomorphic, although some SNPs (5-10%) may have escaped detection in this step because of limitations in the sensitivity of SSCP [Baba et al., 2003]. Approximately 38% of the SNPs described in dbQSNP (version 10, 10 August 2004) were collected using this strategy.
Allele frequency could be calculated for over 70% of the STS that had single SNPs (data not shown). For STS containing multiple SNPs, the possible number of haplotypes exceeded the number of peaks separated by SSCP, and the allele frequencies of SNPs could not be deduced. About 10% of the SNPs detected in the reg series were in this category.
As an alternative strategy (the ''regm series''), we discovered the SNPs by sequencing overlapping repeat-free segments in eight individuals. We then designed smaller amplicons, termed micro-STS (length=60-100 bp), to bracket the discovered SNPs. Homozygotes, heterozygotes (identified by sequencing), and pooled DNA served as templates for the second PCR and SSCP analysis. With this strategy, most of the micro-STS harbored only one SNP.
With the second strategy, primary screening by sequencing was interpretable for 94% of the examined STS. The rest were accounted for mainly by PCR failures, including low-fidelity amplification. Micro-STS were designed for more than 90% of the discovered SNPs. Some SNPs were not targeted by micro-STS, because they were located too close to the ends of resequenced regions. Failure to design micro-STS was also the case when SNPs were within extremely GC-rich regions, and primers with appropriate melting point (Tm) values (lower than 73°C) could not be designed.
With both strategies, some of the discovered SNPs could not be quantified, for the following reasons: 1) The peak-height ratio between the alleles was skewed (1:4 or more), or variable in two or more heterozygotes. This may have been caused by excessive PCR bias, the presence of polymorphic paralogues [Fredman et al., 2004], or large-scale copy-number variations [Iafrate et al., 2004]. The presence of SNPs in primer sequences also may have caused variable heterozygote patterns, although we tried to design primers in SNP-free regions. 2) Peaks that were not found in individuals were detected in the pool(s), i.e., there were additional uncharacterized SNPs in the pooled samples. 3) The STS carried multiple SNPs, and the allele frequencies of SNPs could not be deduced. 4) Separation of the allele peaks was insufficient for resolution and quantification.
Overall, allele frequencies were obtained for 58% of the SNPs that were detected by SSCP in the first strategy using STS of approximately 350 bp. In the second strategy, in which SSCP analysis was applied to smaller amplicons, allele frequencies were obtained for 67% of the SNPs.
Nucleotide Diversity and Positional Effects
We estimated nucleotide sequence variation around the TSS using resequenced data from eight individuals (i.e., those obtained by the regm series). We remapped the examined regions on the Human Reference Sequence (Build 34), and a total of 1.4 Mb of regions were finally regarded as sequences flanking the TSS (1,842 genes). The estimate of per-site nucleotide heterozygosity averaged over the entire examined sequences, as estimated from a pairwise comparison of all observed alleles (π) [Hartl and Clark, 1997] for these regions, was 6.4 × 10^-4. This value is lower than those previously estimated by others (7.1 × 10^-4 [Reich et al., 2003] and 7.5 × 10^-4 [Sachidanandam et al., 2001]), probably because the heterogeneity of the subject populations differed (i.e., Japanese in this study vs. two or more ethnic groups (including Africans) in the previous studies). Other factors that may have contributed to the reduced diversity include functional constraint around the transcription start sites, or complete removal of the repetitive elements from our targets, which are known to have approximately twofold higher diversity compared to single-copy regions [Orita et al., 1990].
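For reference, the per-site heterozygosity underlying this estimate can be written out explicitly. The expression below is the standard unbiased pairwise estimator for a site with n sampled chromosomes carrying alleles at frequencies p_i (here n = 16 for eight diploid individuals), averaged over all L examined sites (about 1.4 Mb); whether the study applied exactly this small-sample correction is an assumption made for illustration.

\hat{\pi}_{\mathrm{site}} = \frac{n}{n-1}\Bigl(1 - \sum_i p_i^2\Bigr), \qquad \pi = \frac{1}{L}\sum_{\mathrm{sites}} \hat{\pi}_{\mathrm{site}}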
The ratio of transition-to-transversion for the SNPs in the TSS regions was 1.56. This was significantly lower than previously reported values, such as the 1.91 determined in a large-scale, genome-wide study [Zhao and Boerwinkle, 2002]. The reduced ratio in our study may be explained by the nature of the target regions that are enriched with CpG islands, where CpG dinucleotides are abundant but are undermethylated [Majewski and Ott, 2002]. The CpG dinucleotides are generally recognized to be hot spots of transitional mutations (CpG to TpG or CpA) because of cytosine methylation followed by deamination to thymine, and thus are frequent sites of SNPs. However, the dinucleotides in the islands are scarcely methylated, so mutations at these sites are expected to be suppressed and the transition-totransversion ratio is lowered. In accordance with this, we found that only 58% of SNPs involving CpG had TpG/CpA variants in the TSS regions. This is consistent with a previous report [Tomso and Bell, 2003] that the fraction of transition in all CpG variants was 63% in the islands, whereas that in non-islands was 80%.
We next estimated the average per-nucleotide heterozygosity π at each nucleotide position relative to the TSS (Fig. 2A). More than 800 sequences were used for the calculations at each position (−800 to +100) shown in Figure 2. The base composition at each nucleotide position is also shown (Fig. 2C). The sharp changes in the base compositions at or near the TSS support the validity of our definition of TSS: that is, A at the TSS, C at the position 5′ adjacent to the TSS (consensus sequence of the initiator), and an AT-rich nature around position −30 (the TATA box position). The π value fluctuated considerably; nevertheless, a gradual increase of π in the upstream region toward the TSS was evident, as shown by the value in the moving average of 100 nucleotides. This increase correlated with local GC content, consistent with previous observations (i.e., a correlation of nucleotide diversity with GC content [Sachidanandam et al., 2001]). Other features observable in the figure (π in the moving average of 20 nucleotides) are small but distinct drops of π at position −30, where the TATA boxes are located, most likely because any mutations would be deleterious at this site.
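The positional profiles described here are sliding-window smoothings of the per-position π values. A minimal sketch (with window sizes of 100 and 20 nucleotides, as in the figure) is shown below; the variable names are illustrative and not taken from the study's own analysis scripts.

import numpy as np

def moving_average(pi_by_position, window=100):
    # pi_by_position: per-nucleotide heterozygosity indexed by position
    # relative to the TSS. A centered moving average smooths the noisy
    # per-site values so positional trends (e.g., the rise toward the TSS
    # or the dip near the TATA box at -30) become visible.
    kernel = np.ones(window) / window
    return np.convolve(pi_by_position, kernel, mode="same")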
Depletions of SNPs were also observed starting at around +50 and extending into the downstream regions, indicating the conservative nature of coding regions [Halushka et al., 1999]. To confirm this, we realigned the same set of sequences by assigning the translation initiation site (A of the first methionine annotated in RefSeq, GenBank) as position 1, and then removed the nonexonic sequences (Fig. 2B and D). More than 300 sequences were examined at the positions shown (−90 to +70, relative to each initiation codon). A sharp drop at around position 0 was caused by the presence of the initiation codon, where polymorphic change is obviously prohibitive. Another large drop at around position +50 was also observed, but the reason for this is unknown. Unexpectedly, we also found that the reduced diversity of the coding region did not extend further upstream of around −30 within the first exon (Fig. 2B). This is contrary to previous suggestions that the diversity of the 5′ untranslated region (UTR) is similar to that of nondegenerate sites in protein-coding regions in humans [Hellmann et al., 2003], and consistent with data showing that the overall SNP density is higher in 5′ UTR regions than in coding regions [Salisbury et al., 2003].
Commonality of SNPs in dbQSNP WithThose in Other Databases
More than 10 million nonredundant SNPs are described in the central database (dbSNP), of which approximately half have been validated. Though these figures are large, it is uncertain whether the available SNPs cover most of the SNPs present in a particular genomic region of a given ethnic group. We assessed the commonality and consistency between our data and the public data by Blast-searching the genomic regions targeted here against external SNP databases (dbSNP, Build 123; JSNP, Release 21).
As shown in Table 1, we found 6,473 SNPs and quantified allele frequencies for 4,134 (64%) of them. Newly identified SNPs (not described, or described solely by us in dbSNP as RefSNPs) accounted for 2,245 (35%), and 502 of them (22% of the newly found SNPs) were frequent SNPs (minor allele frequency >10% in Japanese subjects). When the comparison was limited to validated SNPs (validated in all categories defined in dbSNP), 3,668 (57%) of the SNPs we found were already registered in dbSNP as validated SNPs. The fraction of validated SNPs increased to 74% if frequent SNPs were compared. On the other hand, only 34% (2,079/6,188) of the validated SNPs in dbSNP were identified to be frequent SNPs by this study. Considering that about two-thirds of the detected SNPs were quantified in this study, approximately 50% of the validated SNPs in dbSNP may be rare or absent in the Japanese population.
We also examined the commonality of our SNPs with those in JSNP (http://snp.ims.u-tokyo.ac.jp/ [Hirakawa et al., 2002]) and HapMap (www.hapmap.org [International HapMap Consortium, 2003]; Table 1). JSNP is a gene-based SNP database of the Japanese population, and 195,059 SNPs (84,566 with allele frequencies) are registered (Release 21). As is evident from the table, we found 80% of the SNPs determined by JSNP that had high minor allele frequencies and were located in our screened regions, whereas only 17% of our frequent SNPs were covered by JSNP. At least part of the reason for the low coverage of JSNP is that many promoter sites targeted in the present study have not been examined by JSNP, because the information on many transcription start sites (5′ end sequences of mRNAs obtained in the full-length cDNA projects) was not available at the time the SNPs were collected by JSNP. In HapMap Project public data release #14 (December 2004), 956,730 SNPs were genotyped in CEPH individuals, and less than half of them were genotyped in other populations. In the region we screened, only 15% of the SNPs identified by us were assayed by HapMap, reflecting a rather sparse coverage of the project at present. We did not identify 43% of the HapMap target SNPs, probably because of their rarity in the Japanese population, whereas we identified 89% of the SNPs that were found to be frequent (MAF >10% according to HapMap data) by genotyping of Japanese individuals (data not shown).
Allele Frequency Distribution
The distribution of SNPs in bins of allele frequencies is shown in Figure 3. The allele frequency distribution obtained in this study differs from that expected from neutral theory of population genetics (i.e., that SNPs with higher minor allele frequencies are enriched [Kruglyak and Nickerson, 2001]). Obviously, this is because we surveyed only eight individuals to discover the SNPs, and thus preferentially collected the frequent SNPs. The fraction of SNPs having a minor allele frequency greater than 10% (frequent SNPs) in the Japanese pool was 68%, which is greater than that in the Caucasian pool (61%), reflecting the fact that SNPs were ascertained by screening Japanese individuals. The fraction of frequent SNPs in both populations was 54%.
Intra-and Interpopulation Comparison of Allele Frequencies
To assess the reliability of our frequency estimation, we compared it with those in the JSNP database using 569 SNPs that were common to both databases. The subjects sampled in the frequency determinations by us (dbQSNP) and JSNP were independently collected from different regions of Japan. The methods used to determine the frequency also differed. Pooled DNA samples from 100-426 individuals were assayed by SSCP analysis in this study, while alleles of up to 752 individuals were typed using the Invader assay in the JSNP study [Ohnishi et al., 2001]. However, the allele frequencies in the two databases agree well (r 2 =0.90), as shown in Figure 4A. Thus the methods for estimating frequency employed in both studies are highly reliable; moreover, the Japanese populations sampled were clearly well mixed.
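The agreement quoted here is the squared Pearson correlation between the two sets of frequency estimates. A minimal sketch of that comparison is given below; the input arrays are placeholders for the allele frequencies of the SNPs shared by the two databases, matched so that the same allele is counted on both sides.

import numpy as np

def frequency_r2(freqs_a, freqs_b):
    # freqs_a, freqs_b: allele frequencies of the same SNPs estimated by two
    # independent studies (e.g., dbQSNP pooled SSCP vs. JSNP Invader typing),
    # aligned so that element i of each array refers to the same allele of
    # the same SNP.
    r = np.corrcoef(freqs_a, freqs_b)[0, 1]
    return r ** 2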
We then compared our allele frequency data for Caucasian samples with those of the HapMap Project (public data release #14, December 2004). The Caucasians examined for the frequency estimation by HapMap were 60 parents of unrelated trios in CEPH reference families. The allele frequencies of 690 SNPs in the two databases are in moderately good agreement (r² = 0.82), as shown in Figure 4B.
A comparison of allele frequencies of SNPs between Japanese and Caucasians in our data revealed quite a different picture (r² = 0.41), as shown in Figure 4C. More than half of the SNPs (2,254 or 56%) showed significantly different allele frequencies between the two pools (P < 0.01 in a Z-test for two independent proportions). The genetic differentiation of populations can be calculated using F_ST statistics [Wright, 1969]. Several versions of calculating F_ST have been developed since then. We calculated unbiased estimates of F_ST under an analysis-of-variance framework, considering evolutionary processes [Weir, 1996; Akey et al., 2002]. SNPs with minor allele frequencies of less than 10% in both populations were excluded from the calculations, considering the uncertainty of estimating allele frequencies below 0.10 (Fig. 5).
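A sketch of the two statistics used in this comparison is given below. The Z-test for two independent proportions is standard; for F_ST, the snippet uses the simple Wright-style two-population estimator rather than the unbiased analysis-of-variance estimator of Weir [1996] actually applied in the study, so it should be read as an approximation of the idea, not a reproduction of the authors' calculation. Sample sizes n1 and n2 are numbers of chromosomes and are placeholders.

import numpy as np
from scipy.stats import norm

def two_proportion_z(p1, n1, p2, n2):
    # Two-sided Z-test for a difference between two independent allele
    # frequencies p1 and p2, estimated from n1 and n2 chromosomes.
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * norm.sf(abs(z))  # P value

def simple_fst(p1, p2):
    # Wright-style F_ST for one biallelic SNP in two equally weighted
    # populations: (H_T - H_S) / H_T, with H the expected heterozygosity.
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)
    h_s = p1 * (1 - p1) + p2 * (1 - p2)
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t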
The mean value of F_ST was 0.072, which is significantly higher than the following values obtained by intrapopulation comparisons: F_ST = 0.0037 (between Japanese populations assayed by us and by JSNP) and F_ST = 0.0045 (between Caucasian populations assayed by us and by HapMap). While most SNPs (96%) revealed insignificant differentiation (F_ST < 0.3), some of the SNPs (approximately 0.5%) showed an unusually high F_ST (>0.5). These population-biased SNPs (high-F_ST SNPs) were scattered throughout the chromosomes (data not shown). Among these was the SNP (rs886205) in exon 1 of the ALDH2 gene (F_ST = 0.500 in our calculation), which is known to have significantly different allele frequencies between Asians and Caucasians, probably because of natural selection [Oota et al., 2004]. At least part of the high-F_ST SNPs we found here should also be in loci under selection during local adaptation. On the other hand, there were a considerable number of low-F_ST SNPs: approximately 20% of the SNPs showed an F_ST value close to zero (Fig. 5, first column; note that unbiased estimates of F_ST sometimes give negative values). Although the data are presently insufficient for us to draw conclusions, balancing selection may be acting on some of these regulatory SNPs.
DISCUSSION
We developed a laboratory information management system, dbQSNP, to find and quantify SNPs by performing coordinated analyses of capillary-based SSCP studies and sequencing. The system was designed to take advantage of SSCP to accurately quantify SNP alleles in pooled DNA [Sasaki et al., 2001;Kukita et al., 2002b;Baba et al., 2003].
We believe we found the majority of frequent SNPs (those with a minor allele frequency of >10% in the Japanese pool) in the screened regions, since we found 80% of the frequent SNPs described by JSNP (Table 1). On the other hand, we detected only 59% of validated reference SNPs (dbSNP, Build 123) in the regions we screened. Many of the SNPs that were not detected in this study are likely to be those with a low allele frequency in Japanese subjects, because most of the SNPs in dbSNP have been discovered in studies using multi-ethnic DNA sources with an emphasis on Caucasians [Carlson et al., 2003]. Since we obtained our findings by examining eight individuals, the SNPs discovered here were biased toward high minor allele frequencies. The population dependence of allele frequencies indicates that further efforts to find and confirm SNPs for each population may be needed to secure many informative SNPs. This is necessary, for example, for association studies of diseases, in which the subjects are usually drawn from a single population.
Previous studies revealed differences in nucleotide diversity in several classes of sites, such as coding/noncoding and synonymous/nonsynonymous [Halushka et al., 1999]. However, efforts to characterize nucleotide diversity around transcription start sites are still limited [Hellmann et al., 2003; Salisbury et al., 2003] because up to now the definitions of these regions have been uncertain. Our resequencing of the region (defined by the 5′ ends of full-length cDNA clones) revealed peculiar characteristics of SNP distribution. Elevated diversity is maintained through positions −400 to +50 relative to the transcription start sites (except for the positions of the TATA box and initiator), reflecting the high GC content of this region and perhaps some biological functions. Perhaps the first 50 bp of the first exons, mostly 5′-UTR, are allowed to diverge at levels similar to those immediately upstream. The decreased nucleotide diversity at around the +50 position from the first ATG may suggest enrichment of some regulatory elements (such as for splicing) at this position [Majewski and Ott, 2002].
The reliability of our estimate of allele frequency by SSCP analysis of pooled DNA was confirmed by comparison with other projects. In a DNA pooling approach, quantitative errors may be introduced during each experimental stage (i.e., pool construction, amplification, and frequency estimation) [Barratt et al., 2002]. In addition to these measurement errors, variability because of random sampling from heterogeneous populations is also expected, especially when the sample size is small. Nonetheless, the frequency data obtained here are generally in good agreement with other data of high-throughput efforts, such as the JSNP, TSC, or HapMap projects. We also compared our allele frequencies with those obtained by a large-scale allele frequency estimation in gene regions by chipbased mass spectrometry of DNA pooled from 92 unrelated CEPH Caucasians [Nelson et al., 2004]. The allele frequency of 350 SNPs commonly found in the data of Nelson et al. [2004] (downloaded from dbSNP Build 122) and ours were compared. The correlation between the two was significantly lower (r 2= 0.48) than those between the HapMap data and ours (data not shown), indicating that the degree of accuracy was better in our estimation. One possible reason for this is that assay bias between alleles was corrected in our study but not in theirs, perhaps as a trade-off between assay accuracy and high throughput.
As demonstrated here, SSCP analysis of pooled DNA is an accurate and widely accessible method for estimating allele frequencies of many SNPs using commonly available capillarybased DNA sequencers. Pooled DNA analysis is a realistic strategy for the primary screening of many genes in association studies [Sham et al., 2002]. However, more accurate quantification than the large-scale SNP profiling shown here is required in such studies. The experimental design based on many small pools [Barratt et al., 2002] should yield more accurate estimation if high throughput is not the priority.
The allele frequency information obtained in this study will prove useful not only as a resource for selecting informative SNPs for association studies, but also as a resource for population genetics. In contrast to the overall concordance in intrapopulation comparisons, allele frequencies differ widely between different populations (e.g., Japanese and Caucasian). This divergence obviously reflects the different demographic histories of Asians and Caucasians, including the genetic bottlenecks that occurred during or after the ancestral dispersal from Africa. By sampling a large number of SNPs throughout the genome, loci that were affected by natural selection can simply be identified as outliers in the empirical distribution of F ST [Akey et al., 2002]. We found several outlier genes with F ST values larger than 0.5. They were distributed across the whole genome, consistent with previous reports, which suggests that natural selection in the human genome may be widespread [Akey et al., 2002]. Whether these regions were positively selected or came to predominate as a result of genetic drift remains to be investigated. | 2018-04-03T02:13:09.457Z | 2005-08-01T00:00:00.000 | {
"year": 2005,
"sha1": "c698174688ecdef5f06aa69e5bb4282ab8ec29be",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/humu.20196",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "5d04fc78e6c092e34afbadd5175017ec96e514de",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
257517782 | pes2o/s2orc | v3-fos-license | Possible triply heavy tetraquark states
In the present work, the triply heavy tetraquark states $QQ\bar{Q}\bar{q}$ with $Q=(c, b)$ and $q=(u, d, s)$ with all possible quantum numbers are systematically investigated in the framework of the chiral quark model with the resonating group method. Two kinds of structures, including the meson-meson configuration (the color-singlet channels and the hidden-color channels) and the diquark-antidiquark configuration (the color sextet-antisextet and the color triplet-antitriplet), are considered. In the considered systems, several bound states are obtained for the $cc\bar{c}\bar{q^{'}}$, $bb\bar{c}\bar{q^{'}}$ and $bc\bar{c}\bar{q}$ tetraquarks. From the present estimations, we find that the coupled channel effect is of great significance for forming the below-threshold tetraquark states, which are stable against strong decays.
I. INTRODUCTION
Searching for multiquark states has become one of the most important and interesting topics of hadron physics, and the experimental observations and theoretical investigations shall deepen our understanding of the nonperturbative QCD [1][2][3][4][5][6][7].At the early beginning of the quark model, the notion of multiquark states had been proposed [8].But there had been no progress on the experimental side for a long time.A turning point came in the year of 2003, when the Belle Collaboration reported their observation of a new charmonium-like state X(3872) in the exclusive B ± → K ± π + π − J/ψ decays [9].Since then, a growing number of new hadron states have been observed experimentally, which attract the great interest of experimentists and theorists.
Among the new hadron states observed in the recent two decades, there are some good candidates of QCD exotic states, which can be classified into different categories according to different criteria.For example, for the charmonium-like states, we can divide them into two types according to the carried charges, i.e., the neutral and charged categories.One can also classify the new hadron states by their most possible quark components into tetraquark, pentaquark states, etc.It is interesting to notice that almost all the new hadron states have at least one heavy constituent quark or antiquark component.Since the mass of the heavy quarks is much larger than that of the light quarks, one can usually discuss the properties of hadrons with heavy quark components in the heavy quark limit.In this case, the number of heavy constituent quark/antiquark can also be used to classify the new hadron states.According to this criterion, we separate the observed new hadron states into three types, which are states with one, two, and four heavy quark/antiquark components, respectively.In the following, we select some typical examples for each type and present a short review.
The observations of the fully heavy tetraquark states makes tetraquark spectroscopy abundant and systematic.However, one can find that the tetraquark states with three heavy quark/antiquarks, i.e., QQ Q q (q = u, d, s), absent experimentally.The triply heavy tetraquark states are different from the already discovered quarkonium-like states, it might in sense offer a new platform of studying the internal structure of the exotic states.On the theoretical side, in the frame of colormagnetic interactions, the triply heavy tetraquark states were systematically investigated and some exotic tetraquark states were predicted [127].The QCD sum rule estimations indicated that the triply heavy tetraquarks states, ccc q, cc b q and bc b q, with quantum numbers J P = 0 + and J P = 1 + are all heavier than the corresponding meson-meson thresholds, while the bb b q tetraquarks were expected to be stable for strong decay [129].However, the estimations in the extended chromomagnetic model [128], nonrelativistic quark model [126], extended relativized quark model [130] indicated that there was no bound triply-heavy tetraquark state.In a word, the existence of the triply heavy tetraquark states is still an open question.In the present work, we employ a nonrelativistic chiral quark model (ChQM) to estimate the mass spectra of the S -wave triply heavy tetraquark states with the possible J P quantum numbers to be 0 + , 1 + , 2 + , to further check the existence of triply heavy tetraquark states.
The work is organized as follows. In Section II and Section III, the theoretical framework utilized in the present estimations is presented, which includes the chiral quark model and the Resonating Group Method (RGM). Section IV is devoted to the analysis and discussion of the obtained results. In the last section, we give a short summary.
II. THE CHIRAL QUARK MODEL
In the quark model, the Hamiltonian of a hadron is generally written as in Ref. [131], where m_i and p_i are the mass and momentum of the i-th quark, respectively. T_CM is the center-of-mass kinetic energy, which is usually subtracted without losing generality since one mainly focuses on the internal relative motions. V(r_ij) indicates the interaction potential between the i-th and j-th quarks.
As for the ChQM, it is constructed based on the fact that the light current quarks are nearly massless, which leads to the chiral symmetry. However, due to the interactions of the quarks with the gluon medium, the current quarks become dressed, and such dressed current quarks can be approximately described by massive constituent quarks. In practice, the masses of the constituent quarks in the ChQM are determined by reproducing the spectrum of the conventional hadrons, and this model has been widely used to investigate the spectra of mesons containing heavy quarks [132][133][134][135], the electromagnetic, weak and strong decays and reactions of mesons [135][136][137][138][139][140], and the phenomena related to multiquark structures [141][142][143][144][145][146][147]. In addition, in the ChQM, the interaction potential usually includes the Goldstone-boson exchange potentials, the perturbative one-gluon interaction, and a confinement potential. Furthermore, when one only considers the S-wave tetraquark system, the spin-orbit and tensor contributions can be ignored, and the two-body interaction potential then reduces to the terms discussed below. V_OGE(r_ij) indicates the potential resulting from one-gluon exchange; in its concrete form, σ and λ^c denote the Pauli matrices and the SU(3) color matrices, respectively, and α_s^{ij} is the QCD-inspired scale-dependent quark-gluon coupling constant, which offers a consistent description of mesons from the light- to the heavy-quark sectors and can be determined by the mass splittings between different mesons. As for the confinement potential, the harmonic oscillator potential is adopted, where a_c represents the strength of the confinement potential and V^0_ij is the zero-point energy, which can be determined by the mass shifts between different mesons. The Goldstone-boson exchange interactions between light quarks appear because of the dynamical breaking of chiral symmetry. For the QQ Q q systems with (Q = (c, b), q = (u, d, s)), the π, K and η exchange interactions do not contribute due to the quark content. Thus, in this paper, the Goldstone-boson exchange interactions are not considered.
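The explicit forms of these potentials are not reproduced in this extraction. For orientation only, schematic chiral-quark-model expressions consistent with the definitions given above are written out below; the exact regularization (e.g., smearing of the contact term), zero-point conventions, and signs used by the authors may differ, so these should not be read as the paper's own formulas.

H = \sum_{i=1}^{4}\Bigl(m_i + \frac{p_i^2}{2 m_i}\Bigr) - T_{CM} + \sum_{i<j} V(r_{ij}),

V_{OGE}(r_{ij}) = \frac{\alpha_s^{ij}}{4}\, \lambda^c_i \cdot \lambda^c_j \Bigl[\frac{1}{r_{ij}} - \frac{2\pi}{3 m_i m_j}\, \sigma_i \cdot \sigma_j\, \delta(\mathbf{r}_{ij})\Bigr],

V_{CON}(r_{ij}) = \bigl(-a_c\, r_{ij}^2 + V^0_{ij}\bigr)\, \lambda^c_i \cdot \lambda^c_j .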
The concrete values of these parameters are collected in Table I. In addition, the details of how to obtain these parameters can also be found in Ref. [148]. The calculated meson masses, in comparison with the experimental values, are shown in Table II. It should be noticed that the parameters in the potentials are obtained by reproducing the mass spectra of conventional mesons, but the two-body quark-quark interaction potentials can be extended to investigate the multiquark system, where the difference between the color configurations is reflected by the product of the SU(3) color matrices λ^c_i · λ^c_j. In the present work, the triply heavy tetraquark systems are estimated by using the resonating group method [149].

FIG. 1: Two types of configurations in QQ Q q, QQ Q′ q and QQ ′ Q q tetraquarks. For the QQ Q q system, there are two structures: the meson-meson configuration (diagram (a)) and the diquark-antidiquark configuration (diagram (b)). For the QQ Q′ q system, diagrams (c) and (d) correspond to the meson-meson and the diquark-antidiquark configurations, respectively. For the QQ ′ Q q system, diagrams (e) and (f) correspond to the meson-meson configuration, while diagram (g) refers to the diquark-antidiquark configuration.

In this method, the multiquark system can be divided into two clusters, which are frozen inside, so one only needs to consider the relative motion between the two clusters. The conventional ansatz for the two-cluster (cluster A and B) wave function is, where A is the antisymmetry operator of the triply heavy tetraquarks.
For the QQ Q q system, one has,

TABLE III: All the possible channels for different J^P quantum numbers, where [i, j, k] denotes the channels with i, j, and k being the indices of flavor, spin, and color, respectively.
this antisymmetry operator becomes, for QQ Q′ q system, and for QQ ′ Q q system, due to the absence of any homogeneous quarks, then antisymmetry operator becomes a unit operator, which is, Moreover, [σ] = [222] gives the total color symmetry, and I, S , L and J represent flavor, spin, orbital and total angular momenta, respectively.ψ A and ψ B are the two-quark cluster wave functions, which are, where η I , S , and χ represent the flavor, spin, and internal color terms of the cluster wave functions, respectively.According to Fig. 1, we define different Jacobi coordinates for different diagrams.As for the meson-meson configuration in Fig. 1, the Jacobi coordinates are, where the subscript q/ q indicates the quark/antiquark particle, while the number indicates the quark position in Fig. 1.By interchanging r q 1 with r q 3 , one can obtain the Jacobi coordinates in Fig. 1-(f).As for the diquark-antidiquark configuration, one can also obtain the Jacobi coordinates corresponding to the diagrams in Fig. 1 by interchanging r q 3 with r q2 .
From the variational principle, after variation with respect to the relative-motion wave function, one obtains the RGM equation, where H(R, R′) and N(R, R′) are the Hamiltonian and normalization kernels, respectively. The eigenenergy E and the wave functions are obtained by solving this RGM equation. In the present estimation, the function χ(R) can be expanded in Gaussian bases, where C_{i,L} is the expansion coefficient and n is the number of Gaussian bases, which is determined by the stability of the results. S_i is the separation of the two reference centers, and R is the dynamic coordinate defined in Eq. (11). After including the motion of the center of mass, one can rewrite the wave function in Eq. (5), where φ_α(S_i) and φ_β(−S_i) are the single-particle orbital wave functions with different reference centers. With the reformulated ansatz as shown in Eq. (15), the RGM equation becomes an algebraic eigenvalue equation, with N^{L,L′}_{i,j} and H^{L,L′}_{i,j} being the overlaps of the wave functions and the matrix elements of the Hamiltonian, respectively. By solving the generalized eigenvalue problem, we can obtain the energies of the tetraquark systems, E, and the corresponding expansion coefficients C_{j,L}. Finally, the relative-motion wave function between the two clusters can be obtained by substituting the C_{j,L} into Eq. (13).
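The final step described above is an ordinary generalized eigenvalue problem, H C = E N C, in a non-orthogonal Gaussian basis. A minimal numerical sketch is given below; the matrices are placeholders for the Hamiltonian and overlap kernels that the RGM calculation would actually produce.

import numpy as np
from scipy.linalg import eigh

def solve_rgm(H, N):
    # H: Hamiltonian matrix elements between Gaussian basis states.
    # N: overlap (normalization) matrix of the same non-orthogonal basis.
    # eigh solves H C = E N C and returns the eigenenergies in ascending
    # order, so energies[0] corresponds to the lowest-lying eigenenergy
    # of the kind quoted in the tables.
    energies, coeffs = eigh(H, N)
    return energies, coeffs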
Besides the space part, we present the flavor, spin, and color parts of the wave function in Appendix-A.It is worth noting that after applying the antisymmetry operator, some wave functions may vanish, which means that some states are forbidden.For example, for the cc b q system with J P = 0 + , when considering the diquark-antidiquark structure with the spin wave function forced to choose S 1 0 , the color wave function χ c 3 would be excluded due to the constraints that the total wave function must be antisymmetric.
IV. RESULTS AND DISCUSSIONS
In the present calculation, the triply heavy tetraquark systems are evaluated by taking into account the meson-meson and diquark-antidiquark configurations in the ChQM, which have been shown in Fig 1 .To exhaust all possible configurations of the QQ Q q systems, we divide them into three classes, which are, the QQ Q q system including ccc q and bb bq, the QQ Q′ q system including cc b q and bbcq, and the QQ ′ Q q system including cb b q and cbcq.Moreover, in the present work, only the S −wave triply-heavy tetraquark states are evaluated, which indicates that the total orbital angular momenta L is equal to zero.Then, the total angular momentum, J, coincides with the total spin, S , and can take values of 0, 1, and 2, then the possible J P quantum numbers of the tetraquark states could be 0 + , 1 + , and 2 + .All the possible channels would be considered through the symmetry of the wave functions and all the allowed channels are listed in Table III.From Table III, one can find that in the ChQM the color singletsinglet (1 c × 1 c ) and the color octet-octet (8 c × 8 c ) structure have been taken into account for the meson-meson configuration.Moreover, for the diquark-antidiquark configuration, both antitriplet-triplet ( 3c × 3 c ) and sextet-antisextet (6 c × 6c ) color structures have also been considered.
Our estimations of the eigenenergies of the triply tetraquark states are presented in Tables IV-XI.In these tables, all the allowed meson-meson and diquark-antidiquark configurations are listed.In the meson-meson channels, (M 1 M 2 ) 1 and (M 1 M 2 ) 8 indicate the color singlet-singlet (1 c × 1 c ) and the color octet-octet (8 c × 8 c ) structures, respectively.E th is the experimental value of the thresholds for the physical channels.In the present work, the single-channel and channel-coupling calculations are all considered, and E sc , E CC1 , E CC2 and E CC are the estimated values of the eigenenergies of every single channel, the coupled channel for the meson-meson configurations, the coupled channel for the diquark-antidiquark configurations, and the one estimated by simultaneously considering the meson-meson and diquark-antidiquark configurations, respectively.P indicates the percentages of each channel for the lowest-lying eigenenergies E CC .
A. The QQ Q q systems

Our estimations for the ccc q tetraquark system are presented in Table IV. For the case of J^P = 0^+, one can find there are four channels in the meson-meson configuration and two channels in the diquark-antidiquark configuration. For the cccn (n = {u, d}) tetraquark states, the lowest threshold of the physical channels is 4849 MeV, which is the threshold of η_c D. From the table, one can find that the eigenenergies of every single channel in both the meson-meson and diquark-antidiquark configurations are all above the lowest threshold of the allowed physical channel, which indicates that all these tetraquark states can decay into η_c D. When one couples all the channels in a certain configuration, the estimated eigenenergies are 4851 and 5415 MeV for the meson-meson and diquark-antidiquark configurations, respectively, still above the threshold of η_c D. After considering both the meson-meson and diquark-antidiquark configurations simultaneously, we find the eigenenergy of the cccn tetraquark state is about 4851 MeV, which is about 2 MeV above the threshold of η_c D. As for the ccc s tetraquark states with J^P = 0^+, the lowest threshold of the physical channels is 4952 MeV, which is the threshold of η_c D_s^+. The single-channel estimations show that all the tetraquark states are heavier than η_c D_s^+. The eigenenergies of the coupled-channel estimations in the meson-meson and diquark-antidiquark configurations are 4954 MeV and 5487 MeV, respectively, which are all above the threshold of η_c D_s^+. Moreover, the full coupled-channel estimations, i.e., considering the meson-meson and diquark-antidiquark configurations simultaneously, indicate the eigenenergy of the ccc s tetraquark state is 4954 MeV, which shows that in this case the effects of channel coupling are rather weak. It is worth noting that in the single-channel estimation the eigenenergy for the lowest physical meson-meson channel is several hundred MeV below the ones of the other channels; thus, in the coupled-channel estimations, the mixings between different channels are expected to be small due to the large eigenenergy splittings.
As for the cccq tetraquark system with J P = 1 + , there are nine channels in this case, which include three color singlet channels and three hidden color channels in the meson-meson configuration, while there are three channels in the diquarkantidiquark configuration.The lowest physical meson-meson threshold is the one of J/ψD, which is 4962 MeV.In the single channel estimations, no bound state is found.The eigenenergies estimated in the coupled-channel estimations of the meson-meson and diquark-antidiquark configurations are 4964 and 5363 MeV, respectively, which are all above the threshold of J/ψD.By considering both the meson-meson and diquark-antidiquark configurations simultaneously, the eigenenergy of the tetraquark state with J P = 1 + is estimated to be 4963 MeV, and the effect of the channel coupling is rather weak, which is similar to the case of J P = 0 + .As for the ccc s tetraquark system, the lowest physical threshold is the one of J/ψD + s , which is 5065 MeV.Similar to the case of cccn system, the eigenenergies obtained in the single channel are all above the threshold of J/ψD + s .In addition, when we consider the channel coupling in the meson-meson and diquarkantidiquark configurations individually, the eigenenergies of the tetraquark state are estimated to be 5067 and 5442 MeV.After considering the meson-meson and diquark-antidiquark configurations simultaneously, we obtain the eigenenergy of ccc s tetraquark state with J P = 1 + is 5066 MeV, which is still a bit higher than the threshold of J/ψD + s .
For the case of the cccn tetraquark states with J^P = 2^+, there are two channels in the meson-meson configuration and only one channel in the diquark-antidiquark configuration. The physical meson-meson threshold is 5104 MeV. Our single-channel estimations indicate that the eigenenergies are all above the threshold of J/ψD^*, and after considering the channel coupling in the meson-meson configuration, the eigenenergy is estimated to be 5106 MeV, which is still above the threshold of J/ψD^*. When we include the meson-meson and diquark-antidiquark configurations simultaneously, the eigenenergy is estimated to be 5095 MeV, which is about 9 MeV below the threshold of J/ψD^*, so this tetraquark state cannot decay into J/ψD^*. Moreover, our estimations indicate that in this state the dominant component is J/ψD^*, which is about 75%, while the fractions of the hidden-color channel, (J/ψD^*)_8, and the diquark-antidiquark channel, (cc)(cn), are about 11% and 13%, respectively, which indicates that the coupled-channel effect plays an important role in the existence of the below-threshold cccn tetraquark state. Different from the cccn system, our estimations find that there is no below-threshold ccc s tetraquark state. In a very similar way, we can estimate the bb b q tetraquark system, and our results are listed in Table V. Our estimations indicate that there are no below-threshold bb b q tetraquark states. However, within the framework of QCD sum rules, the bb b q tetraquark states with J^P = 0^+ and J^P = 1^+ may be stable due to obtaining masses below the thresholds of η_b B and η_b B^* [129], which is different from our conclusions. It is interesting to notice that for the cccn system we find one below-threshold tetraquark state with J^P = 2^+, while the mass of the corresponding state in the bb bn sector is above the threshold of ΥB^*. To find which interaction plays the dominant role in forming a below-threshold cccn tetraquark state with J^P = 2^+, and to further check the influence of the coupled-channel effect, we list the contribution of each term in the system Hamiltonian in Table VI.

TABLE VI: The average values of each operator in the Hamiltonian of the cccn and bb bn tetraquark systems in units of MeV. E_M(J/ψD*) and E_M(ΥB*) stand for the sums of the theoretical thresholds of the J/ψD^* and ΥB^* channels, where the distance between the two mesons is very large and the interactions between them are ignored.

As we have discussed in the above section, the potential resulting from the Goldstone-boson exchange vanishes due to the quark content of the triply heavy tetraquark system. For the cccn tetraquark system with J^P = 2^+, E_M(J/ψD*) refers to the sum of the theoretical threshold of the J/ψD^* channel, which means that the interactions between J/ψ and D^* are set to zero and the system wave function is the product of those of J/ψ and D^*. In this case, the average value of the kinetic operator is 1800.1 MeV, and the ones of the confinement and OGE terms are −1812.8 MeV and −380.5 MeV, respectively; one can obtain the threshold of J/ψD^* by summing the average values of the different terms and the masses of the constituent quarks. In a similar way, one can obtain the average values of the operators in the single (J/ψD^*)_1 channel estimation, the coupled-channel estimations of the meson-meson configuration (E_CC1) and the diquark-antidiquark configuration (E_CC2), and the coupled-channel estimation of both meson-meson and diquark-antidiquark configurations (E_CC). For simplicity, we define ∆E as the difference of the average values of the
operators between the single-channel or coupled-channel cases and E_M(J/ψD*). If the sum of ∆E over all the operators is negative, the tetraquark state lies below the threshold of J/ψD^*. From the table, one can find that the sum of ∆E for the single channel and for the coupled channels within each configuration is positive, while that for the coupled channels of both configurations is negative, which indicates that the cccn tetraquark state with J^P = 2^+ is a below-threshold state and that the coupled-channel effects between different configurations are essential in forming a below-threshold tetraquark state. This result is mainly due to the strong attraction of the OGE interaction under the coupling of all configurations. As for the bb bn tetraquark state with J^P = 2^+, one can find that all the ∆E values are positive, which indicates the tetraquark state is above the threshold of ΥB^*.
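Restating the criterion just used in compact form (with ⟨X⟩ denoting the expectation value of an operator X in a given single- or coupled-channel calculation, and the constituent-quark masses cancelling between the state and the threshold):

\Delta E_X = \langle X \rangle_{\mathrm{calculation}} - \langle X \rangle_{M(J/\psi D^*)}, \qquad \sum_{X \in \{T,\, V_{CON},\, V_{OGE}\}} \Delta E_X < 0 \;\Leftrightarrow\; E < E_{th}(J/\psi D^*).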
B. The QQ Q′ q system In Table VII, we present our estimations of the eigenenergies of the cc b q system with J P = 0 + , 1 + and 2 + , respectively.For the case of cc bn tetraquark with J P = 0 + , we find there are four meson-meson channels and two diquarkantidiquark channels.The lowest physical threshold of cc bn is the one of B + c D, which is 8140 MeV.The eigenenergies obtained from the single channel, coupled channel in each configuration, and the full coupled channel estimations are all above the threshold of B + c D. From the full coupled channel estimations one can find the dominant component of cc bn tetraquark state with J P = 0 + is B + c D. As for the cc bn tetraquark states with J P = 1 + , there are six meson-meson and three diquark-antidiquark channels, respectively.The lowest physical threshold is the one of B + c D * , which is 8282 MeV.Similar to the case of 0 + , the eigenenergies obtained from the single channel, coupled channel in each configuration, and the full coupled channel estimations are all above the threshold of B + c D * .Similarly, there are two meson-meson and one diquark-antidiquark channels in the cc bn tetraquark system with J P = 2 + , and our estimations also indicate that there is no below threshold cc bn tetraquark state with J P = 2 + .Similarly, we can analyze the cc b s tetraquark system, and we find all the eigenenergies of the cc b s tetraquark are above the lowest thresholds of the corresponding physical channels.
As for the bbcq tetraquark system, the estimated eigenenergies are listed in Table VIII. For the bbcn tetraquark states with J^P = 0^+, we find the lowest threshold of the physical channels is the one of B_c^- B, which is 11554 MeV. The eigenenergies obtained from the single-channel estimations and the coupled-channel estimations in each configuration are above the threshold of B_c^- B. When considering the coupled-channel effects of the meson-meson and diquark-antidiquark configurations simultaneously, we find the eigenenergy of the bbcn tetraquark with J^P = 0^+ is 11552 MeV, which is about 2 MeV below the threshold of B_c^- B. In this tetraquark state, the dominant component is B_c^- B and its percentage is about 94.25%. As for the bbcn tetraquark states with J^P = 1^+, the lowest physical channel is B_c^- B^*, with the threshold being 11579 MeV. We find that the eigenenergies obtained from the single-channel estimations and the coupled-channel estimations in each configuration are all above the threshold of B_c^- B^*, while the full coupled-channel estimation gives an eigenenergy below this threshold. In this tetraquark state, the dominant component has a percentage of about 58.26%, while the (B_c^- B^*)_1 and (B_c^- B^*)_8 channels in the meson-meson configuration and the (bb)(cn) channel with [i, j, k] = [4,4,4] in the diquark-antidiquark configuration are also important, with percentages of 15.71%, 6.16% and 16.57%, respectively. For the J^P = 2^+ case, there is only one physical channel for the bbcn tetraquark state, which is B_c^{*-} B^*, with the threshold being 11625 MeV. Similar to the cases of 0^+ and 1^+, the eigenenergies obtained from the single-channel estimations and the coupled-channel estimations in each configuration are all above the threshold of B_c^{*-} B^*. When we consider both the meson-meson and diquark-antidiquark configurations simultaneously, the eigenenergy of the bbcn tetraquark state with J^P = 2^+ is estimated to be 11613 MeV, which is about 12 MeV below the threshold of B_c^{*-} B^*, and the percentages of the different channels are 68.00%, 9.79% and 22.21% for the (B_c^{*-} B^*)_1 and (B_c^{*-} B^*)_8 channels and the (bb)(cn) channel with [i, j, k] = [4,6,4], respectively. Different from the bbcn tetraquark system, our estimations indicate the eigenenergies of the bbc s tetraquark states with different J^P quantum numbers are all above the lowest thresholds of the corresponding physical channels.

From our estimations, we find there is no below-threshold QQ Q′ s tetraquark state. But for the QQ Q′ n tetraquark system, we find the eigenenergies of all the S-wave ground bbcn tetraquark states with different J^P quantum numbers are below the lowest thresholds of the corresponding physical channels, which is much different from the cc bn case. To further compare the spectra of cc bn and bbcn, we list the average values of each operator in the Hamiltonian of the tetraquark systems in Table IX. It is interesting to notice that in the full coupled-channel estimation all the eigenenergies of the bbcn tetraquark states are below the corresponding lowest physical thresholds, while the eigenenergies of the cc bn states are all above the corresponding lowest physical thresholds. By comparing the average values of the operators in the Hamiltonian of the cc bn and bbcn tetraquark systems, one finds the dominant difference is in the average values of V_OGE, especially in the case of the coupled-channel estimations in the diquark-antidiquark configurations. The average values of V_OGE are negative, which indicates that the OGE potential is attractive. However, for the cc bn tetraquark states with J^P = 0^+ and 2^+, the attractions become weak when we consider the coupled-channel effects in each configuration, and for the J^P = 2^+ case, the attraction becomes stronger in the diquark-antidiquark coupled-channel estimations. For the bbcn tetraquark states, we find that the attractions become much stronger in the diquark-antidiquark coupled-channel estimations, although the attractions caused by the confinement potential become weak and the eigenenergies obtained in the diquark-antidiquark coupled-channel estimations are still above the corresponding lowest physical thresholds. But when we consider the coupled-channel effects in both configurations, the eigenenergies of the bbcn states are below the corresponding lowest thresholds of the physical channels.
C. The QQ ′ Q q system Similar to the cases of QQ Q′ q and QQ ′ Q q tetraquark states, we can estimate the eigenenergies of QQ ′ Q q tetraquark states.Our estimations of the eigenenergies of the cbcq and bc b q tetraquark states are collected in Table X and XI.From Table X, one can find the eigenenergies of bccn tetraquark state with J P = 0 + obtained in the single channel estimations, the coupled channel estimations in each configuration and the full coupled channel estimations are all above the threshold of DB − c , which is 8140 MeV.Similarly, we also find that the eigenenergies of the bcc s tetraquark states with J P = 0 + are all above the threshold of D + s B − c .As for bccn tetraquark state with J P = 1 + , we find that the eigenenergies obtained in the single channel estimations and the coupled channel estimations in the meson-meson and diquark-antidiquark configurations are all above the threshold of DB * − c , however, when considering the coupled channel effects in both meson-meson and diquarkantidiquark configurations, one obtains the eigenenergy to be 8159 MeV, which is 6 MeV below the threshold of DB * − c .In this tetraquark state, the dominant component is DB * − c with a percentage to be 91.57.As for bcc s tetraquark state with J P = 1 + , we find that the eigenenergies obtained in the single channel estimations, the coupled channel estimations in each configuration and the full coupled channel estimations are all above the threshold of D + s B * − c .As for the case of J p = 2 + , the eigenenergies of bccn and bcc s obtained in the full coupled channel estimations are 8273 MeV and 8410 MeV, which are below the threshold of D * B * − c and D * + s B * − c , respectively.In the bc bn tetraquark state with J P = 2 + , the dominant components are (D * B * − c ) 1 , (J/ψB * ) 8 and (bc)(cn) with [i, j, k] = [7, 6, 4], the corresponding percentages of these components are 72.10,11.04 and 10.77, respectively.As for bc b s tetraquark state with J P = 2 + , the dominant component is (D * + s B * − c ) 1 with a percentage to be 95.29.
As for the bc b q tetraquark system, the eigenenergies estimated in the ChQM are collected in Table XI.From the table, one can find that the eigenenergies obtained in the single channel estimations, coupled channel estimations in each ∼ 0% [7,3,3] (bc)(cn) 8828 ∼ 0% (bc)(c s) ∼ 0% [7,3,4] (bc)(cn) 8640 ∼ 0% (bc)(c s) ∼ 0% [7,4,3] (bc)(cn) 8858 ∼ 0% (bc)(c s) ∼ 0% [7,4,4] (bc)(cn) 8582 1.17% (bc)(c s) ∼ 0% [7,5,3] (bc)(cn) 8803 ∼ 0% (bc)(c s) ∼ 0% [7,5,4] ( configuration and the full coupled channel estimations are all above the corresponding lowest physical threshold, which is different with the bccq tetraquark states, where one find three below threshold tetraquark states.To further analyze the role of the coupled channel effects, we estimate the average values of the operators in the Hamiltonian of Q ′ Q Q q system, which are collected in Tables XII and XIII.From the tables, one can find the average values of kinetic terms increase when we include the interaction between mesons and coupled-channel effects.In the full coupled-channel estimations, we find the attraction from confinement potential becomes stronger for bccn tetraquark states with J P = 1 + and J p = 2 + , but the attraction from the OGE potential becomes weak for the bccn tetraquark states with J P = 1 + , while this attraction becomes strong for the bccn tetraquark states with J P = 2 + .As for bcc s tetraquark states, the full coupled-channel estimations indicate the average values of H T , V Con and V OGE are close to those of E M(ΥD) , and the sum of these terms are positive.
As for the bcc s tetraquark state with J P = 2 + , the estimations indicate the confinement potential becomes strong in the full coupled-channel estimation.
V. SUMMARY
To summarize, inspired by the recent observation of fully heavy tetraquark states, we perform a systematic estimation of the triply tetraquark states in a chiral quark model, where the coupled channel effects of meson-meson configuration and diquark-antidiquark configurations are included.The eigenenergies of the S -wave ground states have been estimated.After including the coupled channel effects of both configurations, We notice that the eigenenergies of some tetraquark states are below the corresponding lowest threshold of the physical channel, which indicates that these tetraquark states cannot fall apart directly and thus are stable for strong decay.In Table XIV, we collect all the stable tetraquark states estimated in the present work.For comparison, we also list the corresponding lowest thresholds of the physical channel.
Moreover, comparing with the results in Refs. [127][128][129][130], we find that the masses of the diquark-antidiquark configurations are several hundred MeV higher than those of the color-magnetic interaction model [127,128] and QCD sum rules [129], while the masses obtained in an extended relativized quark model [130] are generally consistent with the present estimations of the diquark-antidiquark configurations. Although there are discrepancies in the estimated masses due to different input parameters and different interactions in different models, the conclusions are basically the same for the triply heavy tetraquark system, i.e., no stable states are found in the diquark-antidiquark configurations except for the estimation of QCD sum rules [129]. But when we consider the coupled-channel effects of the diquark-antidiquark and meson-meson configurations simultaneously, we find there exist several stable tetraquark states which are below the corresponding lowest physical threshold, and which may be accessible to experiments at LHCb.
…cB* and its percentage is about 58.26%, while the (B_c^- B^*)_1 and (B_c^- B^*)_8 channels in the meson-meson configuration and the (bb)(cn) channel with [i, j, k] = [4,4,4] in the diquark-antidiquark configuration are also important, with percentages of 15.71%, 6.16% and 16.57%, respectively. For the J^P = 2^+ case, there is only one physical channel for the bbcn tetraquark state, which is B_c^{*-} B^* with a threshold of 11625 MeV. Similar to the cases of 0^+ and 1^+, the eigenenergies obtained from the single-channel estimations and the coupled-channel estimations in each configuration are all above the threshold of B_c^{*-} B^*. When we consider both meson-meson and diquark-antidiquark configurations simultaneously, the eigenenergy of the bbcn tetraquark state with J^P = 2^+ is estimated to be 11613 MeV, which is about 12 MeV below the threshold of B_c^{*-} B^*, and the percentages of the different channels are 68.00%, 9.79% and 22.21% for the (B_c^{*-} B^*)_1 and (B_c^{*-} B^*)_8 channels and the (bb)(cn) channel with [i, j, k] = [4,6,4], respectively. Different from the bbcn tetraquark system, our estimations indicate the eigenenergies of the bbc s tetraquark states with different J^P quantum numbers are all above the lowest threshold of the corresponding physical channels.
Appendix A: The wave function of the triply heavy tetraquark
a. The color wave function
For the meson-meson configurations, the color wave functions of a q q̄ cluster are the color-singlet and color-octet combinations, where the subscripts [111] and [21] stand for color-singlet (1_c) and color-octet (8_c), respectively. Then the color-singlet tetraquark SU(3) color wave functions can be constructed from two color-singlet clusters (1_c ⊗ 1_c) and from two color-octet clusters (8_c ⊗ 8_c). For the diquark-antidiquark configuration, the color-singlet wave functions can be the product of color-sextet and antisextet clusters (6_c ⊗ 6̄_c) or the product of color-antitriplet and triplet clusters (3̄_c ⊗ 3_c). For the flavor degree of freedom, the quark content of the investigated four-quark system is QQ Q q with Q = {c, b} and q = {u, d, s}, so the isospin can be 1/2 or 0. Here, we adopt F_i^m and F_i^d to denote the flavor wave functions of the tetraquark system in the meson-meson and diquark-antidiquark configurations, respectively. In the present work, the flavor wave functions of the QQ Q q system can be categorized into three types, namely QQ Q q, QQ Q′ q and QQ′ Q q, and the corresponding flavor wave functions for each of these quark contents can be written down accordingly.
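For orientation, the color structures referred to above follow from the standard SU(3) decompositions summarized below; these are generic group-theory identities rather than the model-specific Clebsch-Gordan expressions of the original appendix.

```latex
% Quark-antiquark and (anti)diquark color contents
3 \otimes \bar{3} = 1 \oplus 8 , \qquad
3 \otimes 3 = \bar{3} \oplus 6 , \qquad
\bar{3} \otimes \bar{3} = 3 \oplus \bar{6} .
% Overall color singlets of the tetraquark
1_c \otimes 1_c \supset 1_c , \qquad
8_c \otimes 8_c \supset 1_c , \qquad
\bar{3}_c \otimes 3_c = 1_c \oplus 8_c , \qquad
6_c \otimes \bar{6}_c = 1_c \oplus 8_c \oplus 27_c .
```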
TABLE I :
The concrete values of the model parameters, which are determined by reproducing the masses of the mesons listed in Table II.
TABLE II :
Masses (in units of MeV) of the mesons. The measured values of the masses [150] are also presented for comparison.
TABLE IV :
The lowest-lying eigenenergies of the cccn n = {u, d} and ccc s tetraquarks in the ChQM.
TABLE V :
The lowest-lying eigenenergies of the bb b n n = {u, d} and bb b s tetraquarks in the ChQM.
TABLE VII :
The lowest-lying eigenenergies of the cc bn n = {u, d} and cc b s tetraquarks in the ChQM.
TABLE VIII :
The lowest-lying eigenenergies of the bbcn n = {u, d} and bbc s tetraquarks in the ChQM.
TABLE IX :
The same as Table VI but for cc bn and bbc n tetraquark states with J P = 0 + , 1 + and 2 + .
TABLE X :
The lowest-lying eigenenergies of the bccn n = {u, d} and bcc s tetraquarks in the ChQM.
TABLE XI :
The lowest-lying eigenenergies of the bc bn n = {u, d} and bc b s tetraquarks in the ChQM.
TABLE XII :
Contributions of each term in Hamiltonian to the energy of the bccn tetraquark and bc bn tetraquark in ChQM.E M("channel") stands for the sum of the theoretical thresholds of the lowest physical channel.(unit: MeV).
TABLE XIII :
Contributions of each term in Hamiltonian to the energy of the bcc s and bc b s tetraquark in ChQM.E M("channel") stands for the sum of the theoretical thresholds of the lowest physical channel.(unit: MeV).
TABLE XIV :
Possible bound states with different quantum numbers in the ChQM (unit: MeV). | 2022-05-18T06:47:03.560Z | 2022-05-16T00:00:00.000 | {
"year": 2022,
"sha1": "a9823ba86394609829afd39ea44ce4e06134bb66",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.107.054019",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "a9823ba86394609829afd39ea44ce4e06134bb66",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
224421208 | pes2o/s2orc | v3-fos-license | Poverty in New Zealand: Who is Most Affected, What are the Effects on Students, and How can the Issues be Overcome?
Students from low socio-economic backgrounds in New Zealand face many disadvantages when it comes to education, and, despite government initiatives, the disparity between the poor and the well-off continues to grow in this country. New Zealand is among several countries where income inequalities are large and the impact of socio-economic background on learning outcomes is also large (OECD, 2010). The literature in New Zealand, and overseas, regarding the effects of poverty on education is varied and extensive. This position paper discusses these effects on the learning and behaviour of students and considers what ethnicities are most at-risk as a result. Enablers and barriers to overcoming disadvantages associated with low socio-economic status (SES) background are then reviewed.
INTRODUCTION
According to the Statistics New Zealand report, Vulnerable children and families: Some findings from the New Zealand General Social Survey, one in four children under the age of 18 (268,000) live in households defined as medium- or high-risk (Statistics New Zealand, 2012). Households in the high-risk group include those receiving benefit income, sole parent households, large households, households with Māori respondents, and households where the mother had given birth before the age of 21. Six per cent (67,000) of children live in high-risk households with five or more risk factors. The report also concluded that Māori were over-represented in high-risk households. Children who live in high-risk households are more likely to have poor health and nutrition. Low levels of parental education and stress common in the high-risk household can lead to poor parenting skills and a learning environment with limited stimulation (Statistics New Zealand, 2012).
Research has shown that children who are born into poor families may have poorer levels of educational attainment or cognitive function. Low levels of educational attainment may lead to poor employment opportunities and reduced income in adulthood, and poverty is 'transmitted' to the next generation (Save the Children, 2009). However, the Competent Children, Competent Learners longitudinal study in New Zealand found that what teachers and parents do, their interactions and the opportunities they provide, could make a positive difference to children's achievement at school despite the challenges of poverty (Children's Commissioner, 2013).
In this position paper I will discuss the effects poverty has on learning and behaviour, who is most at-risk in New Zealand, and the barriers and enablers schools and families face when trying to interrupt the cycle of poverty (Save the Children, 2009). Henderson (2013) used a similar format to discuss Māori potential and the barriers to creating culturally-responsive learning environments in Aotearoa/New Zealand. Her paper inspired me to research the effects of poverty on students' learning and behaviour and the link between low SES and ethnicity.
What Ethnicities are Most Affected by Socioeconomic Disadvantage in New Zealand?
Socio-economic status (SES) is a measure of social and economic factors. Children are assigned SES labels according to their level of household income and their parents' educational qualifications (Children's Commissioner, 2013). Ethnicity and SES are often closely linked and in New Zealand, Māori and Pasifika people are more likely to have a lower SES, not because it is a disadvantage to come from these ethnic groups, but because of inequitable distribution of income and education across New Zealand's population (Children's Commissioner, 2013). Thrupp (2006) states that schools may actively maintain inequality as they quietly sort people into winners and losers based on their initial cultural characteristics, thereby maintaining the dominance of the middle classes. He goes on to say that low SES families are not only poor, they are typically in a subordinate social class position within society. It follows, then, that if Māori and Pasifika people are the ethnicities most at-risk of poverty in our society, then teachers, many of whom are from the dominant middle classes, will need to know how to effectively engage and teach these students in their classroom.
According to Alton-Lee (2003), by 2040, current projections predict that the majority of students in New Zealand primary schools will be Māori and Pasifika. This change will occur within the working life of teachers who are currently being trained or inducted into teaching. Furthermore, classrooms are growing increasingly more diverse with many students identifying with many different ethnicities (Alton-Lee, 2003).
What Effect Does Coming From a Low Socioeconomic Background Have on Learning and Behaviour?
The educationalist Helen Ladd writes: "… study after study has demonstrated that children from disadvantaged households perform less-well in school on average than those from more advantaged households" (cited in Children's Commissioner, 2013, p. 2).
According to Perry (2012), low SES families have limited finances and therefore less access to books, educational toys, and educational outings. There is often increased stress in the low socio-economic home, which in turn makes it more difficult to provide a cognitively stimulating environment (Perry, 2012). This is particularly important in single-parent homes, where Nelson et al., (2007) have found that maternal depression is a significant factor contributing to behaviour problems in young children. Letourneau, Duffet-Leger, Levac, Watson and Young-Morris (2011) found in their meta-analysis of ethnic diversity, socio-economic status and child development, that the lower the SES the higher the prevalence of externalising (aggressive) and internalising (depression) behaviour. Hook, Lawson and Farah (2013) looked at the relationship between socio-economic status and executive function (the ability to actively direct, control and regulate thoughts and behaviour) and found that children from higher socio-economic families showed better executive function. They have also shown that the level of executive function is a predictor of school achievement and is also associated with mental health outcomes. However, the parent-child relationship and its ability to buffer stress can mediate the association between childhood SES and executive function. These authors suggested the need for social policies which fund programmes and interventions that reduce parental stress and increase children's access to cognitively-stimulating activities and resources.
Furthermore, an examination of a 25-year longitudinal study of over 1,000 New Zealand children found that the educational aspirations of families and positive parent-child interactions played a large role in children from low SES backgrounds succeeding at school (Fergusson, Horwood & Boden, 2008). This supported the Best Evidence Synthesis: The Complexity of Community and Family Influences on Children's Achievement in New Zealand which reported that regardless of SES background, families with high levels of educational expectations have the most positive effects on their children's achievement at senior school level (Biddulph, Biddulph & Biddulph, 2003). The evidence indicated that most parents were prepared to help their children as far as their resources permitted.
In New Zealand and overseas, studies have found that lower SES children are less likely to have access to a stimulating curriculum, and more likely to experience less-qualified teachers with lower expectations (Perry, 2012). In some communities this can lead to a kind of socio-economic segregation or "white flight" where higher SES families send their children out of town to schools rather than to the local one. This may lead to a 'spiral of decline' where the quality of education schools can provide declines as lower SES areas lose their 'brightest' students (Perry, 2012;Thrupp, 2006).
Disturbingly, an Australian study found the quality of pedagogy was lower in schools with large numbers of students from disadvantaged backgrounds (Griffiths, Amosa, Ladwig & Gore, 2007). Furthermore, studies have also found that the school-level SES had a detrimental effect on students as they progressed through school (Holmes-Smith 2006, cited in Buckingham, Wheldall & Beaman-Wheldall, 2013). A study by Cassan and Kingdom (cited in Buckingham et al., 2013) put it succinctly: students from lower SES backgrounds were often found in lower-quality schools.
Jensen (2013) discusses seven differences between middle-class and low-income students showing up at school: health and nutrition; vocabulary; effort; hope and growth mindset; cognition; relationships; and distress. Jensen provides advice about how schools can address these differences, however, he makes it clear that without teachers getting to know their students well, addressing these seven factors will mean little. Thrupp (2006) questions whether the onus should be on schools to solve the problem. While schools can make a difference there is still a role for government to devise long-range strategies to eliminate poverty. However, Thrupp believes schools are not benign because they can help to reproduce social inequalities through the 'hidden curriculum' of schooling which is set up for the white middle classes. What can teachers do to encourage enablers and respond to barriers so as to address SES in relation to ethnicity? Studies have shown that positive home/school relationships and parents' active support of their children's education can make a difference to achievement at school.
Building Relationships
For schools to make positive links with families they need to know about their students' lives outside of school and families' expectations of what it is that schools can achieve (Groundwater-Smith, 2011). Schools must work from a position of whanaungatanga (making connections) and get to know the iwi and hapu in their areas, talk to kaumatua and then, when ready, experience parts of that world when they can (Macfarlane, 2004). Macfarlane goes on to state that parental involvement is a must if schools wish to lessen academic and behavioural disadvantages, and then lists several ways of interacting with Māori parents. Parents of students experiencing learning and behaviour difficulties also have aspirations for their futures. As educators, it is important that we do not kill these dreams (Macfarlane, 2004, p. 69). Effective engagement of Pasifika parents and communities also relies on relationships which must be fostered among all partners (Gorinski & Fraser, 2006, cited in Ferguson et al., 2008). When family and school form positive relationships, outcomes for students quickly improve (Berryman, cited in Bottrell & Goodwin, 2011). To build a relationship of trust, schools need to actively construct knowledge with the community and be willing to listen and learn, and schools must allow families to be self-determining: to let families decide how they will be involved in schools (Berryman & Bishop, 2011). When all parties construct and share common visions and goals it is most effective for partnerships (Children's Commissioner, 2013).
The New Zealand Education Review Office (ERO) (2008) found that when schools with diverse communities started to recognise the cultural identity and values of students, their parents and their families, then those cultural identities and values started to appear in the school culture and practices. Schools which do this well identified the skills and expertise in their communities and valued them. They held regular meetings involving parents and whānau and this had a positive effect on engagement. Key people from either the school or the wider community led these meetings and provided a bridge for parents to come into school. This built parents' confidence, especially if schooling had not been a positive experience for them in the past. The strengthening of the school, whānau and community partnership then benefited student learning and well-being.
It is up to schools to reach out to families who, for many reasons, may find schools inhospitable places. However, schools also need to reach out to those community agencies that support families (Groundwater-Smith, 2011). This is exactly how one school in Christchurch changed the way in which it engaged with its community after the tragedy of the Canterbury earthquakes. Principal Christine Harris (2013) discussed how her approach to engaging whānau changed dramatically after the earthquakes. Her approach changed as she, and the teachers at her school, started to meet the needs of the community, which then built up a considerable amount of trust with whānau. At the same time, they were building 'strong and respectful' relationships with support agencies. Harris concluded that if there was a concept that encapsulated the learning she and her team experienced it was the importance of developing relationships above all else. She said relational trust began to evolve in her school's diverse community as the school reached out to all members of the community in need. Harris also talked of the school meeting the holistic needs of the student first which developed a strong sense of ako (collaborative and reciprocal approach) and awhinatanga (interpersonal care and support).
Harris and her teachers showed that they were committed to their community and cared about its social and emotional stability, and that they were willing to embrace diverse cultures and value cultural exchanges at both the personal and pedagogical levels (Munns, Sawyer & Cole, 2013).
Ministry of Education Resources
Ministry of Education (MOE) resources are many and varied, however, too often they are not taken up by schools. There need to be teachers and leaders within the schools with cross-cultural competency to ensure the likes of Ministry of Education resources such as Ka Hikitia and Tātaiako are implemented effectively (Henderson, 2013). Tapaleao (2014) attributed an increase in Māori and Pasifika students achieving NCEA 2 to a number of MOE initiatives introduced in schools. These included mentoring programmes and homework centres such as the Power Up Pasifika and Starpath projects. The Starpath project, launched by the University of Auckland in 2005, is research-based and aims to help high school students from low- to mid-socioeconomic backgrounds achieve. Both projects use mentors to guide students in their learning and educational aspirations. Mentors and teachers offer their services to students in Power Up stations around Auckland and Wellington free-of-charge. The Power Up programme is uniquely Pacific in that the homework centre invites parents and families to come in and act as support-figures for their children. They are held in places familiar to Pacific families such as churches and community halls and a meal is provided afterwards (Tapaleao, 2014).
Culturally Responsive and Empathetic Teachers/Educators
Empathetic teachers create a culture of care in their classrooms and respond to their students' culture positively. They are aware of and understand Article 2 of the Treaty of Waitangi which allows Māori the right to protect their knowledge, language, values, beliefs and practices (Macfarlane, Glynn, Cavanagh & Bateman, 2007). When teachers in New Zealand get their bi-cultural relationship right then multi-cultural relationships will do likewise.
Empathetic teachers promote self-efficacy in their classrooms and this may lead to higher academic achievement in low SES schools. Teachers need to find the "slightest thing" to help students believe in themselves as learners (Munns et al., 2013;Perry, 2012).
While Durie (2003) does not discount socioeconomic factors, he believes the essential difference is that Māori live at the interface between two worlds: te ao Māori (the Māori world) and te ao whānui (the wider global society). Therefore, it is the way these two views impact on each other that is the determinant factor for educational success, however, that does not mean socio-economic factors are unimportant (Durie, 2003).
Culturally-responsive educators will help students appreciate their own place in their community while at the same time opening up the possibilities of a wider world (Munns et al., 2013). It is in the space between these two worlds that culturally-responsive educators, including Resource Teachers: Learning and Behaviour, must walk and help to facilitate the resilience students and teachers will need to navigate a world they don't necessarily live in every day.
Lack of Knowledge of Te Ao Māori (The Māori World)
Educational policy, teaching practice and key performance indicators for staff must match the Māori world view reality (Durie, 2003). Barnes, Hutchings, Taupo & Bright (2012) agree, stating that some teacher-practice demonstrated a low level of awareness of Māori world views and more needed to be done to train and professionally develop teachers and school leaders so as to improve engagement with Māori students (Barnes, et al, 2012). Research undertaken in Colorado into family-school partnerships (FSP) also highlighted the need for training teachers to be taught ways of engaging and interacting positively with diverse families (Sullivan, Miller, Lines & Hermanutz, 2009).
Lack of Knowledge of the Pasifika World
The word Pasifika is used to recognise the multiethnic, heterogeneous group of Pasifika peoples which comprises different languages and cultures (Ferguson, et al., 2008). Pasifika peoples is a collective term used to refer to the cultures of Samoa, Cook Islands, Tonga, Niue, Tokelau, Fiji, Solomon Islands, Tuvalu, and other Pasifika or mixed heritages. Ferguson et al., (2008) state that an understanding of the immigration history of Pasifika peoples to New Zealand is critical for all those working in education because it may enable teachers and educators to better-appreciate the role of schooling in replicating wider society, as well as assist in perceiving students as having complex social identities.
Deficit-Theorising and Differences in Values
Some teachers tend to blame students, or their socioeconomic background, for learning and behaviour difficulties and so problems are often attributed to students' weaknesses and not to the teaching method, curriculum or teacher-student relationship. Teachers need to look at their own pedagogy and not dwell on the supposed inadequacies of their students (Munns et al., 2013). Often there are differences in values between the school and parents that can lead to communication breakdown. Gillanders, McKinney and Ritchie (2012) found parents praised teachers who communicated with them positively about their child as well as the things their children needed to work on.
Lack of Cross-Cultural Skills
Many teachers do not have adequate knowledge and understanding of te reo me ngā tikanga Māori and the history and cultures of Pasifika peoples, and teacher training institutions need to ensure this is taught (Ferguson et al., 2008; Gillanders, McKinney & Ritchie, 2013). If teachers and school leaders cannot step outside their own culture and engage with an 'other' in a cultural partnership, it is highly unlikely that the engagement with whānau and community will occur (Henderson, 2013). In New Zealand it is very easy to not have to step outside of the eurocentric culture and this mono-cultural lens colours everything people do, their values, their professional practice, and the way they live.
CONCLUSION
Coming from a low SES background should not be a precursor for not doing well academically. Those working in education need to be aware of the issues faced by children and families from low SES backgrounds. They need to upskill in cross-cultural competency and learn more about the Māori and Pasifika worlds because these are the ethnicities most-affected by poverty in New Zealand. Upskilling in cross-cultural competency is a must for all educators because the population in our schools is going to continue to become more diverse (Alton-Lee, 2003). Evidence shows teaching that is responsive to student diversity can have very positive impacts on both low and high achievers. Therefore, if teachers can help low SES students achieve academically, then they may go on to higher learning and gain better jobs with better pay, which can then break the poverty cycle. Key to this outcome is building positive relationships with students and their whānau from lower SES homes: it goes hand-in-hand with forming strong and respectful relationships with those agencies working with family and whānau. | 2020-10-19T17:06:23.495Z | 2014-07-01T00:00:00.000 | {
"year": 2014,
"sha1": "837583d88ae8de61538c5c450d64868d0347c9ab",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.kairaranga.ac.nz/index.php/k/article/download/248/150",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c530166b76987da43c3bc1750a24c0667956e1f9",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
259402395 | pes2o/s2orc | v3-fos-license | EU COUNTRIES HIERARCHICAL CLUSTERING TOWARDS CIRCULAR ECONOMY PERFORMANCE INDICATORS
. Circular economy indicators can be used for ranking countries and their hierarchical clustering. It shows the differences and similarities in the progress between individual countries, made toward the realization of the principles of the circular economy (CE). CE is a modern concept and the aspiration to preserve resources and protect the environment. The paper presents a cluster analysis at the level of the European Union (EU27) countries, based on the data of CE composite indicators, which is managed by the statistical office of the European Union (Eurostat). SPSS IBM 26.0 statistics software was used for cluster analysis, while the ANOVA method was applied to check the statistical significance of the obtained results. The most important results achieved in the paper are the classification into 6 clusters within the EU27 countries, with similar policies in the area of the circular economy. The best-ranked cluster is cluster 6 which consists of only 1 country, the Netherlands, the European leader in the circular economy. Accordingly, this paper aims to determine the similarities and differences between the EU member states in the implementation of circular economy postulates by dividing them into clusters. In this way, the highest mean values of the indicators within the cluster will be determined, and thus the circular economy model that other clusters should follow.
INTRODUCTION
European Union has a growing interest in legislation and policies related to CE and developed CE indicators as guidelines for their countries. The European Commission launched the Circular Economy Action Plan in 2015 and highlighted the importance of CE. It has introduced monitoring its performance across countries to understand and benchmark the level of success of policy initiatives [1]. Without measuring, researchers cannot manage any system, and for that reason, many indicators are being presented in the area of CE. Some authors reviewed tools and methodologies of CE indicators that are already in use and their disadvantages: none of the indicators and methodologies alone was capable of monitoring all the CE characteristics (Elia et al.,2017) and none of the methods alone could account for the retention of value in waste resources (Iacovidou et al., 2017) [2,3].
The composite indicator aims to measure progress towards the CE in a uniform way and to drive development towards an innovative, prosperous, zero-waste environment [4,5,6]. In 2018, a Monitoring Framework for the CE was presented by the statistical office of the European Union (Eurostat), the Joint Research Centre, and the European Patent Office. The CE indicators are classified into four thematic areas: Production and Consumption, Waste Management, Secondary Raw Material, and Competitiveness and Innovation. CE indicators related to the generation of different types of waste are within the thematic area of Production and Consumption. The indicators included in the thematic area of Waste Management are the recycling rates of different products. Indicators such as material use rates and trade of recyclable raw materials are within the thematic area of Secondary Raw Material. Indicators such as patents in CE, gross investment in tangible goods, persons employed, and value-added are within the thematic area of Competitiveness and Innovation. The framework illustration shows that most of the indicators focus on the preservation of materials, through strategies such as recycling and reuse, and on environmental protection in general [7].
As follows, the authors selected Eurostat indicators in order to perform a cluster analysis on EU27 countries and grouped them into similar circular economy "ecosystems" [8].
METHODOLOGY
The methodology applied in this research is based mainly on cluster analysis. The application of cluster analysis aims to group the European Union member states based on selected indicators for the circular economy. Grouping the member states of the European Union based on the values of the indicators should contribute to the perception of similarities between the countries in certain clusters in the matter of conducting circular economy policy. Also, by determining descriptive statistics for each cluster individually, the cluster with the best conditions for implementing the circular economy is determined, which provides guidelines for other member countries to advance in this area. In doing so, the authors use indicator values for the EU27 from the most recent available year in the Eurostat data set for CE (Circular Economy) indicators, as follows:
▪ Resource productivity - The indicator is defined as the gross domestic product (GDP) divided by domestic material consumption (DMC). DMC measures the total amount of materials directly used by an economy. It is defined as the annual quantity of raw materials extracted from the domestic territory of the local economy, plus all physical imports minus all physical exports. It is important to note that the term 'consumption', as used in DMC, denotes apparent consumption and not final consumption. DMC does not include upstream flows related to imports and exports of raw materials and products originating outside of the local economy [13].
▪ Recycling rate of municipal waste - measures the share of recycled municipal waste in the total municipal waste generation. Recycling includes material recycling, composting, and anaerobic digestion. The ratio is expressed in percent (%) as both terms are measured in the same unit, namely tonnes [13].
▪ Circular material use rate - measures the contribution of recycled materials to overall materials demand. The indicator measures the share of material recycled and fed back into the economy - thus saving extraction of primary raw materials - in overall material use. The circular material use rate, also known as the circularity rate, is defined as the ratio of the circular use of materials to the overall material use [13].
▪ Recycling rate of all waste excluding major mineral waste - The indicator is calculated as recycled waste divided by total waste treated excluding major mineral wastes, multiplied by 100 [13].
Cluster analysis as a multivariate technique was carried out using the statistical software SPSS IBM 26.0, using an agglomerative hierarchical approach. First of all, the authors conducted a hierarchical agglomerative procedure based on squared Euclidean distance. The obtained agglomeration scheme (Table 1), as an output result of the SPSS cluster analysis, involves a bottom-up analysis that combines objects and groups until each of them is in a group or cluster [11]. The last large change in the agglomeration coefficients indicates the number of clusters to retain. Ward's method, applied in the agglomerative procedure, is based on the analysis of variance to estimate the distance between clusters and thus differs from the other linkage methods [9,10].
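To make the clustering step concrete, the sketch below reproduces the same idea - Ward's agglomerative clustering of countries on standardized CE indicators - in Python with SciPy rather than SPSS. The country labels and indicator values are illustrative placeholders, not the Eurostat figures used in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical indicator matrix: one row per country, columns are resource
# productivity, municipal recycling rate, circular material use rate, and
# recycling rate of all waste excluding major mineral waste.
countries = ["Netherlands", "Czechia", "Estonia", "Bulgaria"]  # placeholder labels
X = np.array([
    [5.5, 57.0, 30.9, 71.0],
    [2.5, 43.3, 13.4, 55.0],
    [1.0, 30.8, 17.3, 41.0],
    [0.8, 28.0,  4.9, 30.0],
])

# Standardize so no single indicator dominates the Euclidean distance.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

# Ward linkage: agglomerative, variance-minimizing merges.
Z = linkage(Xz, method="ward")

# Cut the dendrogram into a chosen number of clusters (here 2 for illustration).
labels = fcluster(Z, t=2, criterion="maxclust")
for country, lab in zip(countries, labels):
    print(country, "-> cluster", lab)
```

Choosing the cut corresponds to locating the last large jump in the agglomeration coefficients, as described above.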
RESULTS AND DISCUSSION
Descriptive statistics among the formed clusters represent the second output of the conducted cluster analysis (Table 2). From Table 2, it can be concluded that there are a total of six clusters of EU member states, which is also confirmed in the previously mentioned agglomeration scheme by the last large change in the Coefficients column. Also, Table 2 shows the mean values of the analyzed indicators between the clusters. The country-cluster Netherlands has the highest values for all indicators except for indicator C1-Resource productivity, which has the maximum value in the third cluster. Therefore, countries that want to achieve higher values of the indicators should strive toward the country-cluster represented by Cluster 6.
Source: Author's elaboration based on the SPSS IBM 26.0 cluster output
The hierarchical agglomerative approach as well as the descriptive statistics between the existing clusters show that, based on the analyzed indicators of the circular economy, six clusters of European Union countries have been identified, which are represented by a map diagram (Figure 1). Based on the European map chart, it can be seen that the third cluster contains as many as 11 member states of the European Union with a similar circular economy policy. The authors used the ANOVA procedure to check the statistical significance of differences in the average values of indicators among clusters. Based on the conducted ANOVA procedure (Table 3), statistically significant differences in average values can be stated for the indicators, as can be seen in the Sig. column where p < 0.05 for all CE indicators.
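The per-indicator ANOVA check described above can be reproduced with a few lines of SciPy. The data frame below uses hypothetical cluster labels and indicator values purely to illustrate the one-way test; it is not the authors' SPSS output.

```python
import pandas as pd
from scipy.stats import f_oneway

# Hypothetical layout: one row per country with its cluster label and indicators.
df = pd.DataFrame({
    "cluster": [1, 1, 2, 2, 3, 3],
    "resource_productivity": [1.2, 1.4, 2.8, 3.1, 0.9, 1.0],
    "municipal_recycling":   [30, 33, 52, 55, 25, 27],
})

for indicator in ["resource_productivity", "municipal_recycling"]:
    # Split the indicator values by cluster and run a one-way ANOVA.
    groups = [g[indicator].values for _, g in df.groupby("cluster")]
    f_stat, p_value = f_oneway(*groups)
    print(f"{indicator}: F = {f_stat:.2f}, p = {p_value:.4f}")
```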
CONCLUSION
The paper presents issues related to circular economy indicators and cluster analysis in the EU27 countries and the disparities in performance, which are largely the result of the different starting positions of countries' development [12]. The authors used selected CE indicators from the Eurostat database, as follows: Resource productivity, the Recycling rate of municipal waste, Circular material use rate, and the Recycling rate of all waste excluding major mineral waste. The statistical software SPSS IBM 26.0 was used for cluster analysis, while the ANOVA method was used to check the statistical significance of the obtained results. Within the paper, the EU27 countries are classified into six clusters, and the top-ranked is the country-cluster Netherlands, with the highest values for most indicators. It is interesting that Czechia and Estonia joined the EU in the same year (2004) and also constitute the fourth cluster, which means those countries have similar circular economy policy requirements. Moreover, further research should provide specific recommendations for improving the circular economy environment in the clusters of EU countries with the weakest progress in circular economy performance, especially for the countries from the third and fourth clusters. Circular economy policymakers should provide a more effective CE strategy for countries from the mentioned clusters to eliminate limitations stemming from their economic transition period before joining the EU. The optimal model of circular economy policy is the Netherlands. This country-cluster has a long-term government-wide framework for raw materials in all industries until 2030. The mentioned framework has main priorities in biomass and food, plastics, the manufacturing industry, the construction sector, and consumer goods. | 2023-07-11T00:22:00.316Z | 2022-12-18T00:00:00.000 | {
"year": 2022,
"sha1": "aa0377d1546bae7625ffd68d916a6b62096feb44",
"oa_license": null,
"oa_url": "http://casopisi.junis.ni.ac.rs/index.php/FUWorkLivEnvProt/article/download/11254/4725",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1f68264e5de6929f9cd772435aa685e6c55eb98b",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
103467402 | pes2o/s2orc | v3-fos-license | Generic Mechanism for Pattern Formation in the Solvation Shells of Buckminsterfullerene
Accurate description of solvation structure of a hydrophobic nanomaterial is of immense importance to understand protein folding, molecular recognition, drug binding, and many related phenomena. Moreover, spontaneous pattern formation through self-organization of solvent molecules around a nanoscopic solute is fascinating and useful in making template-directed nanostructures of desired morphologies. Recently, it has been shown using polarizable atomistic models that the hydration shell of a buckminsterfullerene can have atomically resolved ordered structure, in which C60 atomic arrangement is imprinted. In analyzing any peculiar behavior of water, traditionally, emphasis has been placed on the long-ranged and orientation-dependent interactions in it. Here, we show through molecular dynamics simulation that the patterned solvation layer with the imprints of the hydrophobic surface atoms of the buckminsterfullerene can be obtained from a completely different mechanism arising from a spherically symmetric, short-ranged interaction having two characteristic lengthscales. The nature of the pattern can be modified by adjusting solvent density or pressure. Although solute–solvent dispersion interaction is the key to such pattern formation adjacent to the solute surface, the ordering at longer lengthscale is a consequence of mutual influence of short-range correlations among successive layers. The present study thus demonstrates that the formation of such patterned solvation shells around the buckminsterfullerene is not restricted to water, but encompasses a large class of anomalous fluids represented by two-lengthscale potential.
Simulation Models & Methods
The interaction potentials U_vv(r) between two CG water molecules and U_uv(r) between a C atom of the C60 and a CG water molecule are shown below. The red line in Figure S1b is the solute-solvent potential with =0, which corresponds to purely repulsive fullerene-water interaction, and the blue line with =1 in the same figure is the solute-solvent potential when the interaction potential between the carbon atom of C60 and the CG water is of Lennard-Jones type (cf. Eq. 2). Figure S1: Plots of (a) solvent-solvent and (b) solute-solvent potentials for the C60-CG water system. The solute-solvent potential is between a carbon atom of the C60 and a CG water molecule. The red line in the lower panel represents repulsive C60-water interaction, i.e. (cf. Eq. (2) with =0), and the black line is for attractive C60-water interaction (cf. Eq. (2) with =1).
After the equilibration of a cubic solvent box of a particular density ρ* at a particular temperature T*, the C60 molecule was inserted at the middle of the box. All the solvent molecules overlapping with the carbon atoms of C60 were removed. The C60 molecule is rigid and has been held fixed at the middle of the box. The composite C60-solvent system was equilibrated in the same way as mentioned above. Finally, a production run of 10^7 steps was performed.
Trajectories are stored at an interval of 20 steps for post processing.
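As an illustration of the kind of post-processing applied to such stored frames, the following sketch (an assumed workflow, not the authors' actual analysis code) computes the C60(center)-solvent radial distribution function and the average occupancy of a chosen spherical shell; solvent coordinates are assumed to be available as NumPy arrays with the C60 held fixed at a known center.

```python
import numpy as np

def radial_distribution(frames, center, box_length, r_max, n_bins=200):
    """C60(center)-solvent g(r) from stored frames.

    frames : iterable of (N, 3) arrays of solvent coordinates
    center : (3,) array, position of the fixed C60 center
    """
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    n_frames, n_solvent = 0, 0
    for coords in frames:
        d = coords - center
        d -= box_length * np.round(d / box_length)   # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]
        n_frames += 1
        n_solvent = coords.shape[0]
    rho = n_solvent / box_length**3                  # bulk number density
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(edges**3)
    g = hist / (n_frames * rho * shell_vol)          # normalize to ideal gas
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, g

def shell_occupancy(frames, center, box_length, r_lo, r_hi):
    """Average number of solvent particles with r_lo < r <= r_hi."""
    counts = []
    for coords in frames:
        d = coords - center
        d -= box_length * np.round(d / box_length)
        r = np.linalg.norm(d, axis=1)
        counts.append(np.count_nonzero((r > r_lo) & (r <= r_hi)))
    return float(np.mean(counts))
```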
Dissection of the first hydration shell of C 60 :
We dissect the first solvation shell (for ρ*=0.6) further into three layers: front, middle and end, each of width 0.24 Å. The ADFs for these three layers are shown in Figure S2. It is observed that the front layer follows the same pattern as that observed in Figure 1 (main text). In this layer all the 5- and 6-membered rings are occupied (Figure S2a), with the pattern at the hexagonal faces being triangular in shape; whereas in the second layer (Figure S2b) additional solvent particles are accumulated on both the hexagonal and pentagonal faces. In the end layer, however, all the centers of the hexagonal faces are unoccupied (Figure S2c), and this is due to the excluded volume effect of the particles present in the previous (middle) layer. As expected the front layer is very similar to that observed in Figure 1 (main text). The middle layer has a particular pattern and the pattern in the end layer is complementary to that of the middle. It is observed that the number of particles in the front layer for ρ*=0.6 is around 32, the same as the number of faces on the C60 surface, indicating that on average each face is occupied by one solvent particle. Any additional solvent particles sit slightly away from the C60 center, and these additional particles, which are available at higher solvent densities, are responsible for making the pattern different from that for the low density case. Comparing
The average number of solvent particles in the spherical shell region between the first and the second peaks of the C60-solvent RDF is shown in Figure S3. It shows that up to the solvent density 0.16, there is almost no solvent particle between the conventional first and second solvation layers. At higher densities, however, the value of <N> increases. The large <N> values at ρ*=0.6 and above are due to the development of a new peak at r*=3.0 in the C60-solvent g(r) (cf. Figure 2c main text). | 2019-04-09T13:08:15.558Z | 2018-01-26T00:00:00.000 | {
"year": 2018,
"sha1": "0167c0c36325827c900ed6d7c27b66400e6e24b2",
"oa_license": "acs-specific: authorchoice/editors choice usage agreement",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.7b01858",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7ac895ff881ca4f5a9922a6177dbe2991b09dfaa",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
267698627 | pes2o/s2orc | v3-fos-license | Assessing postnatal care for newborns in Sub-Saharan Africa: A multinational analysis
Background No doubt providing optimal postnatal care (PNC) prevents both maternal and neonatal deaths, in addition to the prevention of long-term complications. Sub-Saharan Africa (SSA) had the highest neonatal mortality rate, despite this adequate content of PNC for the newborn is not explored in SSA, therefore, it is important to identify the factors affecting adequate content of PNC for the newborn in the region. This may assist the program and policymakers to give an intervention based on the findings of the study. Methods A secondary data analysis was performed using 21 SSA countries’ Demographic and Health Surveys. A total weighted sample of 105,904 respondents were included in this study. A multilevel binary logistic regression model was fitted. The odds ratios along with the 95% confidence interval were generated to determine the individual and community-level factors of adequate PNC for the newborn. A p-value less than 0.05 was declared as statistical significance. Results Adequate PNC for newborns in sub-Saharan Africa was 23.51% (95% CI: 23.26, 23.77). Mothers age ≥ 35(AOR = 1.21,95% CI: 1.06,1.16), mothers’ primary education (AOR = 1.18, 95% CI: 1.13, 1.23), secondary education (AOR = 1.58, 95% CI:1.51,1.66), higher education (AOR = 1.61,95% CI:1.49,1.75), rich wealth status (AOR = 1.05,95% CI = 1.01,1.10), ANC visits 1–7 (AOR = 1.61,95% CI:1.51, 1.73), antenatal care (ANC) visit 8 and above (AOR = 2.54,95% CI: 2.32, 2.77), health facility delivery (AOR = 4.37, 95% CI:4.16,4.58), lived in east (AOR = 0.23,95% CI = (0.20,0.26), central(AOR = 0.21,95% CI = 0.19,0.24), west African sub-regions (AOR = 0.23,95% CI = 0.21, 0.27), Urban dwellers (AOR = 1.22,95% CI: 1.17,1.27), and low community poverty (AOR = 1.21 (95% CI = 1.11,1.31) were associated with adequate content of PNC for the newborn. Conclusion The finding of this study showed that the overall prevalence of adequate content of PNC for a newborn in SSA countries was low. The low prevalence of adequate content of postnatal care for newborns in SSA countries is a concerning issue that requires immediate attention. Age of the respondents, level of education, wealth status, ANC visits, place of delivery, residence, community-level poverty, and sub-region of SSA were the individual-level and the community-level variables significantly associated with adequate PNC for the newborn. Strategies should focus on increasing access to antenatal care services, particularly for vulnerable populations, such as younger mothers, those with lower education levels, and individuals residing in impoverished communities to improve PNC for the newborn.
Background
Postnatal Care (PNC) is one of the care packages that comprise the worldwide continuum of care for mothers and newborns [1]. Maternal and child mortality rates are widely used to judge a country's health system's success. However, 2.4 million children died in their first month of life worldwide in 2020. Over 6700 newborns die every day, accounting for 47% of all child mortality under the age of five [2]. Most neonatal deaths (75%) occur during the first week of life, and in 2019, about 1 million newborns died within the first 24 hours [2,3].
Sub-Saharan Africa (SSA) had the highest neonatal mortality rate (27 deaths per 1000 live births) in 2020, followed by Central and Southern Asia and the Middle East (23 deaths per 1000 live births).A child born in SSA or Southern Asia is 10 times more likely to die in the first month of life than a child born in a high-income nation.The majority of neonatal deaths in impoverished nations are caused by childbirth, intrapartum, and inadequate immediate infant care practices [2, 4,5].
The postpartum period is critical for both the mother's and newborn child's health and survival [6,7].It is suggested that women who give birth in a healthcare facility with a trained attendant get prompt PNC and stay at the facility for at least 24 hours in the case of an uncomplicated birth [8].However, it has been observed that even when women give birth in a healthcare facility, PNC may not be included because women may only be there for a few hours [9].Only 13% of the 48% of women in SSA who give birth without a trained birth attendant receive a PNC visit [10].
The international community established a sustainable development goal (SDG) to reduce neonatal and under-five mortality rates to 12 and 25 deaths per 1000 live births, respectively, by 2030 [11,12].Low-income countries, such as SSA, are falling behind in meeting the maternal and newborn health targets established under the SDG agenda [13].
Different studies show that neonatal deaths can easily be prevented and avoided in developing countries with simple, low-cost, and short-term newborn care [14][15][16][17][18][19][20].Adequate content of PNC for the newborn defined in terms of contents that a newborn utilize included: having the cord examined, having the temperature of the baby measured, counseling on newborn danger signs, counseling on breastfeeding, and having had an observed breastfeeding session [7,21].
There are no comprehensive studies regarding the content of PNC for a newborn in SSA though one single study has been conducted in Rwanda [7], but it does not take into account the hierarchical nature of the Demographic health survey (DHS).Therefore, this study aimed to assess the prevalence of adequate content of PNC for newborns in SSA countries using the recent DHS.A regional comparison is necessary to achieve current global initiative agendas such as the SDGs.Furthermore, the information derived from this study will provide invaluable insight into sub-regional adequate content of PNC for newborns.This study may also support policymakers, non-governmental organizations (NGOs), other global organizations, and researchers in identifying the factors in the African region that influence adequate content of PNC in order to provide rapid interventional measures and resource allocation to enhance their utilization of postnatal newborn care.
Study design and data source
This study used secondary cross-sectional household data for women collected from 2015 to 2021 in the demographic and health surveys of 21 SSA countries, namely Burundi, Ethiopia, Malawi, Rwanda, Tanzania, Uganda, Zambia, Zimbabwe, Madagascar, Angola, Cameroon, Benin, Gambia, Guinea, Liberia, Mali, Mauritania, Nigeria, Senegal, Sierra Leone, and South Africa. We retrieved the data for this study from the DHS website www.dhsprogram.com after the request to access and download the data was approved. In this study, we combined these datasets in order to generate a large dataset representing different SSA countries and allowing generalization of PNC for newborns across the countries. The DHS is a nationally representative household survey that collects data on a broad range of health indicators such as mortality, morbidity, fertility, family planning, and maternal and child health [21]. In this study, we used the child record datasets (KR file) and extracted the outcome and independent variables. A two-stage stratified selection procedure was used to identify study participants. In the first step, enumeration areas (EAs) were chosen at random, while households were chosen in the second stage. A total weighted sample of 105,904 respondents with their respective children was included in the study (Table 1).
Populations
All newborns within the first 2 days after birth, born in the two years preceding the survey across the 21 SSA countries, were our source population, while newborns in the selected Enumeration Areas (EAs) or primary sampling units of the survey clusters were our study population. Mothers who were either permanent residents or guests who slept in the selected residences with their newborns the night before the survey were eligible to be interviewed [22]. Furthermore, from the included DHS data sets, newborns with a missing value of the outcome variable were excluded based on the DHS guideline.
Variables and data collection procedure
Dependent variable. The dependent variable was adequate content of PNC for the newborn, defined based on the World Health Organization (WHO) recommendations [23] and the availability of data in the DHS dataset for births in the two years preceding the survey [21]. Adequate content of PNC for the newborn was considered when a newborn had received all five PNC contents, which included: having the cord examined, having the temperature of the baby measured, counseling on newborn danger signs, counseling on breastfeeding, and having had an observed breastfeeding session [7,21].
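A minimal sketch of how such a binary outcome can be derived, assuming the five components have already been recoded as 0/1 indicator columns (the column names below are hypothetical placeholders, not the actual DHS variable codes):

```python
import pandas as pd

# Hypothetical recoded PNC component indicators (1 = received, 0 = not received)
df = pd.DataFrame({
    "cord_examined":        [1, 1, 0, 1],
    "temperature_measured": [1, 1, 1, 0],
    "danger_sign_counsel":  [1, 0, 1, 1],
    "breastfeed_counsel":   [1, 1, 1, 1],
    "breastfeed_observed":  [1, 1, 0, 1],
})

components = ["cord_examined", "temperature_measured", "danger_sign_counsel",
              "breastfeed_counsel", "breastfeed_observed"]

# Adequate content of PNC only when all five components were received
df["adequate_pnc"] = (df[components] == 1).all(axis=1).astype(int)
print(df["adequate_pnc"].mean())
```

A weighted prevalence would additionally apply the women's sample weight (v005/1,000,000) when averaging.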
Individual level variable.
Several individual-level variables were included, age (15-24, 25-34, and 35-49 years), media exposure (yes/no), respondent currently working (yes/no), education level (no education, primary, secondary and higher education), wealth index generated using household asset data and by Principal Component Analysis (PCA) to classify the respondents into wealth quintiles (poor, middle, and rich), the number of ANC visit (0, 1-7, 8 and above), and child sex (male, female).
Community level variables. Since the DHS collects data at the individual level except for residency and distance from the health facility, in this study the remaining community-level variables were included in the analysis by generating them from the individual-level variables; these include community poverty level, community media exposure, and sub-regions of SSA. Community-level poverty was determined by the proportion of households in the poorest and poorer quintiles obtained from the wealth index results; it was categorized as low if the proportion of households belonging to the poor categories was less than 50% and as high if the proportion was greater than 50%. Community-level media exposure was created from the respondents' exposure to newspapers/magazines, radio, and television after merging and recoding as Yes/No, and the results were classified as low (below the median) or high (above the median) based on the median proportion of respondents with exposure to at least one medium.
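The aggregation step behind these derived community variables might look like the following sketch (hypothetical column names; the cut-offs follow the definitions above, and treating a share of exactly 50% as low is an assumption made here for illustration):

```python
import pandas as pd

# One row per respondent, with the survey cluster (primary sampling unit) id
df = pd.DataFrame({
    "cluster_id":  [1, 1, 1, 2, 2, 2],
    "wealth_poor": [1, 0, 1, 0, 0, 1],   # 1 = poorest/poorer wealth quintile
    "media_any":   [0, 1, 0, 1, 1, 1],   # 1 = exposed to at least one medium
})

# Community-level poverty: share of poor households within each cluster
poor_share = df.groupby("cluster_id")["wealth_poor"].transform("mean")
df["community_poverty"] = (poor_share > 0.5).map({True: "high", False: "low"})

# Community-level media exposure: cluster share compared with the median share
media_share = df.groupby("cluster_id")["media_any"].transform("mean")
df["community_media"] = (media_share > media_share.median()).map({True: "high", False: "low"})

print(df)
```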
Data analysis
Stata version 14 statistical software was used for coding and data analysis.The data were adjusted and weighted throughout the analysis to ensure the DHS sample's representativeness and to get credible estimates and standard errors prior to data analysis.We used women's individual sample weight (V005/1000000) for this study to account for the hierarchical nature of the DHS data [22].Multilevel modeling, also known as hierarchical linear modeling or mixed-effects modeling, is a statistical technique used to analyze data that has a hierarchical or nested structure.It is particularly useful when dealing with data that has multiple levels of analysis, such as individuals nested within groups or repeated measures nested within individuals [24].In our analysis, DHS datasets have hierarchical data structures with individuals nested under geographical clusters (primary sampling units) and newborns were nested inside a cluster.This may affect standard logistic regression model assumptions such as equal variance and independence assumptions.Thus, in this study, four models were fitted: the empty model, which did not have explanatory variables, model I, which contained individual-level factors, model II, which contained community-level factors, and model III, which contained both individual and community-level variables.Because the models were nested, the Intra-class Correlation Coefficient (ICC), Median Odds Ratio (MOR), and Likelihood Ratio test (LLR) values, as well as the deviation (-2LLR), were utilized for model comparison and fitness, respectively.Model III was chosen as the best-fitting model because of its low deviation compared to other models.
The outcome variables' random effects or measures of variation were estimated using the median odds ratio (MOR), Proportional Change in Variance (PCV), and Intra Class Correlation Coefficient (ICC).Taking clusters as a random variable, the MOR is defined as the median value of the odds ratio between the area at the highest risk and the clusters at the lowest risk clusters when randomly picking out two clusters, MOR ¼ e 0:95 ffi ffi ffi ffi VA p .While, the ICC tells the variation of adequate PNC for the newborn between clusters, and is calculated as; ICC ¼ VA VAþ3:29 * 100%.Furthermore, the PCV shows the variation in the prevalence of adequate PNC for the newborn explained by factors and calculated as; PCV ¼ VnullÀ VA V null * 100% where; Vnull = variance of the empty model, and VA = area/cluster level variance [25,26].Finally adjusted odds ratios with 95% confidence intervals and a p-value of less than or equal to 0.05 were utilized in the multivariable analysis to identify associated factors of adequate postnatal newborn care.
Ethical consideration
Ethics approval was not required for this study since the data is secondary and is available publicly.However, we have been given the authorization letter to download the DHS data.More details concerning DHS data and ethical standards are available at http://goo.gl/ny8T6X.
Obstetrics-related characteristics of respondents
About 72,214 (68.19%) of the respondents gave birth in a health facility. Only a small number, 6,721 (6.35%), of the respondents had eight or more ANC visits based on the new WHO recommendation on ANC (Table 3).
Newborn related characteristics
An approximately equal proportion of male (50.84%) and female (49.16%) newborns were included in this study; 52.23% were between the ages of 0 and 11 months, with a median age of 14 (IQR: 10, 18) months. The majority of the newborns (80.29%) were currently breastfed, and almost all (98.27%) of the births were singletons (Table 4).
Prevalence of adequate content of PNC for the newborn in SSA countries
The prevalence of adequate content of PNC for newborns in sub-Saharan Africa was 23.51% (95% CI: 23.26, 23.77).Among those countries, Burundi recorded the lowest adequate content of PNC for newborns (2.19%), and the highest was observed in South Africa (67%) (Fig 1).
Factors associated with adequate content of PNC for the newborn in SSA countries
In this analysis, a multivariable multilevel model was used to examine the factors associated with adequate content of postnatal care (PNC) for newborns. From the final model, it was found that residence, community-level poverty, and sub-region of Sub-Saharan Africa (SSA) were the community-level factors significantly associated with adequate content of PNC for the newborn (Table 5).
Discussion
The provision of postnatal care significantly reduces the risk of morbidity and mortality for both mothers and children. Postpartum care helps healthcare practitioners recognize and treat postpartum issues. The purpose of this study was to determine the prevalence of adequate content of PNC for newborns and its associated factors in sub-Saharan Africa. The prevalence of adequate content of PNC for the newborn was 23.51% (95% CI: 23.26, 23.77). The observed prevalence of adequate content of PNC for newborns is still too low to achieve the desired reduction in postnatal-related child mortality and morbidity. The prevalence of adequate content of PNC for newborns ranged from 2.19% in Burundi to 67% in South Africa. Specifically, South Africa has the highest coverage of adequate content of PNC for newborns, followed by Zimbabwe (49.14%), Zambia (43.62%), and Malawi (43.28%). On the other hand, Burundi, Ethiopia, Mauritania, Nigeria, Tanzania, Madagascar, Mali, Angola, and Uganda have below 20% coverage of adequate content of PNC for newborns. Several factors may contribute to this low coverage of adequate content of PNC. All the countries included in this study, except South Africa, are essentially low- and middle-income countries (LMICs), and it has been found that healthcare services in these low-resource countries are insufficient, substandard, or non-existent.
Evidence has shown that mother and child health care services are underserved in LMICs due to limited infrastructure, low levels of education, low or no enlightenment, and poverty [27][28][29][30][31]. Therefore, the variation in the prevalence of adequate content of postnatal care (PNC) for newborns between countries might be attributed to differences in healthcare infrastructure, access to services, health education, socioeconomic factors, and cultural influences.
Countries with stronger healthcare systems, better access, higher health literacy, and fewer socioeconomic disparities tend to have higher rates of adequate PNC. Addressing these factors through improved infrastructure, enhanced access, health education, and socioeconomic interventions can help bridge the gap and improve PNC prevalence. Another probable explanation for the low prevalence of adequate PNC is a cultural practice that prohibits freshly delivered neonates from being touched by anybody or leaving the house until the 10th to 12th day after birth [32].
The results of the study showed that residence, community-level poverty, and Sub-Region of sub-Saharan Africa were the community-level factors significantly associated with adequate content of PNC for the newborn. Likewise, the age of the respondents, level of education of the respondents, wealth status, number of ANC visits, and place of delivery were the individual-level variables significantly associated with adequate PNC for the newborn.
The odds of adequate content of PNC among newborns whose mothers were aged ≥ 35 years were higher than among those whose mothers were aged 15-24 years. Similar findings were reported in Malawi and SSA [33,34]. Possibly, this positive relationship might be explained by the likelihood that experience with PNC services increases with women's age [32]. This study showed that as women's educational status increases, so do the chances of their newborns having access to adequate content of PNC. This finding is in agreement with different studies [34][35][36][37]. This might be because as women gain power, they will have access to knowledge about the benefits of using PNC services and will be encouraged to use them. This illustrates that education improves health knowledge and behavior [38].
The odds of adequate content of PNC among newborns whose mothers live in urban areas were higher compared to newborns whose mothers reside in rural areas. Similar findings were reported from different studies, in Nigeria [37] and in Nepal [39]. In rural areas, the disadvantages may include access, cost of services, distance and travel time, opportunity costs of leaving work to attend health facilities, and lack of skilled personnel [40,41]. Another possible explanation might be that child and maternal health services are more concentrated in urban areas than in rural areas, which can lead to poor health outcomes in rural areas. This implies that rural areas should be the focus of attention to improve the utilization of adequate postnatal newborn care. To alleviate the gap in newborn care in rural areas, interventions may include strengthening healthcare infrastructure, implementing mobile health services, training and deploying community health workers, and providing transportation support. Additionally, raising health education and awareness, as well as offering financial support, can help bridge the gap and improve access to adequate content of PNC in rural areas.
This study found that both the wealth status of the household and community-level poverty determine PNC utilization; thus, newborns from rich households and from communities with low poverty had higher odds of adequate content of PNC compared to those from poor households and communities with high poverty, respectively. This could be because women with a higher wealth index are more likely to be knowledgeable [42] and to develop an interest in learning [43]. This implies that improving household wealth status and decreasing the level of community poverty are necessary to increase adequate postnatal newborn care. Addressing community-level poverty is crucial for improving the overall well-being of communities and, subsequently, the quality of postnatal care provided to newborns. This requires comprehensive poverty alleviation strategies that focus on improving education, income generation, and access to healthcare services.
The likelihood of newborns receiving adequate content of PNC was found to be lower in the East, Central, and West African sub-regions compared to South Africa. This discrepancy may be attributed to variations in literacy levels between South Africa and the other regions. For instance, South Africa has a higher literacy rate, estimated at 94.6% [44], which potentially influences health knowledge and the utilization of PNC services. The higher literacy rate in South Africa may contribute to better awareness and understanding of the importance of PNC, leading to increased utilization and ultimately improved health outcomes for newborns. Another possible explanation might be the difference in gross domestic product (GDP) [45], which in turn may affect the infrastructure of the country.
According to the findings of this study, there is a significant association between antenatal care (ANC) utilization, facility-based delivery, and the likelihood of receiving adequate content of PNC for newborns. This finding is supported by previous research studies [33,34,46]. It could be that mothers who received ANC visits and delivered in health facilities are more likely to receive comprehensive counseling on PNC services, including education on identifying neonatal danger signs. This suggests that ANC and facility-based delivery play a crucial role in promoting optimal PNC practices and improving maternal and newborn health outcomes. Efforts should be made to improve access to antenatal care services and ensure that pregnant women receive comprehensive care throughout their pregnancy. This might be achieved through the expansion of healthcare facilities, particularly in underserved areas, and the training and deployment of skilled healthcare professionals.
Regarding strengths, we used a large dataset from 21 SSA countries, which is representative across the countries. Moreover, a robust multilevel modeling technique was employed, accounting for the hierarchical nature of the survey data and yielding results that are more reliable. However, the study is not free of limitations: the survey data may be susceptible to recall bias, as mothers were interviewed about care provided within two days after delivery, even if their baby is currently up to two years old. Furthermore, due to the cross-sectional nature of the study, it may not establish a clear temporal relationship between the independent and outcome variables. These considerations should be taken into account when interpreting the findings.
Conclusions
The findings of this study showed that the overall prevalence of adequate content of PNC for newborns in SSA countries was low. Various factors at both the individual and community level were found to be significantly associated with adequate content of PNC, including the respondents' age, level of education, wealth status, ANC visits, place of delivery, residence, community-level poverty, and sub-region of SSA. Efforts should be directed towards interventions that address these factors to improve the provision of adequate content of PNC for the newborn. Strategies should focus on increasing access to antenatal care services, particularly for vulnerable populations such as younger mothers, those with lower education levels, and individuals residing in impoverished communities. Furthermore, future research should explore the root causes of the variation in adequate content of PNC across different regions of Africa. This study can inform the development of targeted interventions and policies aimed at improving PNC outcomes and ultimately reducing newborn morbidity and mortality rates in SSA.
Table 2. Socio-demographic and other characteristics of respondents in SSA, 2023 (n = 105904).
With regard to the community-level factors, the odds of adequate content of PNC for newborns who lived in the East, Central, and West African sub-regions were decreased by 77%. Residence, community-level poverty, and Sub-Region of SSA were the community-level variables significantly associated with adequate content of PNC for newborns. Similarly, the age of the respondents, level of education of the respondents, wealth status, number of ANC visits, and place of delivery were the individual-level variables significantly associated with adequate content of PNC for the newborn. After adjusting for other individual- and community-level variables, the odds of adequate content of PNC for newborns whose mothers were aged ≥ 35 years were 1.12 (95% CI = 1.06, 1.16) times higher compared to those aged 15-24. Newborns whose mothers attended primary, secondary, and higher education had 1.18 (95% CI = 1.13, 1.23), 1.58 (95% CI = 1.51, 1.66), and 1.61 (95% CI = 1.49, 1.75) times the odds of adequate content of PNC, respectively, compared to those whose mothers had no formal education. Newborns from households with rich wealth status had 1.05 (95% CI = 1.01, 1.10) times higher odds of adequate content of PNC than those from poor households. The odds of adequate content of PNC for newborns whose mothers attended 1-7 and 8 or more ANC visits were 1.61 (95% CI = 1.51, 1.73) and 2.54 (95% CI = 2.32, 2.77) times higher, respectively, compared to those with no antenatal care visits. Newborns who were born at a health facility had 4.37 (95% CI = 4.16, 4.58) times higher odds of adequate content of PNC compared to those born at home.
Table 5.
(Continued) Null model-contains no explanatory variables; Model I-includes individual-level factors only; Model II-includes community-level factors only; Model III includes both individual-level and community-level factors, AOR-adjusted odds ratio, CI-confidence interval ICC: Intra Class Correlation Coefficient, MOR: Median Odds Ratio, PCV: Proportional Change in Variance. | 2024-02-17T06:17:02.043Z | 2024-02-15T00:00:00.000 | {
"year": 2024,
"sha1": "0ff2cdc833706d5995b7374cf3824fcbc141a70b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d917428a526e93e9341cdb524da363bfc92106a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53353989 | pes2o/s2orc | v3-fos-license | Is ketoprofen safe to use when breastfeeding?
NSAIDs are among the most prescribed drugs during breastfeeding, mainly used for pain or inflammatory diseases. Some recent data suggest that ibuprofen is compatible with prolonged breastfeeding. Some women will, however, need a "stronger" treatment. Drugs such as codeine, tramadol or morphine are not recommended for breastfeeding women, especially during the early post-partum stage. Thus, ketoprofen could be of interest. Due to a lack of pharmacologic information on the drug's transfer into mature milk, breastfeeding is being contraindicated, and as a result the mother is "under-treated". The present study was undertaken in order to quantify ketoprofen's transfer into mature breast milk. Thirteen patients gave their written informed consent to participate and completed the study. After the first week following delivery, the women received ketoprofen in order to treat pain or inflammatory disorders. Each patient received a dose of 176 mg/day (±56). One milk sample and 2 blood samples at different stages of the treatment were collected. Ketoprofen milk concentrations were determined by using high-performance liquid chromatography. The mean ketoprofen milk concentration was 3.3 mg/L [range 12.2 to 76]. The mean milk fat content was 3.9 g/100 mL [range 1.7 to 6.5] and the mean milk protein content 1 g/100 mL [range 0.8 to 1.6]. Our first results showed a ketoprofen TID of 0.16 mg/kg/day [range 0.02 to 0.37] and a relative infant dose (RID) of 0.037% [range 0.009 to 0.138] of the weight-adjusted maternal daily dose. Ketoprofen was not found in breast milk; due to its hydrophilic properties, the drug could not cross the blood-milk barrier. In conclusion, the use of ketoprofen is compatible with prolonged breastfeeding, after the early post-partum stage.
Introduction
Breastfeeding rates vary depending on the mother's age, marital status, education, birthplace, BMI and smoking status during pregnancy [1]. Medication exposure is the second most common reason for stopping breastfeeding; the first is a lack of supportive practices [2]. During the post-partum period, every woman receives on average 3.3 medications [2]. Analgesics/NSAIDs, antibiotics and antihypertensive drugs come first. Medical network data confirm that these medications are also prescribed at a late stage of breastfeeding. One of the most frequently given treatments during this period is analgesics (22% according to the Medic-Al network; Table 1). For example, after a C-section, breastfeeding women were given up to 400 mg of ibuprofen, and no measurable amounts of ibuprofen were found by gas-liquid chromatography in the breast milk samples [3]. A recent study by our team on the pharmacologic data of ibuprofen in mature milk concluded that ibuprofen is safe even for prolonged breastfeeding [4]. However, women might sometimes need a stronger painkiller. In 2006, Gideon Koren et al. [5] demonstrated that codeine, initially approved by the AAP as compatible with breastfeeding, was responsible for the death of a breastfed neonate. Later, other authors concluded that opioids and codeine are hazardous to breastfed infants [6][7][8]. Furthermore, our team published a retrospective diagnosis of an adverse drug reaction in a breastfed neonate exposed to dextropropoxyphene [9]. A clinical study including 26 breastfeeding women showed that the concentration of ketoprofen transferred into colostrum was very low [10]. In 2014, a French team reviewed epidemiological data on ADRs in breastfed children and showed that ketoprofen prescribed to the mothers had gastrointestinal effects [11]. They highlighted the need for a pharmacokinetic study in order to establish the link between ADRs and maternal ketoprofen use. Therefore, the aim of the present study was to quantify the transfer of ketoprofen into mature milk and to discuss whether nursing mothers could receive ketoprofen over a long period. Furthermore, we compared these concentrations with the age of lactation and the milk's fat and protein content.
Study Design
The present study, ANTALAIT PHRC AOR 10127, was funded by the French Ministry of Health and approved by the institutional ethical review board (Comite de protection des personnes (CPP) 12506, CPP Ile de France 01, Hotel Dieu, Paris). The study included 13 breastfeeding mothers. All were treated orally with one or two 100 mg tablets of ketoprofen for uncontrolled pain (post-surgical pain, back pain, dental treatment), in addition to standard treatments, from the 7th day after delivery. The ketoprofen half-life (t1/2) is documented in the literature as between 90 and 120 minutes. Steady-state concentrations are achieved when ketoprofen is administered at a constant rate (5 × t1/2 ≈ 10 hours), at a minimum of one 100 mg tablet per day. Women included in our study were among those who contacted the Medic-Al network asking for information on the compatibility of ketoprofen with breastfeeding. Women below 18 years of age, women treated with naproxen, and women who had not given their informed written consent were not included in the study. The study took place from 1 January 2011 to 30 September 2012.
Sample Collection
Breast milk samples were collected with a breast pump between 1.5 and 8 hours after the third ketoprofen dose. Each woman provided a 20 mL sample of breast milk. Two maternal blood samples were obtained as well: the first between 30 minutes and 2 hours following the ketoprofen intake, and the second between 4 and 8 hours. The milk samples were obtained between 1.5 and 8 hours after the third ketoprofen dose (Figure 1). The samples were then analyzed to determine ketoprofen concentrations. All measurements were made at the Toulouse pharmacokinetics and clinical toxicology laboratory, Purpan Hospital.
Ketoprofen in Blood and Milk Samples
Milk was frozen at −18 °C and defrosted before analysis, whereas the blood samples were stored at +4 °C. A stability test was performed for ketoprofen concentrations in milk and plasma to verify that the blood could be stored at +4 °C and the milk frozen and defrosted. Ketoprofen concentrations in milk and blood samples were measured using high-performance liquid chromatography (HPLC) with ultraviolet (UV-DAD) detection. For blood samples, 200 µL of serum was spiked with 40 µL of internal standard, and a liquid-liquid extraction with dichloromethane at an acidic pH was used. For the milk samples, highly hydrophobic compounds (lipids) were first eliminated with dichloromethane at an alkaline pH, followed by acidification of the aqueous phase
and extraction of ketoprofen with hexane. Separation was performed at 25 °C on a Protonsil 120-5-C18-ACE-EPS column (5.0 µm; 125 mm × 4.6 mm; Bischoff). The mobile phase consisted of a gradient from acetonitrile (35%), methanol (15%) and ammonium formate buffer pH 3 (50%) to acetonitrile/methanol/ammonium formate buffer pH 3 (40%/15%/45%) over a 13 min acquisition at a flow rate of 1 mL/min. For both serum and breast milk, ketoprofen was detected at 256 nm. Assay performance was validated through determination of linearity, specificity, recovery, limit of quantification (LOQ), limit of detection (LOD), repeatability and reproducibility over five days. Retention times were 6.70 min for ketoprofen and 7.74 min for the internal standard (IS). Linear responses in analyte/IS peak-area ratios were observed for ketoprofen concentrations ranging from 0.1 to 25 mg/L in plasma and 10 to 200 µg/L in breast milk, using a weighted (1/C) linear regression with a regression coefficient of > 0.999 (for all samples). The average extraction recovery for both ketoprofen and the IS was 79% in serum and 51% in breast milk, with an imprecision (coefficient of variation, CV%) of less than 20%. For serum and breast milk, the LOQ was 0.1 mg/L and 10 µg/L, respectively, and the LOD was 0.03 mg/L and 4 µg/L, respectively. For serum and breast milk, maximum imprecision and inaccuracy were respectively 5.66 and −8.34% for within-day variability and 4.67 and −10.57% for between-day variability.
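As an illustration of the weighted (1/C) linear calibration described above, the following sketch fits a weighted least-squares line to hypothetical ketoprofen standards and back-calculates an unknown concentration; the standard concentrations and peak-area ratios are invented for illustration and are not the study's calibration data.

```python
import numpy as np

# Hypothetical plasma calibration standards (mg/L) and analyte/IS peak-area ratios
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 25.0])
ratio = np.array([0.04, 0.21, 0.41, 2.05, 4.12, 10.20])

# Weighted (1/C) least squares: minimise sum((1/C) * (y - a - b*C)^2)
w = 1.0 / conc
X = np.column_stack([np.ones_like(conc), conc])
W = np.diag(w)
intercept, slope = np.linalg.solve(X.T @ W @ X, X.T @ W @ ratio)

def back_calculate(peak_ratio):
    """Convert a measured analyte/IS peak-area ratio into a concentration (mg/L)."""
    return (peak_ratio - intercept) / slope

print(f"slope = {slope:.4f}, intercept = {intercept:.4f}")
print(f"unknown sample ~ {back_calculate(1.50):.2f} mg/L")
```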
Determining the protein and fat concentrations of breast milk
Using the Miris Human Milk Analyzer (HMA) device at the Ile de France human milk bank, Necker Hospital, Paris, we analyzed the components of human breast milk, such as the protein and fat concentrations, by mid-infrared spectral analysis (MIRSA; 4 specific filter instruments). This Miris HMA device (Beldico Villeurbanne France SAS) was designed to study the nutritional quality of small samples of unprocessed or homogenized human milk. Our team and others have validated MIRSA for mothers' milk and donor human milk (DHM). HMA reproducibility: the MIRSA specification gives a measurement reproducibility of < 0.5% for fat and protein. The intra-assay variation of the MIRSA for fat and protein was 1.8% (SD 0.04%) and 2.4% (SD 0.07%), respectively. The protein composition of a milk sample was determined by monitoring the intensity of specific wavelengths of mid-infrared radiation absorbed by the organic substances present in the sample. Absorbed radiation is transformed into molecular vibration energy. The difference in intensity between the incident and outgoing radiation at specific wavelengths varies according to the concentration of the various substances in the samples, such as fats and proteins. By applying the Beer-Lambert law and following the manufacturer's technical protocol, we were able to determine the fat and protein concentration within the milk samples. Our 2 mL milk samples were frozen at −20 °C, then thawed, heated to 40 °C and mixed thoroughly before analysis.
Data Analysis
Data analysis was performed according to the guidelines for studies on the transfer of drugs into breast milk. For samples with undetectable concentrations, a lower limit of quantification for ketoprofen of 10 µg/L was used for calculation. For each compound:
• the mean and maximum milk concentrations over the period of collection were determined;
• the mean and maximum doses that the infant would ingest, assuming an ingestion volume of 150 mL/kg/day, were expressed in absolute amounts per kg and day; and
• the relative infant dose (RID, expressed as a percentage) was obtained by dividing the mean dose that the infant would ingest by the maternal dose related to body weight (expressed in mg/kg/day) (see the sketch after this paragraph).
The results were expressed as mean ± standard deviation. Estimated serum concentrations of ketoprofen (concentrations corresponding to the milk sampling time) were determined from the serum concentration measured in the first blood sample by using Microsoft Excel and the formula C(T) = A × e^(−0.0061 × T). Ketoprofen stability in blood and milk was evaluated at +4 °C, −20 °C and room temperature, corresponding to the storage conditions for the study.
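A minimal sketch of these dose calculations, assuming the 150 mL/kg/day infant milk intake and the exponential serum-decay formula given above (with T in minutes); the concentration, maternal dose, and weight values below are placeholders rather than patient data.

```python
import math

MILK_INTAKE_L_PER_KG_DAY = 0.150  # assumed infant intake of 150 mL/kg/day

def theoretical_infant_dose(milk_conc_mg_per_l):
    """TID in mg/kg/day from a milk concentration expressed in mg/L."""
    return milk_conc_mg_per_l * MILK_INTAKE_L_PER_KG_DAY

def relative_infant_dose(tid_mg_per_kg_day, maternal_dose_mg_day, maternal_weight_kg):
    """RID (%) = infant dose / weight-adjusted maternal dose * 100."""
    return tid_mg_per_kg_day / (maternal_dose_mg_day / maternal_weight_kg) * 100

def estimated_serum_conc(a_mg_per_l, t_min):
    """Serum concentration at time T (minutes): C(T) = A * exp(-0.0061 * T)."""
    return a_mg_per_l * math.exp(-0.0061 * t_min)

# Placeholder values for illustration only
tid = theoretical_infant_dose(0.030)  # milk concentration of 30 ug/L expressed in mg/L
rid = relative_infant_dose(tid, maternal_dose_mg_day=176, maternal_weight_kg=65)
print(f"TID = {tid:.4f} mg/kg/day, RID = {rid:.3f}%")
print(f"estimated C(120 min) = {estimated_serum_conc(40.0, 120):.1f} mg/L")
```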
Statistical Analyses
We used a polynomial regression analysis of quadratic type, with a regression line and data on the log10 scale (Minitab Inc., USA, V.14, 2010 software). The polynomial regression method models the relation between the response variable RID (Y) and each predictor variable (X), namely protein concentration, lipid concentration, and duration of lactation, by extending the simple linear regression model to include the squared predictor variable (X²).
The model was defined as a quadratic polynomial of each predictor, i.e., Y = β0 + β1X + β2X².
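A sketch of fitting such a quadratic model, assuming the form Y = β0 + β1X + β2X² applied to log10-scaled RID values; the data points below are invented solely to illustrate the fitting step.

```python
import numpy as np

# Hypothetical predictor (e.g., milk protein content, g/100 mL) and RID values
x = np.array([0.6, 0.8, 0.9, 1.0, 1.1, 1.3, 1.5])
rid = np.array([0.010, 0.015, 0.022, 0.030, 0.045, 0.075, 0.120])
y = np.log10(rid)  # regression performed on the log10 scale

# Quadratic polynomial regression: y = b2*x^2 + b1*x + b0
b2, b1, b0 = np.polyfit(x, y, deg=2)
y_hat = np.polyval([b2, b1, b0], x)

# Coefficient of determination for the fitted curve
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"y = {b2:.3f}*x^2 + {b1:.3f}*x + {b0:.3f}   (R^2 = {r2:.3f})")
```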
Results
Thirteen women treated with ketoprofen were included, and for each of them we evaluated the transfer of ketoprofen into breast milk. Results were expressed as mean ± standard deviation. The participants' clinical information is summarized in Table 2. The mean age of the nursing women included in this study was 31 years [range 27 to 37], parity was 2.3 [range 1 to 4], and gravidity 2.4 [range 1 to 4]. Four of the deliveries were premature (under 37 weeks of gestation) and the others were at term (above 37 weeks). The average delivery term was 38.5 weeks [range 28+6 to 41 weeks]. The stability test showed that blood can be stored for more than 18 days at +4 °C and milk for 3 days, and that they can be frozen at −18 °C for more than 53 days and 4 months, respectively, then defrosted before analysis of the ketoprofen concentration (Table 3). Of the 13 patients included, 2 received acetaminophen as well as ketoprofen to treat pain. The mean ketoprofen dose taken by the mothers was 176 mg/day ± 56, the milk concentration was 31.3 mg/L ± 17.4 [range 11.3 to 76], the TID 0.16 µg/kg/day [range 0.02 to 0.37], the RID 0.037 [range 0.009 to 0.138], and the milk/blood ratio 0.027 [range 0.007 to 0.1]. Individual data for ketoprofen are presented in Table 4. Ketoprofen was detected in all plasma and milk samples. The maternal residual ketoprofen blood concentration at T0 was 40.24 mg/L [range 18.04 to 62.44]. The mean ketoprofen milk concentration at Tmax was 34 mg/L [range 16 to 76]. The mean TID of ketoprofen was 0.16 µg/kg/day [range 0.15 to 0.37]. The mean fat content of the milk samples was 3.23 g/100 mL ± 1.15 and the mean protein content 0.87 g/100 mL ± 0.27 (Table 5). The RID as a function of lactation age, fat content and protein content of the milk samples is illustrated in Figures 2-4. Ibuprofen is frequently used to treat pain and inflammatory diseases, especially since codeine was proven to be toxic for breastfed infants, but in some situations it may be insufficient to treat high pain scores [7,12]. Until August 2010, according to the American Academy of Pediatrics policy, NSAIDs were compatible with breast-feeding [13]. The data showing a limited milk transfer are available only for ibuprofen [3,14]. Very low concentrations of ibuprofen are detected in breast milk owing to the hydrophilic properties of this NSAID. These results initially concerned only colostrum, but we have also demonstrated that the transfer of ibuprofen into mature breast milk is minimal [4]. Concerning ketoprofen's transfer into breast milk, data concerning colostrum were published in 2007; in that clinical study (26 breastfeeding women) the RID was under 1% [10]. We decided to confirm these data in mature milk, as suggested by an epidemiological study in 2014 [11]. In the present study, the maximum milk concentration was used to calculate the maximum dose of ketoprofen
that the baby would ingest, based on his or her personal milk intake. Gas-liquid chromatography assay methodology was used to determine concentrations of ketoprofen in serum and breast milk [15]. Our first results showed a ketoprofen TID of 0.16 µg/kg/day [0.02-0.37] and a relative infant dose (RID) of 0.037% [0.009-0.138] of the weight-adjusted maternal daily dose. This shows that the amount of ketoprofen ingested by the baby is very small and therefore compatible with breast-feeding. We showed in the present study that, after oral administration of ketoprofen during prolonged breastfeeding (more than one week after delivery), the amount transferred into mature breast milk and ingested by the nursing baby is negligible. In addition, in our study the mothers declared no side effects and no infant presented an adverse reaction. Given ketoprofen's hydrophilic properties and high protein binding [10], we expected this low milk transfer. The RID was well below 1% and lower than the RID found in colostrum (0.6%). Ketoprofen's transfer into mature breast milk decreases with the age of lactation. This decrease can be explained by the fact that the milk protein content decreases with the age of lactation (Figures 2 and 3). No correlation was found between the fat content and the relative infant dose (Figure 4). MIRSA offers an accurate method to simultaneously determine the protein, fat and even energy concentration of the DHM provided by human milk banks [16]. Therefore, MIRSA-based instruments may be useful in the daily practice of milk banks and in the clinical practice of neonatal intensive care units [16]. This experimental study addresses the epidemiological question raised by Soussan et al. [11] about ketoprofen in breast milk. Our results indicate that maternal ketoprofen is unlikely to be responsible for symptoms found in breastfed neonates because of its negligible milk transfer. According to our results, breastfeeding may be allowed when ketoprofen is administered to the mother. It can be used to treat any type of pain and inflammatory disease after the first postpartum week. However, additional factors should be considered, such as associated therapy and the infant's clinical state [17]. Human milk banks may also be asked about donors who take or want to take ketoprofen. Indeed, infants who receive milk from a donor milk bank are premature babies. In our study, four mothers breastfed their preterm baby (less than 37 weeks gestational age), and their RID was still under 1%. We can therefore advise that ketoprofen is safe during breastfeeding, including for women who donate their own milk to their child or even for anonymous donors.
Conclusion
No specific data can be found on the transfer of painkillers during prolonged breastfeeding. What can we prescribe to breastfeeding women who need a stronger painkiller? Ketoprofen could be prescribed to breastfeeding mothers, even for a prolonged period, without any side effect for the infant. Our study is the first on the pharmacologic data of ketoprofen in mature milk. There is negligible passage of ketoprofen into breast milk over the long term, and we report no adverse reaction. Our results may help to promote prolonged breastfeeding and the safe acceptance of donor human milk [18,19]. As suggested by Soussan et al. [11], pharmacokinetic data on analgesics and NSAIDs are needed to understand drug-related adverse events occurring in breastfed children. Further research with this experimental design on other drugs frequently prescribed during breastfeeding (neurological, endocrine, psychotropic and antihypertensive drugs) or used for self-medication (herbal products) is necessary to analyze the risk-benefit balance and to promote prolonged breastfeeding [20,21].
Conflict of Interest
We declare no conflict of interest. This study was funded by National funding program AOR 10127. | 2019-03-16T13:07:26.511Z | 2015-11-02T00:00:00.000 | {
"year": 2015,
"sha1": "50bb3e050d299f3a36dd6237ba4a2da772a95147",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.19080/jpcr.2015.01.555552",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3187042366a6a43ec701dc5f777074877e3e78ce",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240563864 | pes2o/s2orc | v3-fos-license | Biodecolorization of Remazol Brilliant Blue–R dye by Tropical White-Rot Fungi and Their Enzymes in The Presence of Guaiacol
The ability of tropical white-rot fungi and their enzymes to decolorize synthetic dyes was investigated. Production of lignin-modifying enzymes (LMEs) by three newly isolated fungi, namely Trametes hirsuta D7, Ceriporia sp. BIOM 3, and Cymatoderma dendriticum WM01, was observed over 9 days of incubation under static conditions. The results showed that LME production was enhanced in the presence of guaiacol. T. hirsuta D7 produced only laccase (Lac), with the highest activity of 22.6 U/L on the 5th day of cultivation. At the same time, Ceriporia sp. BIOM 3 and C. dendriticum WM01 secreted both laccase (Lac), with activities of 0.2 U/L and 1.0 U/L, respectively, and manganese peroxidase (MnP), with activities of 0.1 U/L and 1.0 U/L, respectively. Among the fungi, T. hirsuta D7 efficiently degraded 65% of Remazol Brilliant Blue-R (RBBR) dye within 72 h using laccase alone. This study shows that laccase may have a major role in the decolorization of synthetic dyes, followed by MnP and LiP.
Introduction
The production of numerous chemicals, including dyes, increased along with intensive industrialization. Synthetic dyes are mostly used in the coloring processes of the paper, plastics, leather, cosmetics, and textile industries. During industrial processing, 10-15% of the dyestuff is released into waterways as effluent [1], [2]. The presence of dyes in aquatic ecosystems reduces the penetration of sunlight, lowering photosynthetic activity and decreasing the solubility of gases, which causes toxic effects on aquatic organisms. Moreover, dye-containing effluents are toxic, mutagenic, carcinogenic, and highly resistant to degradation by native microorganisms [3]. Therefore, the search for appropriate technologies for removing dyes from industrial wastewater is an important priority [4].
Dye wastewater can be treated physically, chemically, or biologically. Physical treatment removes the dye by adsorption; in chemical treatment, the chromophores of the dyes are modified by chemical reactions; while biological treatment occurs through adsorption and enzymatic degradation. Biological treatment using microbes provides an alternative to the existing physical and chemical processes, with the advantages of low cost, complete mineralization, and environmental friendliness [5], [6].
Many researchers have extensively reported the biodegradation of dyes by white-rot fungi (WRF).
Molares-Alvarez [7] reported the decolorization of malachite green and crystal violet dyes by the WRF Pleurotus ostreatus. Falah et al. [8] also reported that Leiotrametes flavida was a potential white-rot fungus for the decolorization of anthraquinone dyes. Several other WRF, such as Hirschioporus larincinus, Inonotus hispidus, Phlebia tremellosa, and Coriolus versicolor, can be used to decolorize dye effluent [9]. White-rot fungi are commonly known as the most efficient microorganisms at breaking down synthetic dyes due to their ability to produce one or more extracellular lignin-modifying enzymes (LMEs) such as laccase, manganese peroxidase (MnP), and lignin peroxidase (LiP). Their lack of substrate specificity allows them to degrade a wide range of synthetic dyes such as anthraquinone, mono-azo, and diazo dyes [10], [11].
Indonesia, as a tropical country, has a high biodiversity of microorganisms, including fungi. A total of 200,000 of the estimated 1.5 million fungal species are thought to be found in Indonesia [12]. Research on bioremediation, i.e., the use of microorganisms or their enzymes to biodegrade contaminants and restore environments to their original condition, is still ongoing. Many investigators have isolated fungi from the environment for the biodegradation of textile dyes; Phanerochaete chrysosporium, Trametes hirsuta [13], Lentinus polychrous [14], Trametes versicolor, Pestalotiopsis sp. [15], and many others have been used in decolorization studies. Researchers should utilize the biodiversity of fungi in Indonesia's forests by exploring, collecting, and screening potential fungi for different industrial uses, especially for decolorizing textile wastewater. This paper presents the potential of tropical fungi isolated from several locations in Indonesia for the decolorization of an anthraquinone synthetic dye. LME production by the fungi during the decolorization process is also investigated.
Instruments
The instruments used in this study were a centrifuge (Wise Spin CF-10), a hot plate, a digital analytical balance (Mettler Toledo, Switzerland), a UV-Vis spectrophotometer (UV-1800 Shimadzu, Japan) for measuring the absorbance of dyes, and laboratory glassware such as Erlenmeyer flasks, volumetric flasks, test tubes, and Petri dishes.
Fungal pre-culture and determination of fungal growth and decolorization rate
Fungal isolates were individually pre-cultured on malt extract agar (MEA) and incubated at 27±3 °C for 7 days. One plug of a pre-cultured isolate (8 mm diameter) was placed onto a double-layer agar medium containing RBBR dye and incubated at 27±3 °C for 7-9 days. The composition of the double-layer agar medium (per liter) was according to Anita et al. [16]. The diameter of fungal growth and of the clear zone, which indicated decolorization, was measured every day for 7-9 days. The growth and decolorization rate data were expressed in cm/day.
Enzyme production
Fungal isolates were grown individually in 20 mL of malt extract-glucose-peptone (MGP) medium in a 100-mL Erlenmeyer flask. The MGP medium (per liter of distilled water, pH 4.5) consists of 20 g malt extract, 20 g glucose, and 1 g peptone. The addition of guaiacol was investigated to determine the effect of the inducer on enzymatic activity. Twenty milliliters of sterilized MGP were inoculated with three plugs of fungal pre-culture and incubated under static conditions at room temperature (27±3 °C) for 9 days. Enzyme activity was measured on days 1, 3, 5, 7, and 9. All experiments were conducted in two replications.
Dye decolorization
One milliliter of RBBR dye stock solution (2000 mg/L) was added to the fungal culture at the optimum incubation time for enzyme production, to a final concentration of 100 mg/L. Uninoculated Erlenmeyer flasks served as controls. The decolorization efficiency was determined at 24, 48, and 72 h after the reaction started. All experiments were conducted in two replications.
Decolorization assay
The supernatant from filtration of the fungal culture was used for the decolorization assay using a UV-Vis spectrophotometer (UV-1800 Shimadzu, Japan). Dye decolorization was determined by measuring the change in absorbance of the RBBR dye at 592.5 nm. Decolorization efficiency (R, %) was calculated according to the formula given in [15].
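Assuming the referenced formula is the usual absorbance-based definition of decolorization efficiency, it can be computed as in the short sketch below (the absorbance readings are placeholders, not measurements from this study).

```python
def decolorization_efficiency(abs_initial, abs_final):
    """R (%) = (A_initial - A_final) / A_initial * 100, using absorbance at 592.5 nm."""
    return (abs_initial - abs_final) / abs_initial * 100

# Placeholder absorbance readings of RBBR dye before and after fungal treatment
print(f"R = {decolorization_efficiency(1.20, 0.42):.1f}%")
```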
Enzyme assay
Laccase activity was measured by monitoring the oxidation of 2,2'-azino-bis-[3-ethylbenzothiazoline-6-sulphonic acid] (ABTS) at 420 nm. One unit of laccase activity was defined as the amount of enzyme necessary to oxidize one µmol of the substrate in 1 min. The reaction mixture for the laccase assay contained 100 μL of culture filtrate, 400 μL of 0.1 M acetate buffer pH 4.5, and 500 μL of 2 mM ABTS [17].
LiP activity was determined by the oxidation of veratryl alcohol at 310 nm. One unit of LiP was defined as the amount of enzyme that oxidized one µmol of veratryl alcohol per minute. The reaction mixture for the LiP assay was composed of 1 mL culture filtrate, 0.3 mL of 2 mM H2O2, and 2 mL LiP buffer [18].
Manganese peroxidase activity was determined from the formation of Mn3+ in sodium malonate buffer (pH 4.5) in the presence of H2O2 at 470 nm. One unit of MnP was defined as the amount of enzyme required to form 1 µmol of Mn3+ in 1 min. The reaction mixture for the MnP assay consisted of 100 µL culture filtrate, 845 µL of 50 mM malonate buffer (pH 4.5), 12.5 µL of 20 mM 2,6-dimethoxyphenol (DMP), 12.5 µL of 20 mM MnSO4, and 30 µL of 2 mM H2O2 [16].
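To illustrate how activities in U/L are obtained from these spectrophotometric readings, the sketch below converts an absorbance change per minute into enzyme activity via the Beer-Lambert law; the extinction coefficient, path length, and volumes are assumptions chosen for illustration and are not values reported in this study.

```python
def enzyme_activity_u_per_l(delta_abs_per_min, extinction_m_cm, path_cm,
                            total_vol_ml, filtrate_vol_ml):
    """Activity in U/L of culture filtrate, where 1 U oxidizes 1 umol of substrate per min.
    Beer-Lambert: d[product]/dt (mol/L/min) = (dA/dt) / (epsilon * l)."""
    dc_umol_per_l_min = delta_abs_per_min / (extinction_m_cm * path_cm) * 1e6
    return dc_umol_per_l_min * (total_vol_ml / filtrate_vol_ml)

# Example laccase calculation, assuming epsilon(ABTS) ~ 36,000 M^-1 cm^-1 and a 1 cm cuvette
activity = enzyme_activity_u_per_l(delta_abs_per_min=0.012, extinction_m_cm=36_000,
                                   path_cm=1.0, total_vol_ml=1.0, filtrate_vol_ml=0.1)
print(f"laccase activity ~ {activity:.1f} U/L")
```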
Results and Discussion
Fungal isolates were analyzed for their growth and decolorization rates on solid agar medium (Table 1). The results showed that decolorization by T. hirsuta D7 occurred along with the growth process, as can be seen from the growth and decolorization rates, which were identical at 1.54 cm/day. Meanwhile, the two other fungal isolates, Ceriporia sp. BIOM3 and C. dendriticum WM01, had higher growth rates than decolorization rates.
Because the yield of LME production varies, not all WRF species are appropriate for mycoremediation. In this study, we examined the LME production of each fungal isolate, and the effect of guaiacol as an inducer on LME production was also evaluated. LMEs, especially laccase, are constitutive or inducible enzymes produced during secondary metabolism; their production depends on nutrients such as carbon and nitrogen and on inducers in the medium [23].
LME production by the three fungal isolates is shown in Figures 1-3. LMEs were produced at higher levels in the presence of guaiacol than in its absence. Other research also reported that guaiacol enhanced laccase production by Pleurotus ostreatus [24]. The highest laccase activity observed in T. hirsuta D7 was 22.6 U/L at five days of cultivation. However, manganese peroxidase (MnP) and lignin peroxidase (LiP) were only detected on the first day of cultivation, and their activities were not detected thereafter up to the end of cultivation (9 days) (Figure 1). By contrast, Ceriporia sp. BIOM 3 and C. dendriticum WM01 secreted both laccase and MnP. Ceriporia sp. BIOM 3 produced MnP with lower activity than laccase, and no LiP activity was detected (Figure 2). The MnP activity produced by C. dendriticum WM01 was proportional to the laccase activity and was optimal at five days of cultivation. LiP activity of C. dendriticum WM01 was only detected on the first day of cultivation (Figure 3).
WRF can be classified based on the type of LMEs they produce. In our study, the three isolates produced high levels of laccase and MnP, but LiP was poorly detected. Dao et al. [25] also reported that their isolates, Cerenna sp. isolate Lyc23 and Rigidoporus vinctus NZD-mf190, did not produce LiP in the extracellular medium. However, it remains uncertain whether our fungal collection can produce the LiP enzyme because, in this study, the fungi were only cultivated in a liquid medium. Another study reported that WRF secreted the LiP enzyme in a solid-state fermentation system or in a carbon-limited medium [25].
The dye decolorization ability of the LMEs produced by the three fungal isolates was tested against the anthraquinone synthetic dye RBBR (Figure 4). The study showed that T. hirsuta D7 decolorized 65% of 100 mg/L RBBR within 72 h, while Ceriporia sp. BIOM 3 and C. dendriticum WM01 exhibited 52% and 50% decolorization, respectively. Although two enzymes, laccase and MnP, were mainly detected in the liquid medium during decolorization, laccase seems to have the major role in the decolorization of RBBR dye. This can be inferred from the highest decolorization occurring in the culture of T. hirsuta D7, which showed noteworthy production of laccase alone, compared to Ceriporia sp. BIOM 3 and C. dendriticum WM01. Generally, laccase is the main enzyme for dye decolorization, followed by MnP and LiP [26].
Conclusions
The tropical fungal isolates, especially T. hirsuta D7, showed a positive correlation between growth rate and decolorization ability, both of which were 1.54 cm/day for this isolate. The addition of guaiacol as an inducer enhanced the production of LMEs.
Remarkable decolorization of RBBR dye, of as much as 65%, was found in the culture of T. hirsuta D7, which produced the highest laccase activity (22.6 U/L), indicating the role of laccase in the decolorization of RBBR dye. The study suggests that the LMEs, particularly laccase, produced by tropical fungal isolates can be used for the bioremediation of synthetic dye wastewater from textile industries. | 2021-10-21T15:17:37.720Z | 2021-09-09T00:00:00.000 | {
"year": 2021,
"sha1": "f2ef0080aa90359780366d2dc01e9bdc8d2c27f1",
"oa_license": "CCBY",
"oa_url": "http://jrk.fmipa.unand.ac.id/index.php/jrk/article/download/388/315",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4c8301e5e162ff3e820c3ad2693ed2bfae921e20",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
263777413 | pes2o/s2orc | v3-fos-license | Impact of a simulation-based education approach for health sciences: demo, debrief, and do
Background Skill-based practice (e.g., communication skills) is important for individuals to incorporate into students' learning and can be challenging in large classes. Simulation-based education (SBE) is a method where students can learn and practice skills in a safe environment to use in real world settings with assistance of peer coaching. The COVID-19 pandemic presented challenges to providing students with sufficient SBE. The purpose of this paper is to: a.) describe a SBE approach for health coaching referred to as “Demo, Debrief, and Do” (DDD), b.) discuss how this approach became important in COVID-19 classroom experiences, c.) describe the impact of DDD activity on students in a health sciences curriculum. DDD is a collaborative activity where graduate health coaching students demonstrate coaching skills, debrief their demonstration, and support undergraduate students to demonstrate (or do) their own coaching skills in a small virtual online setting. Methods Qualitative feedback from 121 undergraduate students enrolled in 3 sections of a behavior change strategies course and quantitative surveys to examine their confidence in applying the skills and overall satisfaction with DDD were gathered. Results The overall average confidence level following the lab was 31.7 (0–35). The average satisfaction level following the lab was 23.3 (0–25 range). The most common highlight of this DDD experience described was observing the coaching demonstration (i.e., demo), followed by the feedback (i.e., debrief), and the practice (i.e., do). Conclusion The (DDD) simulation approach fulfilled an educational need during the COVID 19 pandemic and filled a gap in offering SBE opportunities for both graduate and undergraduate students while learning effective client-communication skills health coaching delivery. Supplementary Information The online version contains supplementary material available at 10.1186/s12909-023-04655-w.
Background
Skill-based practice is important for individuals to utilize in health-related fields and may be critical to incorporate into students' learning. However, educating students to learn skill-based techniques can be challenging, especially in large classes, given the complexities of implementing hands-on practice with large groups. Most of the literature focused on large class sizes with skill-based techniques is in medical-related fields and generally incorporates "flipped classroom" techniques followed by the use of simulation-based experiences [1,2]. Simulation-based education (SBE) is another method by which students can learn and practice skills that they would use in real-world settings. SBE is a practice in which students undergo guided experiences where they take on various roles (client, health care professional, family member) in acting out a case study experience [3]. It may be used in situations where participants may have hands-on experience in an environment where learners may feel more comfortable practicing with reduced fear of making mistakes [4].
SBE activities have shown promise in increasing competencies in healthcare delivery in both undergraduate and postgraduate students [3]. In a recent review article [5], researchers determined that SBE included the following as best practices: simulation design and delivery (i.e., interactivity and repeated practice), resources (i.e., facilitator competency), and curriculum-related integration and planning (e.g., curriculum application, opportunities for practice). The authors further suggest that SBE should focus on these best practices [5].
The literature suggests that SBE using peer coaching may be beneficial for student learning. Ickes and McMullin [6] reported successful teaching outcomes from utilizing graduate students in the health coaching field. In their example, 15 students were enrolled in a graduate health promotion and behavior change course that focused on health coaching techniques. The health coaching students were paired up with participants in a campus-based physical activity course designed specifically for obese college students. As a result of fostering a coaching relationship with these undergraduates, the graduate students reported improvements in self-efficacy for health coaching skills, knowledge scores, and comfortability with the skills necessary to be a coach [6]. Although the literature has discussed the challenges for students in health sciences to transfer classroom knowledge into practice [7], peer coaching is one method to enhance peer learning and increase skill learning and professional development as a form of SBE. For the purpose of this paper, peer coaching is described as peers participating in a demonstration, observation, discussion and feedback experience to learn with and from one another. The use of peer coaches is well-aligned with social cognitive theory of learning and can help students to increase their self-efficacy [8] in implementing these skills [7].
Teaching and learning challenges posed by COVID-19 Pandemic
The COVID-19 pandemic presented new challenges to providing students with sufficient simulation-based education. The pandemic forced a quick pivot to online learning. This was a difficult transition for both educators and students, who had to adapt to teaching and learning in virtual environments [9,10]. Regarding university-level students, this transition posed a greater learning challenge because of the diverse populations, various student learning needs, and limited access to technology to facilitate successful online learning [9]. Furthermore, some of the students who felt most productive with face-to-face instruction and small group assignments may have felt a void in their overall learning experience. In several cases, students expressed needing real-time interactions with their professors and peers to assist with learning techniques [11]. A study by Adnan & Anwar [11] found that 42.9% of college students felt they struggled to complete group projects in distance learning formats. The need for real-time interactions during the transition to virtual learning environments posed the question of how to incorporate simulation-based education into curriculums.
Potential solutions
Researchers have examined potential solutions to educate students using SBE throughout the pandemic [12][13][14]. In Canada, a group of educators worked with master's in social work students to implement "Virtual Practice Fridays" [13]. This experiential learning technique consisted of splitting a large class into groups of 10 students by utilizing virtual breakout rooms. During the sessions, the students practiced the role of a social worker working with a client. After completing their role as the social worker, the students received feedback from both their faculty and peers [13]. In this case, the students processed case notes, reviewed recordings, and completed reflections on their client interactions. Similarly, in a medical school setting, Jeong and colleagues [12] implemented virtual peer teaching into their curriculum to work on skills related to patient education. In this example, teaching guides outlining what should be covered were created and implemented throughout each session. These guides were a way to maintain fidelity of the sessions. Jeong and colleagues [12] planned to continue administering these experiential techniques in their curriculum after returning to in-person learning, as they believed this style of teaching can benefit faculty, students, and peer teachers. Malone [14] also used a virtual platform to facilitate learning during the pandemic in their nurse residency program. The use of a virtual platform in this setting allowed for nursing students and faculty to interact, while the small group settings used for case study reviews provided enhanced opportunities for feedback and interactions from peers. Survey results emphasized the importance of virtual peer interactions, as 95% felt they were helpful. In multiple health science disciplines, utilizing small groups in a virtual setting allowed students to continue to progress in their knowledge while providing opportunities to connect with their peers and mentors.
The purpose of this paper is to: a.) describe a simulation-based education approach for health coaching referred to as "Demo, Debrief, and Do", b.) discuss how use of this approach became more important in COVID-19 classroom experiences, and c.) describe the impact of this Demo, Debrief, and Do activity on both the graduate and undergraduate students in a health sciences curriculum.
Health behavior science course context
The overall context for this study was an undergraduate Health Behavior Science course focused on teaching junior or senior undergraduate level students the application of behavior change strategies, often completed through case studies and practice scenarios. The graduate component incorporated graduate health coaching students within this institution's Health Coaching Certificate program. In Health Coaching training, emphasis is placed on individual and group-oriented coaching scenarios through personal practice and observation using simulation-based education approaches. As part of the training process, students practice creating a safe space for client interactions as well as debriefing a client-centered interaction with peers as well as clients [15,16].
The standards for training health coaches in the certificate program are set forth by the National Board for Health and Wellness Coaches (NBHWC). Health and Wellness Coaches work one on one or with groups of individuals pursuing improved overall health. This is achieved through the individual's partnership with a health coach, who may assist in the creation of self-directed health behavior changes to support sustainable and long-term health and wellness [17].
The health coaching skills included motivational interviewing (a collaborative style of communication between a practitioner and client), appreciative inquiry (a strength-based approach to creating lasting positive change), goal setting, and evoking, guiding, and supporting clients' decisions around their health behaviors to reduce the impact of chronic disease and improve their health and well-being [14,18,19].
Instructor challenge
The instructors of both the graduate and undergraduate courses were also challenged in teaching these health coaching skills in a large class. Due to the large class size, it was difficult for the instructor to gauge students' ability to practice the communication styles being taught while providing specific feedback in a timely fashion.
Rationale for graduate students working with undergraduate students
Lecture-based teaching alone does not offer the experiences and skills needed to prepare students for a career in health and wellness coaching [20]. Informal student feedback obtained after course completion indicated that the graduate students desired more opportunities for SBE learning experiences. To address the needs of the students, combining graduate and undergraduate students in this format, in which both student populations could obtain these hands-on learning opportunities, helped address the desires of the students as well as the challenges of instruction during COVID. This format allowed demonstration of high-quality coaching, time to debrief the demonstration with the instructor and peers, and time to practice coaching in front of the instructor and peers. Additionally, the format also supported the opportunity to practice proficiency in the necessary coaching skills to work with actual clients, as it is suggested to practice immediately after observing, practice often, and practice in different modalities to adequately refine skill sets, all in a controlled environment [3,21,22].
Rationale for undergraduate students
Initially, it was difficult for the undergraduate students to gain a clear understanding of the micro-skills necessary for developing health coaching competencies. Ickes & McMullin [5] note the complexities of teaching health coaching skill sets while ensuring regular practice and feedback from experienced health coaches. The authors highlight specifically the skills of "fostering autonomy, expressing empathy, intrinsically motivating individuals, and suggesting strategies to improve self-efficacy" and acknowledge that they are developed over time and with generous opportunity to practice [6]. Previously in this course, videos, class activities, and practice with peers were the most common strategies used by instructors to teach undergraduates coaching skills. Unfortunately, the video demonstrations were long, which made it difficult for students to focus on the sections needed to learn the skills. The large number of students enrolled in the course made it difficult for them to experience personalized guided practice, feedback, clarification, and discussion about the variety of health coaching scenarios they were practicing. For some individuals, group work can be intimidating when practicing skills while experienced faculty and peers are present. To increase learning proficiency and address these concerns, the instructors brainstormed ideas and decided to have health coaching graduate students demonstrate the skills for undergraduates, as well as facilitate class discussions and SBE in smaller groups. It was felt this approach would support a more relaxed collaborative environment, encourage greater engagement, and facilitate comfort in asking questions.
"Demo, Debrief and Do": Description, development, and implementation
The "Demo, Debrief and Do" is a collaborative activity in which graduate health coaching students demonstrate coaching skills, debrief their demonstration with the undergraduate students within the group, and support the undergraduate students as they work in pairs to demonstrate (or do) their own version of the coaching skills in front of the small group. Through the graduate coaching demonstration, the undergraduate students observe a live interaction between a coach and client. The graduate student coaches may pause the demonstration to debrief, in the moment, the skills utilized during the coaching conversation. They additionally debrief after the conclusion of the demonstration, which allows continuous, rich discussion about the interaction as well as open conversation about strategies (tips and tricks) to support undergraduate learning and engagement. Finally, when the undergraduate students take turns as the coach and client, immediate feedback and opportunities to "re-coach" are offered in the moment, which supports improved individual and group learning. See Table 1 for a thorough description of Demo, Debrief, and Do.
The development of the Demo, Debrief, and Do approach was an iterative process. Initially, in the Fall of 2019, the technique started with two graduate coaching students visiting the undergraduate classes and demonstrating a live coaching interaction on a few occasions during the semester. At that point, there was not a specific case study assignment. This was primarily a demonstration of a general role-played coaching interaction with a bit of engagement from students practicing the skills demonstrated and participating in a question/answer session. Therefore, this iteration of Demo, Debrief, and Do was not structured and did not include demonstrating a specific set of coaching skills from beginning to end. One of the themes from the feedback obtained from both the undergraduate and graduate students during this experience was that they wanted more similar simulation-based education opportunities. As COVID-19 restrictions related to in-person learning continued to be in place for the Fall of 2020, offering an opportunity for both groups began to surface as a viable solution to provide more simulation-based education class sessions while also filling the need for graduate students to get sufficient training hours in health coaching. In the fall of 2020, the first small group sessions with graduate student coaches occurred virtually in the undergraduate course. During the class session, graduate students demonstrated a behavior change case study for the full class designed to showcase specific coaching skills and then discussed the demonstration in small groups (using breakout rooms) with the undergraduate students in a safe environment to help reduce the fear of completing the activity in front of a large group. This specific case study was used as a course skill demonstration assignment in which the undergraduate students created their own video demonstrating the skills that were taught and practiced during the session.
Research questions
The research component of this paper focuses on addressing the following questions: 1. What is the impact of the Demo, Debrief, and Do simulation-based education approach on undergraduate students' learning, satisfaction, and confidence in the microskills taught? 2. What is the impact of leading the SBE experiential activity on health coaching graduate students?
Design
The study used a convergent mixed methods design. A parallel cross-sectional survey approach was used to gather qualitative feedback from undergraduate students on the lab experience and quantitative responses regarding confidence in applying the skills taught and overall satisfaction with the experience. In addition, qualitative feedback was informally gathered from graduate health coaching students and instructors.
Students are taught to use these micro-skills (the OARS skills of open questions, affirmations, reflections, and summarization, described with Table 1), along with other concepts and strategies from MI and other health behavior change theories, in this behavioral science course. OARS micro-skills were a main component included in this DDD simulation. A detailed description of Motivational Interviewing is beyond the scope of this paper. A variety of resources, textbooks, and research papers are available describing this approach and the evidence supporting it [25,26].
This behavior change course is a required course for all students in the Health Behavior Science Bachelor of Science degree program at a mid-Atlantic area University.
Participants were junior or senior year students enrolled in three sections of this behavior change strategies course delivered from Fall 2020 to Spring 2021. In total, 121 undergraduate students were enrolled in the three sections, and there were 78 responses to surveys and 113 responses to open-ended questions.
All procedures were reviewed and approved by the institution's Institutional Review Board; and the research was exempt from federal policy requirements for the protection of human subjects.
Evaluation methods
Undergraduate
Satisfaction and self-confidence quantitative measure
The "Student Satisfaction and Self-Confidence in Learning Scale" was used to examine satisfaction and self-confidence regarding the DDD lab experience [27,28]. This is a 13-item questionnaire with two dimensions, i.e., satisfaction with current learning (5 items; range = 0-25) and self-confidence in learning (8 items; range = 0-40). Five items were slightly modified to reflect the students' satisfaction specific to the demo, debrief and do labs, such as replacing "medical curriculum" with "health coaching skill set". Items pertaining to satisfaction with learning included "The methods …were helpful and effective." and "I enjoyed how the instructor taught the simulation." Items referring to self-confidence in learning included "I know how to use simulation activities to learn critical aspects of these skills." and "I am confident that I am mastering the content of the simulation activities…". A 5-point Likert response scale is used, with responses from "strongly disagree" (scored 0) to "strongly agree" (scored 5). An average score and standard deviation were calculated for each of the two dimensions, where higher scores indicate higher satisfaction and higher self-confidence in learning. This measure was used with the two sections offered in the Fall 2020 semester.
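For readers who want to see the arithmetic behind the two subscale scores, the following is a minimal sketch, not the authors' analysis code. It assumes each of the 13 items is scored 0-5 as described above, with the first five items forming the satisfaction subscale and the remaining eight the self-confidence subscale; the respondent data are hypothetical.

```python
# Minimal sketch (not the authors' analysis code): scoring the two subscales of the
# Student Satisfaction and Self-Confidence in Learning Scale across respondents.
from statistics import mean, stdev

def score_subscales(responses):
    """responses: list of 13-item lists of integer item scores (0-5), one list per respondent."""
    satisfaction_totals = [sum(r[:5]) for r in responses]    # possible range 0-25
    confidence_totals = [sum(r[5:13]) for r in responses]    # possible range 0-40
    return {
        "satisfaction": (mean(satisfaction_totals), stdev(satisfaction_totals)),
        "self_confidence": (mean(confidence_totals), stdev(confidence_totals)),
    }

# Example with three hypothetical respondents
demo = [
    [5, 4, 5, 5, 4] + [4] * 8,
    [5, 5, 5, 5, 5] + [5] * 8,
    [4, 4, 4, 5, 4] + [3, 4, 4, 4, 5, 4, 4, 4],
]
print(score_subscales(demo))  # (mean, SD) per dimension, as reported in the Results
```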
Qualitative survey questions
The qualitative survey questions were developed by the researchers to address the question: "What is the impact of the Demo, Debrief, and Do simulation-based education approach on undergraduate students' learning?" A qualitative survey (Supplemental File 1) was completed after the DDD simulation activity that asked the following questions: (a) What was the highlight of working with the graduate students for you? and (b) How did your coaching improve by working with the graduate students?
Qualitative data coding and summarization
The qualitative data from the two survey questions were reviewed multiple times by the coding team, including two graduate students and two faculty, to identify codes. A conventional inductive content analysis approach guided this process [29]. The team met over the course of several weeks to consolidate and streamline the coding process. First, each team member independently reviewed the student responses and highlighted themes. After each member finalized themes independently, the team met and compared their themes. Team members agreed upon similar themes and used these to finalize codes and create a codebook. The codebook was used to guide summative content analysis of student responses and report findings. In total, five team members used the codebook to code the responses from each of the two qualitative survey questions for the first two simultaneously delivered sections of the course. For the third, subsequent course section, only two coders reviewed the qualitative questions to code the data since the codebook was previously developed. A third coder participated, when needed, to resolve inconsistencies between the first two coders for this course section. All coders were paper authors. Themes are listed and described in Table 2.
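The coding workflow above was carried out by human coders; purely as an illustration of the agree-then-tie-break logic and the frequency tallies reported in Table 2, a small sketch follows. The theme labels and responses below are hypothetical placeholders, not data from the study.

```python
# Illustrative sketch only: how two coders' codebook assignments could be reconciled,
# with a third coder resolving disagreements, and theme frequencies tallied.
from collections import Counter

def reconcile(coder_a, coder_b, coder_c=None):
    """Return the agreed code for one response; fall back to a third coder on disagreement."""
    if coder_a == coder_b:
        return coder_a
    return coder_c  # may be None if no third coder reviewed the response

# (response id, coder A, coder B, coder C) -- hypothetical assignments
assignments = [
    ("Response 1", "demonstration", "demonstration", None),
    ("Response 2", "feedback", "practice", "feedback"),
    ("Response 3", "debrief", "debrief", None),
]
final_codes = [reconcile(a, b, c) for _, a, b, c in assignments]
print(Counter(final_codes))  # theme frequencies, analogous to those summarised in Table 2
```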
Health coaching graduate students
The qualitative data assessing the graduate student experience was obtained through email. Two reflection questions were sent to each graduate student, and they submitted their responses through email. The questions included: 1) What was the highlight of working with the undergraduates for you?; 2) How did your coaching improve by working with the undergraduates? Completion of the graduate student survey was optional. A total of 4 of 8 graduate students completed the follow-up reflection questions. Once the students submitted their responses, a content analysis was conducted by 2 team members; themes were compared, inconsistencies were discussed, and final themes/notable quotes were determined. The team members chose a unique quote from each graduate respondent to represent the breadth of themes. Furthermore, the team members omitted responses from the graduate students that were repetitive (See Table 3).
Student satisfaction and self-confidence in learning scale
The overall average confidence level following the lab was 31.7 (SD = 2.8; possible scale range = 0-35). The average satisfaction level following the lab was 23.27 (SD = 2.3; possible scale range = 0-25). Overall, these composite scale levels are high, indicating that students felt satisfied with the experience and confident in their learning (see Table 4 for a detailed summary of the Student Satisfaction and Self-Confidence in Learning Scale).
Qualitative feedback
The theme titles, descriptions, frequencies and sample quotes are shown in Table 2. As can be seen in this table, regarding the highlight of the student experience, the most common highlight described was observing the coaching demonstration. A representative quote is "Just observing their approach helped me learn and see different ways of health coaching that I can incorporate myself." The second and third most common themes were "debrief" and "do" and were reported with similar frequency (See Fig. 1). The debrief comments focused on the feedback students received from coaches on their own role-played coaching interaction. A representative quote is "Highlight to working with the practicum students was getting really personalized feedback when practicing doing the coaching." The "do" feedback focused on students' experiences with completing their own role-played interaction. A representative quote is "Being able to practice coaching in front of them, and then they were able to guide us on the right path." Other common themes included their awareness of the health coaching approach more generally and getting general information (e.g., tips, advice, guidance). In response to the question about the areas they felt improved based on the DDD experience, the most common skill-based themes were learning to be "client-centered" and feeling more confident in their skills. Building rapport and maintaining a comfortable flow during the interaction were also common themes. Regarding the themes associated with the mechanisms that impacted their improvement, the most frequently reported theme was receiving feedback or guidance from the coaches. The other common themes were practice, demonstration by the coaches, and the debrief (e.g., getting advice, tips, suggestions on how to reframe questions using open-ended questions, creating appropriate goals, using OARS techniques), with these representing similar frequencies.
Graduate student qualitative feedback
The graduate students reported various highlights and improvements associated with the Demo, Debrief, and Do (see Table 3). A quote provided by a graduate student on the highlight of the experience includes, "I would see specific students take our feedback and improve on their delivery in the next DDD. I was impressed by the questions they asked and how most of the students were engaged in the process." Along with this, the graduate students indicated having the opportunity to gain real experience and constructive feedback from the students and professors was a highlight. An example quote of how the experience improved their coaching includes, "The undergraduates provided me the opportunity to reflect on my style of coaching, and to acknowledge the different coaching styles that different people have, and to find ways to accommodate that in different settings/with different clients." In addition, becoming more secure and confident with the entire coaching process helped to improve their coaching. Although the Demo, Debrief, and Do were designed to provide undergraduate students with the opportunity to gain "real-life" coaching experiences, it is evident that there were beneficial experiences for graduate students as well.
Two further themes from Table 2:
Increased Confidence: the student discussed improvements in their confidence (frequency: 6). "They are also students in the master's program, so they understood what it was like to be new at this and were great at giving us ideas and helping us build our confidence when we practiced the coaching sessions as well."
Small Group Interaction: the student discussed smaller group interactions in some way being a highlight in comparison to the large class (frequency: 1). "With the smaller group, I definitely felt better about talking and having someone on one help with the health coaching, so I really loved being in that small group environment with that extra help."
Discussion
The main objectives of this paper were to describe a SBE approach for health coaching and the need for this teaching method during COVID-19, and to describe the impact this technique had on both the undergraduate and graduate students enrolled in a health-related curriculum. This study's contributions to the literature include highlighting an SBE approach that was successful in its implementation and had a positive impact on the acquired learning in both graduate and undergraduate student populations.
Table 3 excerpts (graduate student quotes). #1: What was the highlight of working with the undergraduate students for you? • "I would see specific students take our feedback and improve on their delivery in the next DDD. I was impressed by the questions they asked and how most of the students were engaged in the process." • "Getting a chance to gain real experience and receive constructive feedback from both clients and professor." #2: How did your coaching improve by working with the undergraduate students? • "The undergraduates provided me the opportunity to reflect on my style of coaching, and to acknowledge the different coaching styles that different people have, and to find ways to accommodate that in different settings/with different clients." • "It helped me feel more secure/confident with the whole coaching process. Working with them also helped give room to figure out my coaching style."
Table 4 excerpts (Subscale or Question; Mean (SD) [range]): "The teaching methods used in this simulation were helpful and effective." (Satisfaction Question) Mean: 4.75 ± 0.45; Range: 3-5. "The simulation provided me with a variety of learning materials and activities to promote my learning in the coaching skill sets." (Satisfaction Question)
The Demo, Debrief and Do (DDD) simulation approach fulfilled an educational need during the COVID-19 pandemic. It also continues to fill a gap in offering simulation-based education opportunities for both graduate and undergraduate students while learning effective client-communication skills for best practice in health coaching delivery in a safe environment. Additionally, students learned how to navigate and communicate within a virtual session, a valuable skill for our future leaders in health care. Overall, we found that this SBE practice had benefits across all participants in the class: the faculty, undergraduate students, and graduate students. This is similar to the findings of Jeong and colleagues (2020) in their implementation of virtual peer teaching related to medical patient education, as their program benefited the faculty, students and the peer teachers.
Impact on undergraduate students
After implementation of the Demo, Debrief, and Do simulation approach, the instructors found that the DDD was highly effective and valuable in teaching benchmark skills, visible in the higher-quality coaching showcased in the videos created by the students after the learning experiences. In our study, the undergraduate students indicated, with highest frequency, that the feedback and tips from the graduate students during the debriefing section of the DDD were the highlight of the sessions. They were able to apply this feedback, as well as feedback from their peers, to their videos demonstrating their skills. Kourgiantakis and Lee [13] also utilized feedback in their virtual practice Fridays, and their students commented that these techniques assisted them in improving their social worker skills. In our study, the undergraduate students indicated that their coaching skills improved and they gained more confidence in their abilities to effectively communicate with individuals related to health behavior change. This is evidenced by the following quote: "working with the health coaches provided me with a demonstration of how the video should run with my partner for the case study. Practicing with my partner in front of the class and coaches allowed me to overcome my fear of acting as a coach. Seeing the other teams perform the same scenario and receive advice also made me feel more comfortable and confident".
Instructors also noticed improved health coaching skill sets compared to past undergraduate cohorts in similar video assignments. The following quote exemplifies how the DDD was a supportive mechanism in their learning and application of targeted health coaching skills: "My coaching improved by watching their session, discussing what they did right and wrong, and then by practicing it with other people in my group. In the beginning of the zoom call, the health coaches played a video of them in a session with each other, then we all discussed what the health coach did poorly and what the health coach did well, then we applied those observations and practiced with each other". Overall, demonstrating, debriefing and practicing with the graduate students was valuable, as it allowed the undergraduate students an opportunity to "try out" their assignment in a safe environment with their peers and receive real-time feedback before attempting to create their own graded video.
Furthermore, the smaller groups may have increased overall engagement with the material. The graduate student to undergraduate student ratio was 2 to 10. Additionally, the undergraduate students reported a greater understanding of the process of health coaching in this format. The in-class instruction offered a foundation of coaching skills focused on segments of a coaching session, and the DDD offered an example of putting the segments together in real time, as demonstrated by the following quote: "The highlight of working with the graduate students was getting student interaction with individuals with students in the graduate program and getting their perspective on how they would typically administer a health coaching session. It was nice to learn from other students and get that hands-on experience of what a health coaching session entails and how we can become better with learning the proper tools to be a successful coach." The results from the Student Satisfaction and Self-Confidence in Learning survey completed by the students after each DDD experience captured favorable responses indicating that the students learned, practiced and refined the intended skills and strategies with greater confidence, competence and satisfaction. Additionally, the survey results indicate that they were satisfied with the SBE method in which the instructors taught the intended skills. SBE has been shown in the literature to promote active learning and improve self-confidence by practicing, in real time, skills and strategies to support client health behavior choices and changes [12][13][14].
Impact on graduate students
The graduate coaches also showed improvements in specific areas, such as feeling comfortable with discussing certain topics, the use of effective nonverbal skills, and satisfaction with active listening skills. An example of a graduate student quote is, "one of the skills I worked on improving for myself was my use of affirmations. I find they are the hardest part of OARS to use well, so it was interesting to hear the student's ideas on how to use affirmations and develop some that naturally integrated into the conversation." The results of this program may indicate that this learning technique can be adapted and used in a variety of educational settings.
Strengths and limitations
Strengths of this research include using mixed methods approaches to gather both quantitative and qualitative information. An additional strength was the inclusion of multiple perspectives in examining the DDD approach, including undergraduate student, graduate student, and instructor perspectives. Limitations of the DDD include student feedback bias, as students may report higher satisfaction or a more positive experience because the assignments and surveys were not anonymous and were part of a graded assignment. Also, the information obtained through surveys and reflections captures one cohort of students during one semester of the undergraduate course. Additionally, there are only a few academic programs that offer health coaching training at a graduate level and therefore fewer students possessing the skill set needed to facilitate a DDD experience. To accurately assess the effectiveness of the DDD, additional cohorts of students (graduate and undergraduate) over the course of several semesters would offer more data to measure this SBE teaching method.
Lessons learned for practice of innovative higher education
We found that this SBE experience, the DDD, was easy to implement via a virtual platform, Zoom. During COVID, the virtual platform was needed, and the sessions continued to be delivered remotely afterwards. This platform allowed the graduate students to participate even though some of them were not close by geographically, while removing the barrier of time constraints. This type of teaching can be continued as part of the course instruction after COVID-19 restrictions in both the graduate and undergraduate curriculum.
Even though the SBE experience was conducted virtually, the undergraduate students responded positively to having a graduate student instead of an instructor facilitate the group. These small virtual groups facilitated by the graduate students may have felt more informal and allowed the undergraduate students to feel less intimidated while participating in this safe environment. One student explains: "I was able to practice my skills in a smaller breakout room setting with the coaches, instead of a regular, big class session. This helped me practice my skills while also increasing my confidence levels and learning along the way. Also, being able to view the coaches' demonstration as well as the peer partner examples, helped to give me better ideas of the possible responses I could encounter and how I would professionally respond to them. For example, how to handle a client that is hesitant to get started or maybe be really nervous to share their concerns with me". The instructors learned through individual undergraduate student reflections and satisfaction of learning surveys that students found value in the DDD experience. Additionally, through standard university course evaluation, several students added in comment sections that the DDD was a highlight of their learning experience. Instructors also gleaned through qualitative feedback from graduate students that first learning the skills through their own coursework, then demonstrating the skills in front of the undergraduate students, and lastly offering supportive feedback to the undergraduate students during the session aided in their own development and growth.
The DDD has the potential to be implemented not only in teaching health behavioral change skills, but also in other healthcare specialties where SBE is the gold standard. Further support for the efficacy of a DDD-style format comes from evidence of this technique being successfully used with graduate students teaching undergraduate students nutrition concepts; specifically, a dietetics program incorporating health coaching skills [30]. Researchers found an increase in overall knowledge of health coaching in both the graduate student health coaches and the undergraduate participants.
Future research
Future research may include investigating the implementation of this technique as supplemental modules that can be utilized at other institutions to support best practices in health behavior change curriculum. Furthermore, there is potential for the DDD to be modified and adapted in various academic settings and formats to support best practice for experiential learning of client-centered skill sets needed to work with individuals in a wide variety of healthcare settings and academic fields. With the development of technology and continued advancement of educational initiatives, this type of teaching technique has the potential to be adapted to meet the needs of students pursuing client-centered communication skills in various learning environments and diverse healthcare settings.
Fig. 1
Fig. 1 Word cloud of three major themes: demo, debrief, and do
Table 1
Demo, debrief, and do description
to debrief both during the demonstration and again at the conclusion of the demonstration
• Undergraduate students prepare to demonstrate micro-skills with peers by asking questions and taking notes
Do (Undergraduate Students)
• Undergraduate students in peer pairs (i.e., client and coach roles) practice micro-skills in front of graduate students and peers
• Sessions may have "starts and pauses" to allow students the opportunity to "re-coach" and gather real-time feedback and suggestions. Either graduate or undergraduate students may pause the session for clarification or suggestions to examine their coaching.
"OARS" is an acronym for a set of micro-skills that are central to MI and represent open questions, affirmations, reflections, and summarization. In brief, open questions are those that "draw out and explore the person's experiences, perspectives, and ideas" (e.g., How would you like things to be different?) and avoid closed questions that obtain limited (e.g., How often do you _____?) or one-word (e.g., yes/no) responses; affirmations are used to acknowledge a person's "strengths, efforts, and past successes" (e.g., You were really resourceful in your efforts to quit smoking in the past) and "help to build the person's hope and confidence in their ability to change" (e.g., You have shown determination and skillful problem solving so far; these will help you reach your new healthy lifestyle goals); reflections "are based on careful listening and trying to understand what the person is saying, by repeating, rephrasing or offering a deeper guess about what the person is trying to communicate" (e.g., It sounds like you…); and summarization may be used throughout and at the end of an interaction (e.g., You have shared a lot of important information, let me summarize and see if I understand so far…) and "ensures shared understanding and reinforces key points made by the client" [24].
Table 2
Undergraduate student qualitative feedback about the demo, debrief, and do simulation
Table 3
Graduate student qualitative feedback about the demo, debrief, and do simulation
An integrative review of primary health care nurses' mental health knowledge gaps and learning needs
Background: The global COVID-19 pandemic has escalated the prevalence of mental illness in the community. While specialist mental health nurses have advanced training and skills in mental health care, supporting mental health is a key role for all nurses. As front-line health care professionals, primary health care (PHC) nurses need to be prepared and confident in managing mental health issues. Aim: To critically analyse and synthesise international literature about the knowledge gaps and learning needs of PHC nurses in providing mental health care. Design and methods: An integrative review. The quality of papers was assessed using the Mixed Methods Appraisal Tool. Data were extracted into a summary table and analysed using narrative analysis. Data sources: CINAHL, Ovid MEDLINE, Web of Science and EBSCO electronic databases were searched between 1999 and 2019. Papers were included if they reported original research which explored mental health education/training of nurses working in PHC. Findings: Of the 652 papers identified, 13 met the inclusion criteria. Four themes were identified: preparedness; addressing knowledge gaps; education programs; and facilitators and barriers. Discussion: Despite increasing integration of physical and mental health management in PHC, there is limited evidence relating to knowledge gaps and skills development of PHC nurses or their preparedness to provide mental health care. Conclusion: Findings from this review, together with the global increase in mental illness in communities arising from COVID-19, highlight the need for PHC nurses to identify their mental health learning needs and engage in education to prepare them to meet rising service demands.
Introduction
Mental health is defined by the World Health Organization (2018) as "a state of well-being in which an individual realises his or her own abilities, can cope with the normal stresses of life, can work productively and is able to make a contribution to his or her community." In contrast, mental illness is a health challenge that significantly impacts how someone feels, thinks, behaves, and interacts with those around them. A diagnosis of mental illness is made according to standardised criteria, and mental illnesses span a broad spectrum of disorders ranging in severity and duration (American Psychiatric Association, 2013; World Health Organization, 2019). Internationally, the true prevalence of mental health disorders remains poorly understood. However, around 1 in 7 people globally are estimated to have one or more mental or substance abuse disorders (Institute of Health Metrics & Evaluation [IHME], 2018). In 2017, it was estimated that 1 in 5 (20%) adult Australians experienced a common mental illness in the previous 12 months (Australian Institute of Health and Welfare, 2020). Of those who sought medical assistance, some 70.8% consulted a general practitioner (Australian Institute of Health and Welfare, 2019). Increasingly, evidence of the co-existence of physical and mental health issues suggests that people presenting with physical health issues may also have associated mental health issues (Das, Naylor, & Majeed, 2016). Social isolation, job losses, health concerns, and increased drug and alcohol consumption during the COVID-19 pandemic have further escalated levels of distress, fear and anxiety within communities (Bäuerle et al., 2020; Rahman et al., 2020; Usher, Durkin, Gyamfi, Bhullar, & Jackson, 2020).
Despite the complex interrelationship between physical disorders and mental illness, physical and mental health services have historically largely functioned independently from one another ( Anderson, 2019 ; Australian Institute of Health and Welfare, 2020 ). Internationally, however, the move towards models of care that deliver integrated physical and mental health care in PHC settings are reporting positive outcomes ( Das et al., 2016 ;Perkins, 2015 ). Additionally, the complex co-existence of both physical and mental health concerns suggests that some people may not seek help for their mental health concerns but may present with physical health issues. At such encounters opportunistic assessment and management of mental health can provide important early intervention. This means that the responsibility for supporting mental health needs to be shared across the multidisciplinary health workforce, requiring skilled clinicians to deliver these services in diverse clinical settings ( Shah et al., 2020 ).
While nurses are the biggest group of health professionals providing primary mental health care, they are often inadequately prepared for this role ( World Health Organization, 2007 ). The PHC nurse's scope of practice varies internationally and is shaped largely by the nature and structure of local PHC services, service funding, and interdisciplinary collaboration ( Halcomb, McInnes, Patterson, & Moxham, 2019 ;Shah et al., 2020 ). Further complexities in the provision of primary mental health care are attributed to the variable qualifications, skill level, and education of PHC nurses ( Freund et al., 2015 ;Halcomb, Stephens, Ashley, Foley, & Bryce, 2016 ). Additionally, specialist mental health expertise is not generally required in most PHC nursing roles.
The PHC nurse's role in the prevention and management of chronic health conditions is well understood. Despite this, there is little empirical evidence about the PHC nurse's knowledge or learning needs in providing primary mental health care ( Lee & Knight, 2006 ). This is despite several workforce studies identifying a lack of confidence and/or competence about PHC nurses' mental health education and practice ( Crompton & Hardy, 2018 ;Secker, Pidd, & Parham, 1999 ). However, a recent systematic review of the evidence reports a trend towards improved outcomes when PHC nurses are prepared with the knowledge and education to meet community mental health care needs ( Halcomb et al., 2019 ).
The prevalence of mental illness in the community, and the recent impact of the COVID-19 pandemic on the burden of mental illness, will require a PHC nursing workforce with the skills to undertake mental health screening appropriate to their roles. The growing burden of mental illness has created awareness of the urgent need to assess PHC nurses' mental health knowledge and learning requirements to meet evolving community needs ( Halcomb et al., 2019 ). It is timely, therefore, to review the literature to address this gap through critically analysing and synthesising the international literature around the knowledge gaps and learning needs of PHC nurses in providing mental health care.
Methods
An integrative review method, as described by Whittemore and Knafl (2005), guided the synthesis and critical review of the empirical literature. Integrative reviews are a robust method to synthesise heterogeneous literature so that a comprehensive understanding of the phenomenon is achieved.
Search strategy
CINAHL, Ovid MEDLINE, Web of Science and EBSCO electronic databases were searched for relevant literature, using keywords as described in Fig. 1 . The reference lists of identified papers were also examined for additional papers.
This search sought papers published between 1999 and 2019 in the English language. This date range was chosen based on the rapidly changing environment of PHC and changes to nursing education/professional development in many countries over time that would make older literature less consistent with more contemporary trends. Papers were included if they reported original research around the learning needs of PHC nurses about mental health. Papers that focussed on PHC nurse education needs in general, or nurses with specialist mental health qualifications were excluded. Papers were also excluded if findings included other PHC professionals but where data about PHC nurses could not be extrapolated.
The initial database search identified 652 papers (Fig. 2) that were imported into Endnote X8 (Clarivate, 2016). Duplicates and irrelevant papers were removed (n = 487). The titles and abstracts of the remaining papers (n = 165) were assessed against the inclusion criteria. Two authors (SM and EH) screened the full text of the remaining papers (n = 34) and excluded a further 21 papers which did not meet the inclusion criteria. All authors reached agreement about the final 13 included studies.
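As a simple consistency check on the screening flow described above, the following sketch (not part of the original review) re-traces the reported counts; the variable names are ours.

```python
# Re-trace the PRISMA-style screening counts reported in the text.
identified = 652
removed_duplicates_irrelevant = 487
title_abstract_screened = identified - removed_duplicates_irrelevant  # 165 papers
full_text_screened = 34          # stated in the text
excluded_at_full_text = 21
included = full_text_screened - excluded_at_full_text                 # 13 papers

assert title_abstract_screened == 165
assert included == 13
print(f"Identified: {identified}, title/abstract screened: {title_abstract_screened}, "
      f"full text screened: {full_text_screened}, included: {included}")
```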
Data abstraction and synthesis
Data from each paper were extracted into a summary table by one author (SM) and confirmed by all authors ( Table 1 ). Given the significant heterogeneity of the included papers, thematic analysis provided narrative synthesis of the data ( Braun & Clarke, 2006 ;Whittemore & Knafl, 2005 ). One researcher (SM) conducted the initial synthesis and identified preliminary themes, with all researchers contributing to the development of the themes until consensus was reached.
Quality appraisal
Two researchers (SM & AK) independently assessed the quality of included papers using the Mixed Methods Appraisal Tool (MMAT) ( Pluye & Hong, 2014 ). The MMAT is a 19-item checklist suitable for the quality appraisal of qualitative, quantitative and mixed methods research ( Pluye & Hong, 2014 ). This tool has been well evaluated and its validity reported in the health sciences ( Pluye, Gagnon, Griffiths, & Johnson-Lafleur, 2009 ). Only minor quality issues were identified by this appraisal, for example, confounding variables and sampling methods were vague in two papers ( Lee & Knight, 2006 ;Naji et al., 2004 ). Given the limited literature and minor quality issues, all papers were included in the review ( Whittemore & Knafl, 2005 ).
Prince and Nelson (2011) NZ. To describe GPNs' MH education needs and to explore their involvement with patients with MH concerns.
GPNs Survey
GPNs are caring for patients on a daily to weekly basis with anxiety and depression. Low use of screening and diagnostic tools (37%, n = 19). Confidence in caring generally for MH patients was average (mean 2.8 ± SD 0.90, range 1-4). The GPNs perform a variety of MH interventions such as counselling and advice on medication and have minimal confidence in their skill level. 78% (n = 41) of GPNs knew how to access specialist services. However, only 24% (n = 12) knew of a process to follow when accessing services and there appears to be no standardisation of this process. Only 82% (n = 43) of GPN participants would inform the GP if concerned about the MH of a patient. GPNs expressed learning needs included education on MH conditions including suicidal ideation, all types of depression and bipolar disorder, and of therapies such as cognitive behavioural therapy and family therapy. Secker et al. (1999) UK To describe the MH education needs of PHC nurses.
Focus groups and interviews
Consistency between groups that a locally focussed approach required for MH education. Identified education required in MH awareness, safe working practices, management of personal/professional and role boundaries, cultural issues, information on services, counselling skills and PN depression. Structured format preferred for delivery. Interdisciplinary supervision and team support required Secker et al. (2000) UK To describe the issues arising from a training needs assessment study relating to MH, conducted on non-specialist general p ractice staff.
30 GPNs, school nurses and district nurses
Focus groups and interviews
GPNs felt that there were unmet needs among their patients for MH care and that their MH workload was increasing. MH problems were rarely the formal or presenting reason for the GPNs' involvement with patients, largely because MH work was rarely a recognised aspect of their role. GPNs did not undertake tasks such as monitoring patients' psychiatric medication. GPNs encountered a range of 'less serious' problems (eg bereavement, dementia) in patients who came to them for general nursing. Several GPNs stated that patients would choose to talk openly with a nurse rather than with their GP, particularly female patients whose GP was male. GPNs felt they were working on the basis of instinct informed by experience in judging what they could deal with and what should be referred on.
Waidman et al. ( continued on next page ) Respondents considered 20% (mean value) of patients they had seen had MH problems; 74% (n = 161) of Registered Nurse participants had not attended any MH courses. 60% (n = 130) of nursing staff had been asked about depression and antidepressant treatment by patients. The most common intervention that staff had delivered was bereavement counselling. If a worthwhile MH course were to become available to them, 50% (n = 108) of nurses responded that they would definitely attend, whilst 42% (n = 91) would possibly attend. Improved detection of MH problems is the most favoured area for education, with additional knowledge and skills for anxiety, suicide and crisis management also considered to be important. Post-training scores were very high and some demonstrated ceiling effects. All nine items on the questionnaires showed highly significant improvements for GPNs (all p < 0.0005). Following attendance, GPNs were positive and wanted to take on the role of caring for people with SMI. The course provided an opportunity for MH nurses to support GPN colleagues and make links with primary care. Lee and Knight, (2006) UK To investigate district nurses' involvement with MH issues and to explore their perceptions of education needs.
district nurses Survey
Bereavement counselling (55%, n = 26) was the main intervention that district nurses were involved with in practice, followed by anxiety management (28%, n = 13), problem solving (23%, n = 11) and alcohol advice (23%, n = 11). 28% (n = 13) of the sample reported no involvement in MH interventions. Participants rated recognition of signs/symptoms of mental disorders (96%, n = 44) anxiety management (85%, n = 39) and pharmacological treatment of depression (83%. N = 38) as priority education needs. District nurses were most likely to be involved with social workers and, to a lesser extent, community psychiatric nurses (CPN), in care of patients with MH problems. District nurses were most likely to direct a referral is the GP followed by the CPN, and their own manager.
Maconick et al. The number of referrals from primary care nurses to the MH nurse decreased following the training program. Referrals received from training participants after training were of high quality and much more considered than before training. The implementation of this model of training in a PHC clinic was well received. Increased confidence in PHC nurses completing program. MH nurses as tutors sometimes lacked confidence and authority in teaching. Identified need for improved knowledge of MH illness, assessment and referral services. Over 82% (n = 320) of GPNs have responsibilities for aspects of MH and wellbeing where they have not had training. 98% (n = 382) of GPNs would like to attend a relevant course in MH. GPNs expressed preference for mixture of face-to-face training in a classroom environment and e-learning (59%, n = 230), as opposed to teaching in the workplace (19%, n = 74) or e-learning only (21%, n = 82). Several barriers to participation identified, with 34% (n = 132) noting that gaining agreement from employers presented the greatest hurdle.
Nine (69%) papers specifically explored the learning needs of PHC nurses. The remaining four (30.7%) papers described the delivery and/or outcomes of mental health training programs for PHC nurses (Table 1). Four themes were identified across included papers, namely: preparedness; addressing knowledge gaps; education programs; and facilitators and barriers.
Several papers explored PHC nurses' preparedness to screen for mental health issues, concluding that many PHC nurses lacked the education, training and confidence to provide mental health care. Both Secker et al. (1999) and Naji et al. (2004) described knowledge gaps resulting from limited exposure to mental health theory or clinical experience in undergraduate or general training programs, with nurses reporting feeling ill-equipped, and expressing 'professional unease' in dealing with depressed people. Similarly, Prince and Nelson, (2011) reported that 21% (n = 11) of participants had no confidence in providing mental health care. Those who had experienced mental health placements during their general nursing training tended to display lower levels of 'professional unease' in their attitudes to providing mental health care ( Naji et al., 2004 ). Hardy, (2014) identified that more than 82% (n = 331) of participants had not received training for aspects of mental health and wellbeing where they had responsibilities.
The situation was no different in PHC areas where there was more likelihood of mental health issues. Trimmer et al. (2019) confirmed that participating correctional nurses had variable confidence levels and expertise in providing general and targeted mental health care, and Secker et al. (2000) identified that school nurse participants perceived a lack of mental health knowledge and a need to gain skills in specific areas such as counselling.
Despite evidence of a lack of preparedness to provide mental health care, two papers found mental health as a training need was not prioritised, citing lack of motivation (Naji et al., 2004) and low levels of confidence when caring for mental health conditions such as dementia (Lee & Knight, 2006). Motivation and capacity to attend mental health training was addressed by Haddad et al. (2005), whose study of 217 district and community nurses found that 74% had not completed any mental health training in the preceding five years. This was despite 90% noting they would definitely or possibly attend a relevant course if available. Similarly, in Hardy's (2014) study, 98% (n = 382) of participants stated they would like to attend relevant mental health training.
Addressing knowledge gaps
The training needs identified by PHC nurses in the included studies largely aligned with the mental health conditions most commonly encountered. Several papers acknowledged a need for a broad range of training for PHC nurses, including basic awareness of mental health, mental health assessment, identification of 'red flags', and training in working safely and defusing difficult situations (Hardy & Huber, 2014; Prince & Nelson, 2011; Secker et al., 2000; Trimmer et al., 2019; Waidman et al., 2012). The management of depression, the most commonly reported condition seen by PHC nurses, was also rated highly as a training need (Naji et al., 2004; Prince & Nelson, 2011), in particular increasing knowledge around pharmacological treatments for depression (Hardy, 2014; Hardy & Huber, 2014; Lee & Knight, 2006; Naji et al., 2004). Suicide/suicide ideation (Hardy & Huber, 2014), schizophrenia and post-natal depression (Prince & Nelson, 2011) and anxiety management (Hardy, 2014; Lee & Knight, 2006) were also identified as knowledge gaps that needed to be addressed. Bereavement and bereavement counselling were a notable exception in most studies, with nurses indicating that their experiences of general practice nursing, in particular, had built greater skills in this area.
As well as training relating to specific mental health conditions, Hardy, (2014) identified that nurses working in general practice indicated gaps in their knowledge relating to 'signposting'. That is, referral services and access to these, as well as the need to develop skills in using mental health assessment tools and basic counselling skills. Skills in cultural awareness, behaviour modification and focused training on communication and listening were also perceived as training needs ( Lee & Knight, 2006 ;Secker et al., 1999 ).
Those studies which explored the methods for addressing learning needs and the organisation of training found that structured face-to-face learning was the preferred method of delivering mental health training (Hardy & Huber, 2014; Secker et al., 1999), with small group learning, multidisciplinary learning (Trimmer et al., 2019), and a combination of e-learning and face-to-face learning also scoring highly (Hardy & Huber, 2014; Secker et al., 1999). Hardy (2014) suggested informal small group learning, exploration of case studies, professional reading, and interactive learning would be of benefit in addressing mental health learning needs. However, Secker et al. (1999) indicated that innovative methods of delivery had disengaged some nurses, who found e-learning was overused, and that these approaches were unhelpful unless accompanied by group discussion.
Each program identified positive outcomes, ranging from increased confidence through to improved knowledge and understanding of mental health issues ( Maconick et al., 2018 ;Trimmer et al., 2019 ), evidence of changes to practice following attendance ( Crompton & Hardy, 2018 ;Trimmer et al., 2019 ), and a re-focus on patient-centred care ( Trimmer et al., 2019 ). Other positive outcomes included improvements in the quality of referrals for specialist mental health care ( Maconick et al., 2018 ), increased engagement with specialist mental health teams ( Hardy & Huber, 2014 ) and the fostering of innovation in the PHC setting ( Maconick et al., 2018 ). None of the programs measured a change in patient care or health outcomes.
Facilitators and barriers
Most included papers discussed the benefits of education programs in developing connections with health professionals experienced in providing mental health care. For some, the establishment of referral processes and contacts was greatly valued ( Crompton & Hardy, 2018 ;Lee & Knight, 2006 ;McKinlay et al., 2011 ;Waidman et al., 2012 ), and the value of interprofessional models of training to establish ongoing links were also identified ( Haddad et al., 2005 ;McKinlay et al., 2011 ;Secker et al., 1999 ). Secker et al. (1999) also drew attention to the importance of ongoing group discussions as sources of learning and support, suggesting that regular team meetings allow time for team learning. Participants noted that where meetings were hierarchical or used purely for workload allocation, opportunities to share information and expertise were lost ( Secker et al., 1999 ).
Hardy (2014) discussed the need for GPNs to be given opportunities for supervised mental health practice, ongoing updates and study time. Crompton and Hardy (2018) further emphasised the importance of ongoing learning to maintain skills and to increase mental health knowledge. This was supported by Maconick et al. (2018), who noted that there is little evidence that short intensive learning programs or one-off training in mental health care translates into changes in clinical practice. Integration of mental health care requires changes in clinician attitude, perception and behaviour (Maconick et al., 2018). In their long-term education program, Trimmer et al. (2019) described how contextually based learning was particularly beneficial in providing ongoing peer support within the clinical environment. However, caution was raised by Crompton and Hardy (2018), who drew attention to the technical and managerial difficulties in maintaining support, citing the need for dedicated input to maintain templates, website information and other materials.
Discussion
This integrative review extends and refines the body of knowledge about preparing PHC nurses for the provision of mental health care. While the papers in this review represent a global focus, only four countries had generated research that met the inclusion criteria. This is despite an international escalation of mental health-related encounters in PHC ( Berger & Reupert, 2020 ). Evidence arising from this review highlights the need to prepare PHC nurses to provide mental health care by addressing knowledge gaps and through developing contextual training programs which meet local needs.
Findings support the World Health Organization and United Nations Children's Fund (2018) 21st-century vision for PHC, which identifies the critical role played by PHC nurses in providing mental health care. The impact of COVID-19 on mental health presentations (Dragovic, Pascu, Hall, Ingram, & Waters, 2020) has further demonstrated the importance of having a well-trained PHC nursing workforce with competence and confidence in mental health-related skills and knowledge (World Health Organization, 2007). Being able to recognise mental health issues early and either provide intervention or referral to appropriate therapy is a key step to optimising mental health outcomes (Halcomb et al., 2019).
In recognition of the increasingly prominent role of PHC nurses in mental health, and to support and guide practice in this setting, the Australian College of Mental Health Nurses (ACMHN) published mental health practice standards for nurses in Australian general practice ( Australian College of Mental Health Nurses, 2018b ). These Standards are believed to be world-first for GPNs, and highlight the importance of PHC nurses developing capacity around mental health and mental illness. They also serve to define and articulate the role of the generalist nurse in mental health care, identifying their scope of practice and role in the health system. To address mental health knowledge gaps, an eLearning program was developed to support PHC nurses to increase their understanding, skills and confidence in mental health ( Australian College of Mental Health Nurses, 2018a ). Potentially, this program could provide a model to introduce in countries with a comparable situation to Australia concerning primary health care nursing and mental health.
This review also highlights the ongoing impact of limiting exposure to mental health care during undergraduate nursing education ( Schwartz, 2019 ). In some countries such as Australia, the move to university education has seen mental health content severely diminished even though undergraduate nursing education comprises a 3-year program, aimed to prepare new graduates for beginner-level practice in diverse settings, including mental health ( Happell, 2009 ). This diminution of mental health theory and practice leads to fewer graduating students wanting to work in the area of mental health ( Cregan et al., 2016 ;Henderson, Happell, & Martin, 2007 ) and negatively impacts the skills and confidence of graduates working in all clinical settings to effectively care for people living with mental illness. A means to address this knowledge gap could be an appropriate and supportive work-integrated learning experience, such as Recovery Camp. This immersive experiential undergraduate clinical placement has demonstrated its capacity to increase nursing students' clinical confidence and competence in mental health nursing ( Patterson et al., 2018 ). Nursing accreditation bodies need to consider these issues and ensure that undergraduate curricula contain mandatory mental health theory and practical components to ensure that graduates are prepared to provide safe and effective nursing care for people living with mental illness.
The issues around ongoing professional development for PHC nurses that were identified in this review have been previously reported, and are not confined to mental health education. The nature of employment in PHC settings that are small businesses or non-government organisations creates challenges in terms of access to paid leave and funding for ongoing professional development (Halcomb, Ashley, James, & Smythe, 2018; James, Halcomb, McInnes, & Desborough, 2021). Additionally, there is often limited reward in terms of career progression or additional remuneration for PHC nurses who engage in professional development or develop knowledge and skills in a particular area of practice (Halcomb et al., 2018). To promote the sustainability of any educational programs for PHC nurses, both employers and nurses must appreciate the value of the program to their practice and patient outcomes.
Limitations
Despite conducting a comprehensive systematic search of the international literature, only 13 studies from 4 countries were found to be appropriate for inclusion. It is feasible that studies relevant to this review have been published in non-English speaking journals or that there may be publication bias since this area of research is not well covered by clinical trial registries. The low number of studies in our review is also indicative of the low priority often placed on mental health education. While our review did not originally seek to explore specific mental health education programs for PHC nurses, they have provided clarity and highlight improved confidence and competence in providing primary mental health care.
Conclusion
It has long been established that integrating mental health services into primary care is a viable method of ensuring that community mental health needs are met. As COVID-19 restrictions ease, and the burden of mental illness increases within our communities, it will be essential to ensure that PHC nurses are adequately prepared with education programs that meet their mental health learning needs. Such programs need to address issues of preparedness, knowledge gaps and support as identified in this study. It is only by having appropriately skilled, competent and confident PHC nurses that we will optimise mental health outcomes within our communities.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Ethical statement
As a review paper, an Ethical Statement is not applicable.
Transcriptome Phase Distribution Analysis Reveals Diurnal Regulated Biological Processes and Key Pathways in Rice Flag Leaves and Seedling Leaves
Plant diurnal oscillation is a variation that recurs with a 24-hour period. The correlation between diurnal genes and biological pathways has been widely revealed by microarray analysis in different species. Rice (Oryza sativa) is the major food staple for about half of the world's population. The rice flag leaf is essential in providing photosynthates for grain filling. However, there is still no comprehensive view of the diurnal transcriptome of rice leaves. In this study, we applied a rice microarray to monitor rhythmically expressed genes in rice seedling and flag leaves. We developed a new computational analysis approach and identified 6,266 (10.96%) diurnal probe sets in seedling leaves and 13,773 (24.08%) diurnal probe sets in flag leaves. About 65% of all transcription factors were identified as flag leaf preferred. In seedling leaves, the peak of the phase distribution was from 2:00 am to 4:00 am, whereas in flag leaves, the peak was from 8:00 pm to 2:00 am. The diurnal phase distribution analysis of gene ontology (GO) and cis-element enrichment indicated that some important processes, such as photosynthesis and response to abiotic stimulus, were awakened by light, while genes related to nucleus- and ribosome-associated processes were most active around the switch from light to dark. The starch and sucrose metabolism pathway genes also showed diurnal phases. We conducted a comparison analysis between the Arabidopsis and rice leaf transcriptomes throughout the diurnal cycle. In summary, our analysis approach is feasible for relatively unbiased identification of diurnal transcripts and efficiently detects special periodic patterns, including non-sinusoidal ones. Compared to rice flag leaves, fewer genes in seedling leaves showed diurnal rhythms in transcription. Our comprehensive microarray analysis of seedling and flag leaves of rice provides an overview of the rice diurnal transcriptome and indicates diurnally regulated biological processes and key functional pathways in rice.
Introduction
A diurnal cycle is defined as a pattern that recurs over a 24-hr period. Plant diurnal oscillation is universal for plants and coordinates many biological pathways related to extracellular or intracellular signals, adapting the plants to daily alternation and maintaining a balance between metabolic reactions during light and darkness, especially for fluctuations of the carbon balance [1,2,3,4,5,6,7,8,9,10]. During the light period, carbon fixation in leaves leads to sucrose synthesis through photosynthesis and the starch produced accumulates in the leaves. During darkness, the starch is degraded and so starch content decreases. Rice (Oryza sativa) is the major food staple for about half of the world's population and it is also a model monocot for studies of crop plants with relatively smaller genomes, due to the completion of its genome sequence. The rice genes related to starch synthesis are essential to improving grain quality (such as eating and cooking quality). In rice, flag leaves play an important role in providing photosynthates to the filling grain.
The diurnal cycle also coordinates the opening and closing of stomata, affecting transpiration and changing the water potential in leaves. The opening of stomata facilitates carbon dioxide uptake for photosynthesis during the light [11,12,13]. The light regulation is essential in the metabolic and physiological functions of plants during the diurnal periods. Light is also crucial to entrain the endogenous circadian clock, ensuring the precise cyclic expression of circadian-regulated genes during the day. The significant correlation between diurnally oscillating genes during the diurnal cycle and growth hormone-responsive genes was revealed mostly through microarray-based transcriptome analysis [14,15]. The plant growth-hormone pathways tightly interact with light signalling and the diurnal cycle in the control of plant growth.
Microarray experiments are normally used to collect large-scale time-series data, monitor genome-wide gene expression, profile the changes in transcripts, and identify novel genes regulated in the diurnal cycling and circadian clock during a 24-hr period [16]. The first version of an Affymetrix Arabidopsis microarray (containing 11,521 Arabidopsis ESTs) was applied to study the circadian clock-regulated key pathways in Arabidopsis by Harmer in 2000 [17]. In 2001, Ellen Wisman's group used another microarray platform and found that about 11% of the Arabidopsis genes showed a diurnal expression pattern and about 2% a circadian rhythm [18]. Then the ATH1 Arabidopsis whole-genome array was also used to analyze the transcriptome throughout the diurnal cycle for clues concerning the diurnal and circadian-regulated starch metabolism in Arabidopsis leaves [1]. The transcriptome analysis for the diurnal changes of the starch metabolism-related genes indicated that there was transcriptional and posttranscriptional regulation of starch metabolism in Arabidopsis leaves. Recently, large microarray datasets related to the plant diurnal cycle (e.g. diurnal and circadian microarray data for Arabidopsis [1,15,18,19], maize [20,21,22], barley [23] and soybean [24]) were published and some are available in public databases. In addition, the database of the DIURNAL project (http://diurnal.cgrb.oregonstate.edu/) provides web-based data-mining tools that are user-friendly for searching the diurnal and circadian microarray data of Arabidopsis, rice and poplar [19,25].
During statistical analysis of diurnal microarray data, there are many challenges to correctly identify the subset of genes with a clear diurnal signature. A variety of methods have been developed to extract the cycling-expressed genes from microarray data for diurnal, circadian rhythm or cell cycle research. The computing methods mainly fall into two categories: frequency-domain and time-domain analyses [26].
The frequency-domain method is generally based on classical spectral analysis (e.g. Fourier transform and periodogram). After applying these methods, microarray expression profiles are transformed into the frequency domain, and rhythmic genes yield spectra with a well-defined peak. For diurnal or circadian-related genes, the periodogram will have a peak near a 24-hr period. Several prior studies have adapted these spectral methods to analyze biological data, e.g. FFT-NLLS [27], the average periodogram [28] and the Lomb-Scargle periodogram [29].
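Purely as an illustration of the frequency-domain idea (this is a minimal sketch, not the pipeline used in any of the studies cited above), the snippet below estimates the dominant period of a short expression time-series with a Lomb-Scargle periodogram; the sampling times, period grid and noise level are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import lombscargle

def dominant_period(times_hr, expression, min_period=20.0, max_period=28.0, n_grid=200):
    """Return the period (hours) with maximal Lomb-Scargle power in a candidate window."""
    y = np.asarray(expression, dtype=float)
    y = y - y.mean()                              # centre the profile before spectral analysis
    periods = np.linspace(min_period, max_period, n_grid)
    ang_freqs = 2.0 * np.pi / periods             # lombscargle expects angular frequencies
    power = lombscargle(np.asarray(times_hr, dtype=float), y, ang_freqs)
    best = int(np.argmax(power))
    return periods[best], power[best]

# A hypothetical transcript sampled every 4 hr from 8 hr to 44 hr
t = np.arange(8.0, 48.0, 4.0)
y = 2.0 * np.cos(2.0 * np.pi * (t - 10.0) / 24.0) + np.random.normal(0.0, 0.3, t.size)
period, power = dominant_period(t, y)
print(f"dominant period ~ {period:.1f} hr")       # a diurnal transcript peaks near 24 hr
```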
The time-domain method is an alternative method to analyze a time series using a pattern matching technique. To detect periodic patterns, the theoretical model is usually sinusoid. Two different methods are commonly used to measure the similarity between the models and the real data: nonlinear least-square curve fitting and cross correlation (CC). These algorithms assign to each time-series the properties of the model to which it is most similar. There are several programs available to perform the computation, including Cosinor [30], CORRCOS [17] and COSOPT [31]. Recently, we developed a new algorithm called ARSER to analyze diurnal or circadian expression data by combining the time-domain and frequency-domain analyses [32]. Testing by synthetic and real experimental data showed that it efficiently identified periodicity in short time-series.
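The time-domain (pattern-matching) idea can be sketched in a few lines: correlate each profile with cosine templates of a fixed 24-hr period over a grid of candidate phases and keep the best match. This is a simplified stand-in for methods such as CORRCOS or COSOPT, not their actual implementations; the 60-phase grid mirrors the description given later in the Methods.

```python
import numpy as np

def best_cosine_match(times_hr, expression, period=24.0, n_phases=60):
    """Best Pearson correlation between an expression profile and 24-hr cosine
    templates over a grid of candidate phases; returns (correlation, phase_hr)."""
    t = np.asarray(times_hr, dtype=float)
    y = np.asarray(expression, dtype=float)
    best_r, best_phase = -np.inf, None
    for phase in np.linspace(0.0, period, n_phases, endpoint=False):
        template = np.cos(2.0 * np.pi * (t - phase) / period)
        r = np.corrcoef(y, template)[0, 1]
        if r > best_r:
            best_r, best_phase = r, phase
    return best_r, best_phase
```

The phase of the best-fitting template then serves as the estimated phase of the transcript, and the correlation serves as the similarity score to be tested for significance.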
To date, little is known about the possible mechanisms by which the rice diurnal cycle is involved in metabolism, cellular function and growth. Global transcriptome analysis of the rice diurnal cycle is also limited. In our study, we used the GeneChip Rice Genome Array representing 51,279 transcripts to monitor the expression profiles of rice flag leaves and seedling leaves during diurnal cycling. We employed the ARSER and CC methods together to analyze our microarray datasets. We also compared the predicted diurnal genes between rice and Arabidopsis. To elucidate the biological processes involved in the rice day-night cycle, several approaches were used, such as gene ontology (GO) enrichment analysis, MapMan, and cis-element analysis for the genes with similar diurnal phases. The present study might give some interesting insight into the light-dark diurnal cycle in plants.
Experimental design and data quality
Rice flag leaf and seedling leaf samples were collected every 4 hr over a period of 36 hr with two biological replicates. The expression profile for each sample was obtained by Affymetrix Rice Genome GeneChip analysis. Detailed pair-wise scatter plots of biological replicates were generated for flag leaves (Figure 1A) and for seedling leaves (Figure S1A). For biological replicates of each time point, nearly all probe sets fell along the diagonal of the plots, indicating no major variation. The correlation coefficients and false-positive rates of each pair of biological replicate samples were calculated by GCOS (Table 1). All correlations were >0.95, while false-positive rates were <5%. In summary, the data quality was satisfactory for identifying genes with diurnal patterns.
In addition, we calculated the correlation coefficient of each probe set (Cr) across the time-series of two sets of biological replicates in flag leaves and seedling leaves ( Figure 1B and Figure S1B, respectively). As we had assumed, there were two forms among the expression patterns of each probe set along the detection time points: one was consistent and similar between biological replicates, with the Cr of replicate time-series tending to 1; the other was random, occasionally matched in biological replicates, and the average of Cr tended to 0. The number distribution of Cr (flag or seedling leaves) could be divided into two parts during analysis: the random part, close to a normal distribution, shown in purple in Figure 1B and Figure S1B; and the conserved part, shown in yellow, which represented the probe sets with consistent expression pattern.
Meanwhile, we performed real-time RT-PCR to validate the microarray results of flag leaves (Figure S2A) and seedling leaves (Figure S2B) at seven time points from 12 hr to 36 hr. Several genes were selected, including light-harvesting chlorophyll a/b binding protein (LOC_Os03g39610) and starch and sucrose metabolism pathway-related proteins such as glycogen/starch synthase (LOC_Os07g22930), 4-alpha-glucanotransferase (LOC_Os07g46790), and glucan water dikinase (LOC_Os06g30310). The real-time RT-PCR results mostly matched the microarray expression patterns.
Algorithms selected for diurnal pattern identification
To identify the genes with periodic expression patterns, we assumed that the expression profile of a gene exhibiting a diurnal pattern approximated a cosine wave with a period of nearly 24 hr. Several methods or algorithms are available for diurnal pattern identification, and two were selected: ARSER and CC. ARSER is a newly developed algorithm to analyze diurnal or circadian expression data by combining the time-domain and frequency-domain analyses, and was shown to be efficient in identifying periodicity in short time-series [32]. CC calculates the Pearson's correlation between a rhythmically-expressed gene and a theoretical cosine wave with a defined 24-hr period, and is a typical pattern matching technique [17]. Figure 2 shows some examples of probe sets identified with a diurnal pattern. Figure 2A presents the Os.7890.1.S1_x_at (CAB gene) expression values from 8 to 44 hr at 4-hr intervals, with its predicted cosine-curve model using the ARSER (AR) and CC methods. The calculation showed that the two methods almost agreed. The ARSER method gave a better fit than CC for the probe set Os.15803.1.S1_at (Figure 2B). The probe set Os.22928.2.S1_at showed a typical spike diurnal pattern (Figure 2C), which was only recognized by the ARSER method.
The rice diurnal pattern genes in flag and seedling leaves
The cosine-curve model of each probe set was calculated by the ARSER and CC algorithms for every replicate sample series of flag and seedling leaves. To obtain more reliable diurnal pattern genes, we considered the p- or q-value of the model curve and the Cr between biological replicate sample series for each probe set. If the p- or q-values were all <0.05 in biological replicates and the Cr was >0.5, then we used the combined results of the two algorithms. With these combinative criteria (Figure 3), 6,266 probe sets (10.96%) were identified with diurnal patterns in seedling leaves and 13,773 (24.08%) in flag leaves. Detailed information for these probe sets, including raw intensity, cosine-curve model parameters, and gene annotation, is shown in File S1. Within these diurnal pattern probe sets, 4,394 probe sets were identified with diurnal patterns in both seedling and flag leaves, and 9,379 showed diurnal patterns that were preferred in flag leaves (Figure 3).
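A minimal sketch of the filtering logic described above — significance in both biological replicates plus a replicate correlation above 0.5 — might look as follows; the column names are hypothetical, and treating the two algorithms' calls as a union is an assumption about how the "combined results" were formed.

```python
import pandas as pd

def call_diurnal(df: pd.DataFrame, alpha: float = 0.05, min_cr: float = 0.5) -> pd.Series:
    """Boolean diurnal call per probe set.

    Expected (hypothetical) columns: ARSER q-values and CC p-values for each
    biological replicate, plus 'cr', the correlation between replicate time series.
    """
    arser_ok = (df["arser_q_rep1"] < alpha) & (df["arser_q_rep2"] < alpha)
    cc_ok = (df["cc_p_rep1"] < alpha) & (df["cc_p_rep2"] < alpha)
    return (arser_ok | cc_ok) & (df["cr"] > min_cr)

# df["diurnal"] = call_diurnal(df)
```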
The phase distribution of rice diurnal pattern genes in seedling and flag leaves
To further analyze the rice diurnal-pattern genes in seedling and flag leaves, all these genes were grouped based on the phase of their cosine-curve model. The distributions of these probe sets in seedling and flag leaves are shown in Figure 4A and 4B, respectively. The distributions in the two biological replicates were almost identical. For seedling leaves, the phases peaked between 2:00 am and 4:00 am, whereas in flag leaves the phases peaked between 8:00 pm and 2:00 am.
To elucidate the biological processes in which the rice diurnal-pattern genes are involved, we employed gene ontology (GO) enrichment analysis [33] on the probe sets within each phase period (Figure 5). In the enriched GO term distribution along the diurnal pattern phase, there were several interesting findings. Some important processes were induced by the light, such as photosynthesis, response to abiotic stimulus, transporters, and secondary metabolic processes. Some other biological processes related to the nucleus and ribosomes were active during the night: transcriptional regulation-related processes such as RNA processing, RNA splicing and DNA repair were enriched in the late afternoon and evening; circadian rhythm was enriched in the late afternoon; and small GTPase-mediated signal transduction was highlighted in the early morning. Some GO terms specifically enriched in flag leaves are also shown in Figure 5, such as fatty acid biosynthesis processes in the early evening, phosphorylation at midnight, and post-translational protein modification from night to early dawn.
Rice transcription factor gene families with diurnal patterns in flag and seedling leaves
There were large numbers of transcription factor genes with diurnal patterns in both seedling and flag leaves. Several transcription factor gene families, such as EIL, ZIM and ZF-HD, showed a flag-leaf-preferred diurnal pattern. About 65% of the overall transcription factors were flag-leaf preferred (Table 3). There were 20 families with a higher proportion of flag-leaf-preferred diurnal pattern: five were >90%, eight were within 80% to <90%, another four within 70% to <80%, and three within 65% to <70%.
Diurnal patterns of starch and sucrose metabolism related genes in seedling and flag leaves
The rice genes related to starch synthesis are essential to improving grain quality (such as eating and cooking quality). We searched for the possible rice orthologs of the Arabidopsis starch-related genes, and made a rice gene-expression profile for the diurnal changes of sucrose metabolism-related enzymes in seedling and flag leaves, which were further displayed by MapMan (Version 3.0). We employed BLAST to map the probe sets to the BINs of MapMan, giving detailed mapping information for every probe set with a diurnal pattern (File S2). The phase distributions of the diurnal pattern probe sets related to starch and sucrose biosynthesis and degradation pathways in seedling and flag leaves are shown in Figure 6A and 6B, respectively. There were similarities and diversity of diurnal patterns between rice flag leaves and seedling leaves. The enzymes involved in the starch synthesis pathway mainly showed a daytime diurnal phase both in seedling and flag leaves. Overall, more probe sets with diurnal patterns occurred in flag leaves compared to seedling leaves: e.g. in the starch synthesis pathway, the majority of related genes showed a diurnal pattern during day time, both in seedling and flag leaves, but the genes encoding ADP-glucose pyrophosphorylases (AGPases) and starch-branching enzymes had significant diurnal patterns in flag leaves. Furthermore, the pathways derived from Tian et al. [34] showed diurnal patterns of individual enzymes in the starch synthesis pathway (Figure S3).
Comparative analysis of rice and Arabidopsis diurnal pattern genes
There were similarities and differences between the Arabidopsis and rice transcriptome analyses throughout the diurnal cycle. Recently, a large body of Arabidopsis expression profiling data related to diurnal or circadian rhythms was made publicly available. We selected the raw data (NCBI's Gene Expression Omnibus (GEO) accession number: GSE3416) from the study by Blasing [1] on diurnal gene expression of 5-6-week-old rosette leaves in Arabidopsis thaliana Col-0, and recalculated the diurnal pattern genes using the same approach, i.e. combining ARSER and CC. The detailed diurnal pattern Arabidopsis genes of GSE3416 are also listed in File S1. Thus, it is possible to compare the Arabidopsis diurnal pattern genes with those we identified in rice seedling leaves and flag leaves. Using the regular BLAST method, close homologs between rice and Arabidopsis genes were mapped. Most diurnal-pattern genes in seedling and flag leaves had an Arabidopsis homolog (Table 4), and about half of these had a diurnal expression pattern. However, there were a large number of genes preferentially expressed with a diurnal pattern in rice flag leaves and seedling leaves that were not found in Arabidopsis leaves.
Some phase specific diurnal cis-elements were compared between rice and Arabidopsis. The morning element, G-box, Evening Element, TBX and Element II of PCNA-2 were conserved and enriched in similar phases across rice and Arabidopsis. We also compared the GO term distribution between Arabidopsis and rice diurnal-pattern genes ( Figure 7). Several processes such as photosynthesis and indole-derivative metabolic processes were similar between rice and Arabidopsis; however, many (e.g. rhythmic processes and RNA processing) were in different phases between the species.
Discussion
Identification of diurnal pattern genes in rice seedling and flag leaves
A variety of methods have been developed to extract the cycling-expressed genes from microarray data for circadian rhythm or diurnal research. The computational methods mainly fall into two categories: traditional time-series spectral analysis and cosine-based pattern matching methods. The advantages of the traditional spectral methods are that they have been widely used to analyze time-series and many programs are available for their calculation; however, their limitation is that they only perform well for long time-series. For microarray data, which usually have only a few points (due to high costs), the traditional spectral methods do not efficiently detect the exact diurnal or circadian rhythm period [35]. The cosine-based pattern matching methods are mathematically convenient with a reasonably good description of well-defined properties; however, they may not efficiently identify non-sinusoidal periodic patterns. Thus it is essential to use a suitable approach to identify the rhythmically-expressed genes during our analysis of diurnal microarray data.
The ARSER method [32] combines the time-domain and frequency-domain analyses and was shown to be efficient in identifying periodicity in short time-series. Compared with other cycling algorithms, ARSER can handle noise in the expression data and identify periodic patterns from limited sample sizes and low numbers of replications of short time-series. In particular, unlike cosine-curve-based algorithms, ARSER can identify both non-sinusoidal and sinusoidal patterns. For example, two transcripts with a sinusoidal expression pattern were identified as periodic by both the ARSER and CC methods (Figure 2A and 2B), while one transcript with a non-sinusoidal (spike) expression pattern was identified as periodic only by ARSER (Figure 2C). ARSER determines the statistically significant periodic transcripts by FDR q-values, which are calculated based on the distribution of p-values. The period range used for computation will also affect the selection of gene sets during the analysis of microarray data. By setting appropriate values for these parameters, we could obtain more significant results using ARSER.
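As a rough illustration of the harmonic-regression idea used by ARSER (this is not ARSER itself — the autoregressive spectral step and the FDR machinery are omitted), one can detrend a profile and fit a single 24-hr harmonic by least squares:

```python
import numpy as np

def fit_harmonic(times_hr, expression, period=24.0):
    """Detrend a profile and fit y ~ a*cos(wt) + b*sin(wt) + c by least squares;
    returns (amplitude, phase_hr, r_squared) of the fitted 24-hr harmonic."""
    t = np.asarray(times_hr, float)
    y = np.asarray(expression, float)
    y = y - np.polyval(np.polyfit(t, y, 1), t)         # remove any linear trend
    w = 2.0 * np.pi / period
    X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    a, b, c = coef
    amplitude = np.hypot(a, b)
    phase_hr = (np.arctan2(b, a) * period / (2.0 * np.pi)) % period
    resid = y - X @ coef
    r2 = 1.0 - resid.var() / y.var()
    return amplitude, phase_hr, r2
```

A transcript with a strong 24-hr component yields a large amplitude and high r-squared, whereas a flat or arrhythmic profile does not.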
The combined use of ARSER and CC methods showed some special periodic patterns that were non-sinusoidal periodic patterns, such as probe set Os.22928.2.S1_at ( Figure 2C) with a periodic expression value and a spike-shaped pattern according to HAYSTACK (http://haystack.cgrb.oregonstate.edu/). This approach appears to be efficient in analyzing short time-series compared with the spectral method and has a simpler computational procedure than for nonlinear curve fitting.
Combining the mathematical model and the reproducibility in biological replicates, we globally defined 6,266 (10.96%) diurnal probe sets in seedling leaves and 13,773 (24.08%) in flag leaves. Transcriptome analysis showed that gene transcription levels in flag leaves largely followed a diurnal rhythm. Recently, a custom high-density 105K Agilent maize microarray was used to investigate the diurnal expression patterns between the leaf and developing ears; about 22.7% (10,037) of expressed transcripts exhibited a diurnal cycling pattern in leaves, but only 0.39% (47) in developing ears [22]. These results indicated that diurnal rhythms are related to developmental stages and tissue specificity, revealing a 'third dimension' of diurnal rhythm regulation.
Interactions between diurnal patterns and plant hormone signalling regulation in rice leaves
Light is a key environmental cue, and interactions among light, plant hormones and the circadian clock appear to control the diurnal patterns of plant growth. We analyzed the rice transcription factor families with flag-leaf-preferred diurnal patterns, and the results suggested that ethylene may also affect the diurnal pattern in rice flag leaves (Table 3). For example, four ethylene-insensitive3-like (EIL) genes (LOC_Os07g48630, LOC_Os03g20780, LOC_Os03g20790 and LOC_Os09g31400) were identified as diurnal only in flag leaves but not seedling leaves. EIL may be involved in the ethylene signal-transduction pathway. Additionally, we also found that TIFY family proteins (recently discovered to play a critical role in repression of jasmonate signalling [36,37,38,39,40]) might also affect the rice diurnal pattern. Among 20 rice TIFY genes, only OsTIFY11a (OsJAZ9, LOC_Os03g08310) showed a diurnal pattern in seedling leaves, while another five showed a diurnal pattern in flag leaves, including OsTIFY3 (OsJAZ1, LOC_Os04g55920), OsTIFY6a (OsJAZ3, LOC_Os08g33160), OsTIFY6b (OsJAZ4, LOC_Os09g23660), OsTIFY10b (OsJAZ7, LOC_Os07g42370) and OsTIFY10c (OsJAZ8, LOC_Os09g26780).
Rice diurnal microarray analysis showed that some abscisic acid (ABA)-dependent transcription factor genes (e.g. ABI3VP1 and OsNAC5) had a significant diurnal pattern in rice flag leaves (Table 3). Common motifs such as CACGTG (G-box/ABRE), ACGTG (ABRELATERD1) and CACG (NAC core motif) were identified in the promoter regions (2 kb 5′-upstream from the ATG) of the genes with similar diurnal pattern peaks in the day time (from phase 8 to phase 18). In Arabidopsis, a large number of ABA-responsive and/or methyl jasmonate (MeJA)-responsive genes were identified that oscillate diurnally and robustly during the light-dark cycle [14,15]. Plant stomatal movements are rhythmic and ABA can regulate the diurnal oscillator period [41,42]. From the motif analysis (Table 2), we also found that the CGCG box (calmodulin-binding) was enriched in the promoter regions of the diurnal genes with expression peaks from morning to noon (phases 6, 8, 10, 12 and 14). This may be related to the possible dynamic change of the concentration of internal calcium, which oscillates diurnally, peaking during the day and dropping at night.
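To make the promoter-scanning step concrete, a minimal sketch counting occurrences of a motif such as CACGTG in 2-kb upstream sequences might look as follows; the input format is a hypothetical dictionary of sequences, and no enrichment statistics or actual tool used in the study are reproduced here.

```python
from collections import Counter

def count_motif(promoters, motif="CACGTG"):
    """Count occurrences of a short DNA motif in each promoter sequence.

    promoters: dict mapping gene ID -> 2-kb upstream sequence (5'->3')
    Returns a Counter of gene ID -> number of (possibly overlapping) motif hits.
    """
    hits = Counter()
    m = len(motif)
    for gene_id, seq in promoters.items():
        s = seq.upper()
        hits[gene_id] = sum(1 for i in range(len(s) - m + 1) if s[i:i + m] == motif)
    return hits

# Genes with >= 1 hit could then be tested for enrichment within each diurnal phase bin.
```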
Auxin is a key regulator of plant growth and development, and auxin signal transduction can be regulated by the circadian clock in Arabidopsis [15,43]. The plant sensitivity to auxin was observed to vary according to the time of day. Through microarray analysis, several AUX/IAA and ARF genes showed significant diurnal patterns in rice flag and seedling leaves. Through promoter analysis for the rice diurnal genes, we found that common motif ATGTCA/TGTCA [SURECOREATSULTR11, the core of sulfur-responsive element (SURE) which contains auxin response factor (ARF) binding sequence] was significantly enriched in the promoter regions of the diurnal genes in flag leaves, with expression peaks from morning to noon (phases 8, 10 and 12), while in seedling leaves the expression peaks were in the evening (phase 22).
From the diurnal genes in flag and seedling leaves, some gibberellin-mediated signalling and metabolic-related genes were enriched in seedling leaves, including ent-kaurene synthase A (LOC_Os04g09900 and LOC_Os02g36210), ent-kaurene synthase B (LOC_Os02g36140, LOC_Os04g10060, LOC_Os11g28530 and LOC_Os12g30824) and ent-kaurene oxidase (LOC_Os06g37364). Among the seven gibberellin-related genes, two (LOC_Os12g30824 and LOC_Os06g37364) also peaked at midnight in flag leaves.
Starch metabolism in rice diurnal cycling
During the diurnal cycle, starch is stored in leaves in a pattern such that starch content increases during periods of light and decreases during darkness [44,45,46]. Starch is one of the most important compounds synthesized by plants, and higher starch levels in leaves can lead to increased biomass. The carbon contained in leaves can be converted to ethanol for use as a biofuel and an alternative energy source. The genes related to starch synthesis are also essential for improving grain quality (e.g. eating and cooking quality). Our microarray-based diurnal gene identification showed similarities and diversity of diurnal patterns for starch metabolism-related genes between rice flag and seedling leaves (Figure 6 and Figure S3). The enzymes involved in the starch synthesis pathway mainly showed a daytime diurnal phase, both in seedling and flag leaves, whereas more genes of the starch synthesis enzymes showed a diurnal pattern only in flag leaves (e.g. AGPase and SBE1). Our results showed no AGPases with a diurnal pattern in seedling leaves, but LOC_Os09g12660 (the small subunit of AGPase) had a diurnal phase at 4:00 pm in flag leaves. AGPase catalyzes the reaction generating the sugar nucleotide ADP-glucose and inorganic pyrophosphate from glucose 1-phosphate and ATP, which is the first step of starch biosynthesis. AGPase is considered a major enzyme controlling starch synthesis [47]. Starch branching enzyme (SBE) acts on glucose polymers by forming α-1,6-glucosidic bonds as branches on the α-1,4-linked glucose backbone, and is also a key enzyme in the starch biosynthesis pathway [48,49,50]. In sorghum endosperm, three SBE genes showed a diurnal rhythm in gene expression levels during a 24-h cycle [51]. Our results showed that four rice probe sets matched LOC_Os06g51084 (SBE1) with a diurnal pattern only in flag leaves. The starch metabolism genes with a diurnal pattern in flag leaves may play special roles during grain filling, beneficial to grain quantity and quality.
Possible ribosome and chromatin related transcriptional regulation during light-dark diurnal cycle
Enriched GO term analysis for the assigned diurnal genes (Figure 5) showed that the genes related to nuclear and ribosome-involved processes were active mostly during the period of light-dark change, which is similar to the GO phase distribution of diurnal transcripts in maize leaves [22]. This may be related to light-caused DNA mutations and DNA repair during the day and night. We also found that SNF2 family genes were significantly expressed with a diurnal pattern in flag leaves (Table 3). SNF2 is the catalytic subunit of the SWI/SNF chromatin remodelling complex, and SNF2-family genes play important roles in transcriptional regulation, maintenance of chromosome integrity and DNA repair [52,53,54,55]. The Arabidopsis SWI2/SNF2 chromatin remodelling gene family was reported to be involved in DNA damage response and recombination [53]. The diurnally-regulated SNF2 family genes may be related to changes in chromatin structure at the core of the diurnal oscillator, which may provide a clue concerning the regulation of diurnal progression by chromatin dynamics.
Plant materials
Seeds of rice (Oryza sativa subsp. japonica var. Nipponbare) were surface-sterilized in 5% (w/v) sodium hypochlorite for 20 min and then washed in distilled water three or four times, then germinated in water for 2 d at room temperature and 1 d at 37°C. The seedlings were transferred to water-saturated Whatman filter paper and grown in a greenhouse (28/25°C and 12/12 h of light/dark, and 83% relative humidity). After about 17 d, seedling leaves were harvested every 4 hours.
For flag leaf samples, the rice plants were grown in the field under natural conditions, within the May-October growing season, on an experimental farm in Zhejiang, China. Three-month-old rice plants were entrained into the greenhouse under the regular condition (32/30°C and 12/12 h of light/dark). Flag leaves were harvested every 4 hours.
RNA isolation and Affymetrix GeneChip experiments
All leaf samples were homogenized in liquid nitrogen before isolation of RNA. Total RNA was isolated using TRIzol® reagent (Invitrogen, CA, USA) and purified using Qiagen RNeasy columns (Qiagen, Hilden, Germany). For each sample, 8 μg of total RNA was used for making biotin-labelled cRNA targets; cDNA and cRNA synthesis; cRNA fragmentation, hybridization, washing and staining; and scanning, following the GeneChip Standard Protocol (Eukaryotic Target Preparation). In this experiment, a Poly-A RNA Control Kit and a One-Cycle cDNA Synthesis kit were applied. Affymetrix rice genome arrays were used for hybridizations.
Real-time RT-PCR
Reverse transcription was performed using the M-MLV kit (Invitrogen). We heated 10-μl samples containing 2 μg of total RNA and 20 pmol of random hexamers (Invitrogen) at 70°C for 10 minutes to denature the RNA, and then chilled the samples on ice for 2 min. We added reaction buffer and M-MLV enzyme to a total volume of 20 μl containing 500 μM dNTPs, 50 mM Tris-HCl (pH 8.3), 75 mM KCl, 3 mM MgCl2, 5 mM dithiothreitol, 200 units of M-MLV, and 20 pmol random hexamers. The samples were then heated at 37°C for 1 h. The cDNA samples were diluted to 8 ng/μl for real-time RT-PCR analysis.
For real-time RT-PCR, triplicate assays were performed on 1 μl of each cDNA dilution using the SYBR Green Master Mix (Applied Biosystems, PN 4309155) with an ABI 7900 sequence detection system according to the manufacturer's protocol (Applied Biosystems). The gene-specific primers were designed using PRIMER3 (http://frodo.wi.mit.edu/primer3/input.htm). The amplification of 18S rRNA was used as an internal control to normalize all data (forward primer, 5′-CGGCTACCACATCCAAGGAA-3′; reverse primer, 5′-TGTCACTACCTCCCCGTGTCA-3′). The primer sets of four selected genes were listed below: LOC_Os03g39610
Transcriptome data analysis and diurnal pattern identification
The signal intensity for each probe set on the GeneChip microarray was extracted by Affymetrix GCOS software and the TGT (target mean value) was scaled as 500 for each chip. Pair-wise scatter plots of replicate samples were generated by Partek Genomics Suite (Version 6.3). For each probe set, we calculated the correlation coefficient (Cr) of two sets of biological replicates across the time series.
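As a rough sketch of the replicate-correlation step (Cr), assuming two expression matrices with one row per probe set and one column per time point (the data layout is an assumption for illustration):

```python
import numpy as np

def replicate_correlations(rep1: np.ndarray, rep2: np.ndarray) -> np.ndarray:
    """Row-wise Pearson correlation (Cr) between two replicate time-series matrices
    of shape (n_probesets, n_timepoints); Cr near 1 indicates a reproducible pattern,
    Cr near 0 indicates noise."""
    a = rep1 - rep1.mean(axis=1, keepdims=True)
    b = rep2 - rep2.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
    return num / den
```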
Two methods were applied to identify diurnal pattern probe sets: ARSER and cross-correlation (CC). ARSER employs autoregressive spectral estimation to predict the periodicity of an expression pattern from the frequency spectrum and then models the rhythmic patterns using a harmonic regression model to fit the time-series [32]. There are four steps in the ARSER method: the detrending step performs a data pre-processing strategy that removes any linear trend from the time-series; autoregressive spectral analysis then calculates the power spectral density of the time-series; harmonic analysis further provides the estimates of the parameters that describe the rhythmic patterns; and finally, false-discovery rate q-values are calculated for multiple comparisons. The CC method was used to calculate the Pearson's correlation between a rhythmically-expressed gene and a theoretical cosine wave with a defined phase [17]. The brief calculation process follows. First, we used the cosine curve (Equation 1) with phases of 0-24 hr to prepare 60 test cosine curves of 24-hr periodicity; the time span was 36 hr long (one and a half cycles) and the interval between adjacent phases was 0.4 hr. Second, we calculated the correlation coefficient of the best-fitting cosine curve for each expression profile, and the phase of the best-fitting cosine curve was defined as the phase of the related probe set. Third, we used a random Monte Carlo simulation to determine the statistical significance (p-value): we randomly produced 100,000 expression profiles and calculated the maximum correlation coefficient for each of them. We then counted the number of times a random expression profile showed a correlation greater than a specified value and defined the p-value as this count divided by the number of all random expression profiles [56].
Gene annotation, Gene Ontology analysis, and pathway analysis
The consensus sequence of each probe set was compared by BLAST (Basic Local Alignment Search Tool) against the TIGR Rice Genome version 5 to map the probe set ID to the locus ID in the rice genome. The cut-off e-value was set as 1e-20. Of the 57,195 probe sets designed on the Affymetrix rice genome array, 52,697 probe sets were mapped to rice genes in the TIGR rice pseudomolecules.
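The Monte Carlo significance step for the CC method described above could be sketched as below; drawing random profiles from a standard normal distribution is an assumption (the text does not state how the random profiles were produced), and a vectorized implementation would be preferable for 100,000 iterations in practice.

```python
import numpy as np

def cc_monte_carlo_pvalue(observed_c, n_timepoints, templates, n_random=100_000, seed=0):
    """Fraction of random expression profiles whose best correlation with any cosine
    template (one row of `templates` per candidate phase) reaches or exceeds the
    observed best-fit correlation."""
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(n_random):
        profile = rng.standard_normal(n_timepoints)
        c_max = max(np.corrcoef(profile, tpl)[0, 1] for tpl in templates)
        if c_max >= observed_c:
            exceed += 1
    return exceed / n_random
```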
The GO category enrichment analysis was performed with the in-house agriGO analysis service [33], using the Singular Enrichment Analysis (SEA) and Cross-Comparison of SEA (SEACOMPARE) tools with default parameters for the Affymetrix rice genome array.
MapMan (http://gabi.rzpd.de/projects/MapMan) was used for key regulation group analysis. The starch biosynthesis pathway was adopted from that of Tian [34] and the corresponding MapMan pathways were created through the mapping files. | 2017-04-03T13:33:14.441Z | 2011-03-02T00:00:00.000 | {
"year": 2011,
"sha1": "a7ac7bbdb42a266fb17f5c09470b2548d48930f2",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0017613&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a7ac7bbdb42a266fb17f5c09470b2548d48930f2",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
8047317 | pes2o/s2orc | v3-fos-license | Quality Improvement in Neurology: Dementia Management Quality Measures (Executive Summary)
The interdisciplinary Dementia Measures Workgroup, composed of members from diverse national organizations, has defined optimal standards of dementia care for individual practitioners as well as multidisciplinary teams.
Because so many of us today are part of the "sandwich generation" (Miller, 1981)-concurrently caring for children and aging parents-an alarming statistic may have significance for us: Approximately every 4 seconds, an older adult will develop dementia. That number is estimated to nearly double over the next 20 years, to almost 65.7 million in 2030 and 115.4 million in 2050 (World Health Organization & Alzheimer's Disease International, 2012). Therefore, we can reasonably assume that we may be caring for a person with dementia at some point in our lives.
The pain and consuming sadness that family members face as they watch their loved one deteriorate both cognitively and physically can be overwhelming. Forgetfulness and loss of memory are often attributed to a normal aging process, but for people experiencing dementia, behavioral changes and functional limitations can progress quickly, eventually leading to the inability to perform even basic daily tasks and self-care. To provide both preventative and compensatory skilled intervention that will maximize function and preserve quality of life for people with dementia, occupational therapy should be an integral part of any health care dementia team. However, what constitutes appropriate and optimal dementia care has challenged medical and health care personnel for several years. Until recently, interventions have been described as "inconsistent, often suboptimal, and largely unplanned" (Odenheimer et al., 2013, p. 704).
In collaboration with Neurology and the Journal of the American Geriatrics Society, the American Journal of Occupational Therapy is proud to publish "Quality Improvement in Neurology: Dementia Management Quality Measures" in this issue. Although previous groups have tried to establish systematic standards of care for dementia, none thus far have been widely accepted or utilized. This article represents the efforts of a new interdisciplinary work group, the Dementia Measures Work Group (DWG), composed of members from diverse national organizations such as the American Academy of Neurology (AAN), the American Geriatrics Society, the American Medical Directors Association, the American Psychiatric Association, and the American Medical Association-convened Physician Consortium for Performance Improvement®, who convened specifically to define optimal standards of dementia care for individual practitioners as well as multidisciplinary teams.
The article articulately describes the gaps in dementia care and the opportunities available for improvement. The DWG performed an exhaustive literature search of clinical practice guidelines and dementia reviews to identify 10 performance measures that should serve as the foundation for quality dementia intervention. The measures were developed after a thorough examination of randomized controlled trials and effectiveness studies. Occupational therapy can greatly contribute to the dementia team and to improving the health of people with dementia; the second and third quality measures listed are cognitive assessment and functional status assessment, skills that are integral to the occupational therapy process. The guidelines represent a systematic, comprehensive approach to dementia care and management and should be used as a standard of care when working with this patient population. Our focus on functional independence and meaningful activity will preserve our place on the dementia health care team for years to come. We thank the AAN for allowing us to publish such an important resource and encourage our readers to disseminate this work to other health care professionals and teams working with the dementia population.
Executive Summary
Professional and advocacy organizations have long urged that dementia be recognized and properly diagnosed (Ashford et al., 2006, 2007). With the passage of the National Alzheimer's Project Act (NAPA; Pub. L. 111-375) in 2011, an Advisory Council for Alzheimer's Research, Care and Services was convened to advise the U.S. Department of Health and Human Services. In May 2012, the Council produced the first National Plan to Address Alzheimer's Disease, and prominent in its recommendations was a call for quality measures suitable for evaluating and tracking dementia care in clinical settings (see U.S. Department of Health and Human Services, 2013). Although other efforts have been made to set dementia care quality standards, such as those pioneered by RAND in its series Assessing Care of Vulnerable Elders (ACOVE; Feil, MacLean, & Sultzer, 2007), implementation has not been widely embraced by practitioners, health care systems, or insurers.
In this Executive Summary (full report available at www.neurology.org and online at http://ajot.aotapress.org; navigate to this article and click on "Supplemental Materials"), we report on a new measurement set for dementia management developed by an interdisciplinary Dementia Measures Work Group (DWG) representing the major national organizations and advocacy organizations concerned with the care of patients with dementia. This effort was led by the American Academy of Neurology (AAN), the American Geriatrics Society (AGS), the American Medical Directors Association (AMDA), the American Psychiatric Association (APA), and the American Medical Association (AMA)-convened Physician Consortium for Performance Improvement® (PCPI®). Both the ACOVE measures and the measurement set described here apply to
patients whose dementia has already been identified and properly diagnosed. Though similar in concept to ACOVE, the DWG measurement set differs in several important ways: It includes all stages of dementia in a single measure set, calls for the use of functional staging in planning care, prompts the use of validated instruments in patient and caregiver assessment and intervention, highlights the relevance of using palliative care concepts to guide care prior to the advanced stages of illness, and provides evidence-based support for its recommendations and guidance on the selection of instruments useful in tracking patient-centered outcomes. In addition, the DWG measurement set specifies annual reassessment and updating of interventions and care plans for dementia-related problems that affect families and other caregivers as well as patients. Here, we first provide a brief synopsis of why major reforms in health care design and delivery are needed to achieve substantive improvements in the quality of care, and then list the final measures approved for publication, dissemination, and implementation.
Opportunities for Improvement
Health Care for Persons With Dementia Is Inconsistent, Often Suboptimal, and Largely Unplanned.
Peer-reviewed studies of dementia care document inconsistency in outpatient care (Chodosh et al., 2007;Reuben et al., 2010), high rates of potentially preventable episodes of acute care (Bynum et al., 2004;Phelan, Borson, Grothaus, Balch, & Larson, 2012), and increased numbers of locus of care transitions (Callahan et al., 2012). These findings suggest that much of health care for patients with dementia is reactive and unsystematic. Ambulatory care is driven largely by chronic conditions, for which prevention, early recognition, and timely treatment can be delayed in the setting of dementia, leading to exacerbations of other chronic conditions. Proactive outpatient care and care coordination could reduce avoidable emergency room visits and hospital admissions and potentially avert negative impacts on patients and caregivers that arise from preventable health crises.
Ethnic and Socioeconomic Disparities Are Important Influences on the Quality of Dementia Care.
Ethnic and socioeconomic disparities influence the rate and quality of dementia diagnoses, the stage of decline at which diagnosis occurs, the use of antidementia medications, the quality and type of end-of-life care, and the use of communitybased supportive services (Cooper, Tandy, Balamurali, & Livingston, 2010). While beliefs about dementia's origins and significance may contribute to some of these health care disparities, many quality issues affect minority and mainstream populations alike: a lack of knowledge of what constitutes good dementia care, inadequate resources, insufficient insurance coverage, low access to knowledgeable professionals, and institutional barriers. All contribute to the need for improvements in health care design.
Partnership With Caregivers Is Integral to Improving Care.
Several different models of integrated care for dementia have been described and have been shown to improve utilization of community-based services, reduce the use of central nervous system-active medications that may worsen cognition, increase family caregivers' competence and reduce their stress, and enhance the capacity of practice environments to provide dementia-specific care (Borson, Scanlan, Watanabe, Tu, & Lessig, 2006; Boustani, Sachs, & Callahan, 2007; Callahan et al., 2011, 2012; Mittelman, Haley, Clay, & Roth, 2006; Reuben et al., 2010; Vickrey et al., 2009). Focus is increasingly turning toward nonpharmacological modes of management for mood and behavioral problems due to the newly questioned value of antidepressant medications for depression in dementia (Banerjee & Wittenberg, 2009; Gitlin, Kales, & Lyketsos, 2012; Nelson & Devanand, 2011), the modest efficacy of antipsychotic medications for behavioral problems (American Geriatrics Society 2012 Beers Criteria Update Expert Panel, 2012) and the increased risks of cardiovascular events and mortality associated with their use, the cognitive toxicity of anticholinergic medications (Vigen et al., 2011), and recognition of the risks of falls and other adverse outcomes associated with use of benzodiazepines in the elderly (Fick & Resnick, 2012). Caregivers are essential partners in health care management as well as implementation of nonpharmacological interventions that complement health care; their knowledge, well-being, and sustained engagement with health care providers are critical to the success of both medical and psychosocial components of care. Effective interventions that support caregivers and complement medical care have been demonstrated (… & Teri, 1998; Mittelman et al., 2006; Teri et al., 2003). However, these interventions are not typically covered under Medicare and other insurance plans, and when such interventions are locally available and used by caregivers, their effects may not be apparent to medical providers, integrated into the overall patient care plan, or tracked as components of quality of care.
Comprehensive, Integrated Care and Quality Improvement Initiatives Must Be Explicit and Practical.
Despite the quality promise of comprehensive dementia management, provider productivity standards and current billing and reimbursement systems discourage its adoption and undermine its consistency. Although a great deal of dementia care is actually done through work with caregivers, the patient must be present in order for most physician services to be reimbursed under Medicare, regardless of whether the patient is able to participate actively in his or her own care. Moreover, there may be differential handling of "neurological" and "psychiatric" codes for the same dementing condition: The ICD-9 code 331.0 identifies Alzheimer's disease and is reimbursed as a medical code; ICD-9 code 294.1 denotes "senile dementia" and is a psychiatric code reimbursed by some plans under a mental health benefit for which coverage may be more limited. Measuring dementia care activities by providers and health systems will create a solid data resource for redesigning payment and coding structures so that they reflect the work providers need to, and actually, do to provide high quality of care for persons with dementia.
Dementia Management Quality Measures
In dementia care, desired outcomes include preserving, to the maximum possible extent, cognitive and functional abilities; reducing the frequency, severity, and adverse impact of neuropsychiatric and behavioral symptoms; sustaining the best achievable general health; reducing risks to health and safety; and enhancing caregiver wellbeing, skill, and comfort with managing the patients with dementia in partnership with health care providers. Clinical performance measures would ideally include patient-level outcomes as well as processes of care. However, the progressive nature of most dementing diseases, the heterogeneity of comorbid conditions and the medical and other management requirements, and the multiplicity of factors that influence outcomes in dementia make development of reliable patient-reported outcome measures impracticable. In their place, assessing the quality of dementia care must rely on measuring care processes that have been associated with positive outcomes in a rapidly evolving evidence base. The DWG measurement set consists of 10 separate, auditable quality measures. These measures are inclusive of the multiple stages of illness and can be viewed in five categories relevant to therapeutic decision making: (1) assessment of the person with dementia post diagnosis (Measures 1-4 and 6), (2) management of neuropsychiatric symptoms (Measure 5), (3) patient safety (Measures 7 and 8), (4) palliative care and end-of-life issues (Measure 9), and (5) caregiver issues (Measure 10). For most measures, care quality is indicated by the proportion of eligible patients whose documented care meets the identified goal. Situations in which the use of a particular quality measure may not be appropriate for a particular patient (e.g., counseling regarding risks of driving for a patient who does not drive) are specified with an exception to the measure. A brief summary of each measure is found in Table 1. For the full measure specifications, visit the PCPI Web site at www.physicianconsortium.org. Readers interested in examples of how to meet individual measures are referred to this Web site.
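As a schematic illustration only (the full numerator, denominator, and exception logic is defined per measure in the PCPI specifications referenced above, and the record structure below is hypothetical), the performance rate for a single measure reduces to a proportion over eligible patients after exceptions are removed:

```python
def measure_performance(patients, meets_goal, is_exception):
    """Proportion of eligible patients whose documented care meets the measure's goal.

    patients     : iterable of patient records (any structure)
    meets_goal   : callable returning True if the documented care meets the goal
    is_exception : callable returning True if a measure-specific exception applies
    """
    eligible = [p for p in patients if not is_exception(p)]
    if not eligible:
        return None                       # measure not reportable for this panel
    met = sum(1 for p in eligible if meets_goal(p))
    return met / len(eligible)
```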
Conclusion
The DWG measures have the potential to dramatically impact practice and improve the quality of care provided to patients with dementia. In fact, all of these measures, except Measure 9, were selected for the 2012 and 2013 Physician Quality Reporting System (PQRS) measures list (Centers for Medicare and Medicaid Services, 2013b). PQRS provides an incentive payment to eligible professionals who demonstrate provision of high-quality care for specified conditions and can accelerate adoption of dementia care quality standards across all types of practice organization and all clinical disciplines providing health care for affected patients. In addition, Measure 2, Cognitive Assessment, is included in the clinical quality measure list for Meaningful Use (MU) 2. MU is a Medicare and Medicaid Electronic Health Record (EHR) incentive program designed to offer financial incentives for the "meaningful use" of certified EHR technology to improve patient care (Centers for Medicare and Medicaid Services, 2013a).
The emphasis on dementia management in this measurement set recognizes the enormous challenge dementia presents to individual patients and their caregivers, health care providers, public health, and government and private insurers. While patients, caregivers, and health professionals await more effective disease-modifying treatments for patients with dementia, adherence to the measures outlined here will improve the quality of life for patients and caregivers with dementing illnesses.
Amy E. Sanders receives salary and research support from Einstein Clinical and Translational Science Awards Grants UL1 RR025750, KL2 RR025749, and TL1 RR025748 from the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH) and NIH Roadmap for Medical Research; has received loan repayment support from the Loan Repayment Program of the National Institute on Aging (NIA); has received pilot funds from the Resnick Gerontology Center; has reviewed for NIH and NIA, the Center for Medicare and Medicaid Innovation (CMMI), the Patient-Centered Outcomes Research Institute (PCORI), and the Alzheimer's Association; has received honoraria for serving on peer-review panels from CMMI and PCORI; and is a member of a federal advisory committee (Medicare Evidence Development and Coverage Advisory Committee). The contents of this manuscript are solely the responsibility of the authors and do not necessarily represent the official view of NCRR or NIH.
Rebecca J. Swain-Eng is a full-time employee of the American Academy of Neurology.
Samantha Tierney is a full-time employee of the American Medical Association.
John Absher serves on the board of directors of the Alzheimer's Association South Carolina chapter.
The content of this article is the sole responsibility of the authors and does not necessarily represent the views of the National Institute on Aging.
Table 1 (excerpt).
Measure 9 (palliative care and end-of-life issues): Percentage of patients, regardless of age, with a diagnosis of dementia, or their caregiver(s), who (1) received comprehensive counseling regarding ongoing palliation and symptom management and end-of-life decisions AND (2) have an advance care plan or surrogate decision maker in the medical record, or documentation in the medical record that the patient did not wish or was not able to name a surrogate decision maker or provide an advance care plan, within 2 years of initial diagnosis or assumption of care.
Measure 10 (Caregiver Education and Support): Percentage of patients, regardless of age, with a diagnosis of dementia whose caregiver(s) were provided with education on dementia disease management and health behavior changes AND were referred to additional resources for support within a 12-month period.
Note. Full specifications are available on the Physician Consortium for Performance Improvement Web site at www.physicianconsortium.org. Readers interested in examples of how to meet the measurement requirements are referred to this document. Readers are also referred to Appendix e-1 in the full article, online at www.neurology.org. Copyright © 2012 by the American Medical Association. Reprinted with permission. | 2017-06-18T16:32:24.462Z | 2013-11-01T00:00:00.000 | {
"year": 2013,
"sha1": "b0603cc39d96b9ace89bfec4f7c699c015b17817",
"oa_license": null,
"oa_url": "https://research.aota.org/ajot/article-pdf/67/6/704/25469/704.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "7de3b0c2cc70e2de4a37b042e493d79bf675a05b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
23170674 | pes2o/s2orc | v3-fos-license | A Simple Composite Phenotype Scoring System for Evaluating Mouse Models of Cerebellar Ataxia
We describe a protocol for the rapid and sensitive quantification of disease severity in mouse models of cerebellar ataxia. It is derived from previously published phenotype assessments in several disease models, including spinocerebellar ataxias, Huntington's disease and spinobulbar muscular atrophy. Measures include hindlimb clasping, ledge test, gait and kyphosis. Each measure is recorded on a scale of 0-3, with a combined total of 0-12 for all four measures. The results effectively discriminate between affected and non-affected individuals, while also quantifying the temporal progression of neurodegenerative disease phenotypes. Measures may be analyzed individually or combined into a composite phenotype score for greater statistical power. The ideal combination of the four described measures will depend upon the disorder in question. We present an example of the protocol used to assess disease severity in a transgenic mouse model of spinocerebellar ataxia type 7 (SCA7). Albert R. La Spada and Gwenn A. Garden contributed to this manuscript equally.
Protocol
To prevent bias, the experimenter performing the assessments should not have knowledge of the animal's genotype. Individual measures are scored on a scale of 0-3, with 0 representing an absence of the relevant phenotype and 3 representing the most severe manifestation. Each test is performed multiple times to ensure that the score is reproducible. Obesity will complicate the interpretation of all measures described. The investigator may wish to weigh mice following phenotype scoring to assess the possible role of adiposity in the results.
Ledge test
The ledge test is a direct measure of coordination, which is impaired in cerebellar ataxias and many other neurodegenerative disorders. This measure is the most directly comparable to human signs of cerebellar ataxia.
1. Lift the mouse from the cage and place it on the cage's ledge. Mice will typically walk along the ledge and attempt to descend back into the cage. 2. Observe the mouse as it walks along the cage ledge and lowers itself into its cage. A wild-type mouse will typically walk along the ledge without losing its balance, and will lower itself back into the cage gracefully, using its paws. This is assigned a score of 0. If the mouse loses its footing while walking along the ledge, but otherwise appears coordinated, it receives a score of 1. If it does not effectively use its hind legs, or lands on its head rather than its paws when descending into the cage, it receives a score of 2. If it falls off the ledge, or nearly so, while walking or attempting to lower itself, or shakes and refuses to move at all despite encouragement, it receives a score of 3. Some mice will require a gentle nudge to encourage them to walk along the ledge or descend into the cage. 3. Record the ledge test score.
Hindlimb clasping
1. Grasp the tail near its base and lift the mouse clear of all surrounding objects. 2. Observe the hindlimb position for 10 seconds. If the hindlimbs are consistently splayed outward, away from the abdomen, it is assigned a score of 0. If one hindlimb is retracted toward the abdomen for more than 50% of the time suspended, it receives a score of 1. If both hindlimbs are partially retracted toward the abdomen for more than 50% of the time suspended, it receives a score of 2. If its hindlimbs are entirely retracted and touching the abdomen for more than 50% of the time suspended, it receives a score of 3. 3. Place the mouse back into its cage and record its hindlimb clasping score.
Kyphosis
Kyphosis is a characteristic dorsal curvature of the spine that is a common manifestation of neurodegenerative disease in mouse models [2]. It is caused by a loss of muscle tone in the spinal muscles secondary to neurodegeneration.
Representative results
With a sufficient number of animals, this procedure is capable of detecting phenotype differences between strains and within the same strain over time. For data analysis, calculate the score for each measure by taking the mean of the three measurements in each individual. Measures can be analyzed separately or combined into a composite phenotype score for each individual.
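A small sketch of the scoring arithmetic described above — per-measure means over repeated assessments, summed into a 0-12 composite — could look like this (the dictionary layout and example values are assumptions for illustration only):

```python
import numpy as np

MEASURES = ("ledge", "clasping", "gait", "kyphosis")   # each scored on a 0-3 scale

def composite_score(trial_scores):
    """trial_scores maps each measure to the repeated scores for one animal at one
    time point; returns (per-measure means, composite score in the 0-12 range)."""
    means = {m: float(np.mean(trial_scores[m])) for m in MEASURES}
    return means, sum(means.values())

# One hypothetical animal assessed three times per measure
animal = {"ledge": [2, 2, 3], "clasping": [1, 2, 2], "gait": [2, 2, 2], "kyphosis": [1, 1, 2]}
per_measure, total = composite_score(animal)
print(per_measure, round(total, 2))                     # 0 = unaffected, 12 = most severe
```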
Chart the data and apply the appropriate statistical methods to determine significance. Figure 1 shows results from a composite phenotype assessment of a murine transgenic SCA7 model. In this floxed-SCA7-92Q transgenic model, the human ataxin-7 gene with 92 CAG repeats is flanked by loxP sites and expressed from a bacterial artificial chromosome. The composite phenotype score includes the clasping, ledge walking, gait and kyphosis assessments for a maximum possible score of 12. The progressive SCA7 phenotype in the floxed-SCA7-92Q transgenic mice is demonstrated by an increasing composite phenotype score, which is consistent with the progressive nature of the human disease. Figure 1. Floxed-SCA7-92Q mice exhibit a progressive SCA7 phenotype that is significantly different from Non-Transgenic littermates beginning at 20 weeks (2-way ANOVA: Bonferroni post-hoc; ***P<0.001). Mice were assessed on a 0-3 scale each for ledge test, clasping, gait, and kyphosis. Average composite score for each genotype at each age was calculated. Bars represent SEM.
Discussion
This protocol is designed to be a sensitive and easily performed evaluation of disease severity in mouse models of cerebellar ataxia. Individual components of the scoring system will be more or less effective in different mouse models of neurodegeneration.
Elements of this scoring system have been effectively used to assess a variety of mouse models of human neurodegenerative disease, including cerebellar ataxias, Huntington's disease and spinobulbar muscular atrophy [1-3]. The ideal combination of tests will depend upon the disorder in question. This protocol was originally designed by the authors to evaluate transgenic mouse models of spinocerebellar ataxia type 7 (SCA7).
In the model of SCA7 employed to generate the data presented here, animals were sacrificed at the age of 40-43 weeks, or sooner if the behavioral abnormalities progressed to a stage at which an animal was no longer sufficiently mobile to independently maintain nutrition or hydration.
"year": 2010,
"sha1": "553a59e6daa0205b5ebcd31d4d530f7a3222ffac",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc3121238?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "553a59e6daa0205b5ebcd31d4d530f7a3222ffac",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Diagnostic accuracy for major depression in multiple sclerosis using self-report questionnaires
Objective Multiple sclerosis and major depressive disorder frequently co-occur, but depression often remains undiagnosed in this population. Self-rated depression questionnaires are a good option where clinician-based standardized diagnostics are not feasible. However, there is a paucity of data on the diagnostic accuracy of self-report measures for depression in multiple sclerosis (MS). Moreover, head-to-head comparisons of common questionnaires are largely lacking. This could be particularly relevant for high-risk patients with depressive symptoms. Here, we compare the diagnostic accuracy of the Beck Depression Inventory (BDI) and the 30-item version of the Inventory of Depressive Symptomatology Self-Rated (IDS-SR30) for major depressive disorder (MDD) against diagnosis by a structured clinical interview. Methods Patients reporting depressive symptoms completed the BDI and the IDS-SR30 and underwent diagnostic assessment (Mini International Neuropsychiatric Interview, M.I.N.I.). Receiver-Operating Characteristic analyses were performed, providing error estimates and false-positive/negative rates of suggested thresholds. Results Data from n = 31 MS patients were available. BDI and IDS-SR30 total scores were significantly correlated (r = 0.82). The IDS-SR30 total score, cognitive subscore, and BDI showed excellent to good accuracy (area under the curve (AUC) 0.86, 0.91, and 0.85, respectively). Conclusion Both the IDS-SR30 and the BDI are useful to quantify depressive symptoms, showing good sensitivity and specificity. The IDS-SR30 cognitive subscale may be useful as a screening tool and to quantify affective/cognitive depressive symptomatology.
Introduction
Multiple sclerosis (MS) is an inflammatory, demyelinating disease of the central nervous system and is regularly accompanied by psychiatric symptoms such as depression. With a lifetime risk of up to 50% and a point prevalence of up to 25%, major depressive disorder (MDD) is a frequent comorbidity of MS (Patten et al. 2003). Multiple sclerosis-associated depression has a substantial negative impact on patients' quality of life, cognition, and psychosocial functioning (Hakim et al. 2000; Sa 2008). Higher levels of depressive symptoms are also linked to poorer treatment compliance (Ivanova et al. 2012), and thus can affect long-term health outcomes. If left untreated, depressive symptoms in MS may worsen over time (Ensari et al. 2014). Despite the high clinical relevance of depression in MS, it remains frequently underdiagnosed and undertreated.
The diagnostic criteria for MDD include a number of somatic and vegetative symptoms that overlap with typical symptoms of MS (e.g., fatigue, sleep disturbance, impaired concentration), which can make accurate MDD diagnosis particularly difficult in this patient population. Therefore, valid, reliable, easy-to-use diagnostic tools taking into account the potential confounding of MS symptoms are needed. Adjustment of cutoff scores may be required to prevent false diagnoses due to somatic-symptom-related score inflation. This is particularly important in patients who might be at risk for a comorbid mood disorder, for example, patients with elevated self-reported depressive symptoms.
A wide range of self-rated questionnaires are available for quantification of depression. Some of these have been validated and used in MS patients (see Avasarala et al. 2003; Benedict et al. 2003; Moran and Mohr 2005; Mohr et al. 2007; Honarmand and Feinstein 2009; Quaranta et al. 2012). Guidelines published by the American Academy of Neurology recommended only the BDI as well as a two-question tool to screen for depressive disorders, with a weak level of evidence, and did not find sufficient evidence for other instruments (Minden et al. 2014). Importantly, only a few studies to date (Sullivan et al. 1995; Pandya et al. 2005; Honarmand and Feinstein 2009; Quaranta et al. 2012; Patten et al. 2015) have used a structured clinical interview to establish MDD diagnosis, and only the most recent ones also included Receiver-Operating characteristics (ROC) analysis, the gold standard to verify diagnostic accuracy. The Hospital Anxiety and Depression Scale (HADS) showed good diagnostic accuracy (Honarmand and Feinstein 2009); however, it only covers some of the diagnostic criteria of MDD. Moreover, it is copyrighted and may not be easily available, particularly for clinics or research groups in developing countries. A clinician-based, MS-specific depression scale (MSDRS (Quaranta et al. 2012)) also achieved good accuracy overall; however, it has relatively poor sensitivity (38%) and so far has only been used in Italian patients. Finally, a very recent paper demonstrated good accuracy of the patient health questionnaire PHQ-9, the Center for Epidemiologic Studies Depression rating scale (CES-D), and the HADS in MS (Patten et al. 2015). However, there is still a paucity of data directly comparing different self-report questionnaires head-to-head and against structured interviews. No study to date has addressed this question in German-speaking MS patients.
The 30-item self-rated Inventory of Depressive Symptomatology (IDS-SR 30 ) was developed as part of the STAR*D trial (Rush et al. 1996) and has been validated for several patient populations with physical illness so far but not for MS. In contrast to most self-rated questionnaires for depression such as the Beck Depression Inventory (BDI) or the HADS, it assesses all symptom domains for MDD as designated in the DSM-IV and is available both in a patient self-rating and a clinician-based rating form. Moreover, it has been validated in more than 30 languages and is freely available (http://www.ids-qids.org/) without licensing charges. It also offers a validated self-rated 16-item short version (QIDS-SR), and subscales providing separate scores for cognitive and somatic symptoms have been derived (Duivis et al. 2013). It might therefore be a promising tool to screen for and quantify depressive symptomatology in MS.
Aims of the Study
Here, we compare diagnostic accuracy of the BDI, the IDS, its subscales, and its short form (QIDS) in a group of German MS patients who reported elevated depressive symptoms. This sample might therefore model a clinical situation where detection of MDD is particularly important. We aim to establish meaningful threshold values based on a structured clinical interview.
Subjects
MS patients (n = 31) were recruited via the MS clinic of the University Medical Center Hamburg-Eppendorf using our patient database; written consent was obtained prior to inclusion in the study. We contacted patients by mail if the scores from their last clinical visit recorded in the database indicated elevated depressive symptoms, as measured by the Mood subscale of the Hamburg Quality of Life Questionnaire for MS (HAQUAMS) (Gold et al. 2001).
Diagnosis of major depression
Patients underwent structured diagnostic interviews by trained raters (A.F., S.L.) (The Mini International Neuropsychiatric Interview, M.I.N.I.) (Ackenheil et al. 1999). Several approaches have been proposed to implement DSM-IV diagnostic criteria in patients with physical illnesses: "aetiological" (case-by-case exclusion of somatic symptoms judged likely to be due to the comorbid medical illness), "inclusive" (use all symptoms regardless of etiology), and "substitutive" (substitution of most or all somatic symptoms with additional cognitive or affective symptoms). For the current study, we used the inclusive approach, that is, MDD diagnosis was made if a patient met at least five of the nine criteria which must include "depressed mood" or "loss of interest/anhedonia."
Self-report measures of depression
All patients completed the Beck Depression Inventory (BDI) (Hautzinger et al. 1995) and the IDS-SR 30 (Trivedi et al. 2004). Subscore calculation of the IDS-SR 30 included a somatic and a cognitive subscale as published by Duivis et al. (2013). The cognitive scale contains 10 IDS-SR 30 items, one for each of the following symptom domains: feeling sad or irritable, the quality of mood, concentration/decision making, self-perception, suicidal ideation, general interest, as well as capacity for pleasure excluding and including sexuality. The somatic subscale includes items on sleep, appetite, weight, energy level, psychomotor retardation/restlessness, and leaden paralysis/physical energy.
Statistics
Major depressive disorder diagnosis was established based on the M.I.N.I. (criterion). Receiver-Operating characteristics curves were created using MatLab and MedCalc software, giving an overview of sensitivity and specificity combinations for possible thresholds in each questionnaire. Error estimates and confidence intervals were calculated by bootstrapping using 1000 replications. Using MedCalc, the BDI, IDS-SR 30 total and somatic and cognitive subscore ROC curves were compared statistically using the method of DeLong et al. (1988) for the calculation of the Standard Error of the Area Under the Curve (AUC) and of the difference between two AUCs . This algorithm is particularly useful because it adjusts the AUCs for the expected frequency of the condition (MDD in this case) in the population of interest (in this case MS). Based on available epidemiological research (Patten et al. 2003), we estimated the MDD point prevalence in the MS population at 25%. AUC values were interpreted according to the following guidelines: 0.9-1 excellent, 0.8-0.9 good, 0.7-0.8 fair, 0.6-0.7 poor.
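The analyses above were run in MatLab and MedCalc; as a rough, non-authoritative sketch of the same ROC/AUC step with a bootstrap error estimate (1000 replications, as in the text), one could write something like the following in Python. The diagnosis and score arrays are synthetic placeholders, not the study data, and no prevalence adjustment is included.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic placeholders: 1 = MDD by M.I.N.I., 0 = no MDD; 'scores' = questionnaire totals.
y = np.array([1] * 21 + [0] * 10)
scores = np.concatenate([rng.normal(35, 8, 21), rng.normal(22, 8, 10)])

auc = roc_auc_score(y, scores)

# Bootstrap the AUC (1000 replications) for an error estimate and a 95% CI.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:   # a resample must contain both classes
        continue
    boot.append(roc_auc_score(y[idx], scores[idx]))
boot = np.array(boot)
print(f"AUC = {auc:.2f} +/- {boot.std():.2f}, "
      f"95% CI [{np.percentile(boot, 2.5):.2f}, {np.percentile(boot, 97.5):.2f}]")
```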
Cutoff values were established with the (0, 1) minimum distance method giving equal weight to sensitivity and specificity. Distributions of the thresholds as well as the false-positive and false-negative rates were determined to estimate uncertainty and control for the small sample size. Finally, BDI and IDS-SR 30 scores were correlated using Pearson correlation coefficients. All values are given as mean ± SEM. P-values of <0.05 were considered significant.
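A minimal sketch of the (0, 1) minimum-distance rule for choosing a cutoff — the threshold whose ROC point (1 − specificity, sensitivity) lies closest to the ideal corner (0, 1), weighting sensitivity and specificity equally — again on synthetic placeholder data rather than the study sample:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y = np.array([1] * 21 + [0] * 10)                                        # placeholder diagnoses
scores = np.concatenate([rng.normal(35, 8, 21), rng.normal(22, 8, 10)])  # placeholder totals

fpr, tpr, thresholds = roc_curve(y, scores)

# Distance of each ROC operating point from the ideal corner (FPR, TPR) = (0, 1);
# taking the minimum weights sensitivity and specificity equally.
dist = np.hypot(fpr, 1.0 - tpr)
best = dist.argmin()
print(f"cutoff = {thresholds[best]:.1f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1.0 - fpr[best]:.2f}")
```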
Demography
Patients were aged 22-66 years (M = 49.06 ± 1.89). About 75% of the participants were female (n = 25). Clinical and demographic characteristics can be found in Table 1.
Depression frequency and severity
Twenty-one of the 31 patients enrolled fulfilled the criteria of MDD according to M.I.N.I. interviews. As expected, most patients had also psychiatric comorbidities including other mood disorders (dysthymic disorder, n = 4; lifetime mania or hypomania, n = 5), anxiety disorders (generalized anxiety disorder, n = 11; agoraphobia with and without panic disorder, n = 5; social phobia n = 5, posttraumatic stress disorder, n = 1; OCD, n = 1), or substance abuse (n = 2).
As expected, patients with MDD scored well over usual cutoffs for clinical depression in the IDS-SR 30 as well as the BDI (Table 1). In addition, due to screening criteria for this patient group, IDS-SR 30 depression scores were also slightly elevated in the patients not meeting diagnostic criteria for MDD (Table 1). BDI and IDS-SR 30 showed a highly significant intercorrelation (r = 0.82, P < 0.0001, 95% CI 0.67-0.91).
ROC analyses
All ROC-derived sensitivity and specificity values are shown in Table 2, and ROC curves are depicted in Fig. 1. The AUC derived from the ROC for the IDS-SR 30 indicated good accuracy (AUC = 0.86 ± 0.08). A cutoff of 28 (SD (IDS-SR 30 _total) = 3.66) provides a sensitivity of 80% and specificity of 77% (Table 2). The false-positive (negative) rate for IDS-SR 30 total when using 28 as the cutoff was estimated as 19.9 ± 7.3% (23.0 ± 12.0%). This results in a positive likelihood ratio of 5.67 and a negative likelihood ratio of 0.38. Furthermore, we determined the diagnostic accuracy of the IDS-SR 30 cognitive and somatic subscales. The cognitive subscale reached excellent accuracy (AUC = 0.91 ± 0.06). For the IDS-SR 30 cognitive scale, the analysis yielded a cutoff value of 10 (sd(IDS-SR 30 _cog) = 3.15, Table 3). The false-positive (negative) rate for the cognitive IDS-SR 30 subscale cutoff was estimated as 19.30 ± 7.74% (30.69 ± 13.32%), leading to a positive likelihood ratio of 4.25 and a negative likelihood ratio of 0.25. In contrast, the IDS-SR 30 somatic scale only showed fair accuracy (AUC = 0.72 ± 0.1). The QIDS-SR had good accuracy (AUC = 0.80 ± 0.08; CI 0.669-0.997), with a suggested cutoff of 13 (sensitivity 66.67%, specificity 90.00%).
Receiver-Operating characteristics analysis for the BDI revealed good accuracy (AUC = 0.85 ± 0.07) and a cutoff value of 12 (SD (BDI) = 3.69, Table 4). This cutoff yields a sensitivity of 88% and a specificity of 70%. The false-positive (negative) rate for the BDI with this cutoff was estimated as 12.48 ± 6.72% (30.15 ± 15.03%). For the BDI, we thus determined a positive likelihood ratio of 6.00 and a negative likelihood ratio of 0.43.
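For readers unfamiliar with likelihood ratios, they are conventionally defined as LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity. A minimal helper with purely hypothetical inputs is shown below; the MedCalc-derived values reported above need not be reproducible from the rounded sensitivities and specificities quoted here.

```python
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """Conventional positive and negative likelihood ratios."""
    return sensitivity / (1.0 - specificity), (1.0 - sensitivity) / specificity

# Hypothetical example: a test with 85% sensitivity and 85% specificity.
lr_pos, lr_neg = likelihood_ratios(0.85, 0.85)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")   # roughly 5.67 and 0.18
```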
Comparison of AUC values for the IDS-SR 30 total score, IDS-SR 30 cognitive subscore, IDS-SR 30 somatic subscale and BDI yielded significant differences between IDS-SR 30 total and IDS-SR 30 somatic (P = 0.02) as well as IDS-SR 30 cognitive and IDS-SR 30 somatic (P = 0.04), while the difference between the BDI and IDS-SR 30 somatic subscore failed to reach statistical significance (P = 0.09). There were no significant differences between the IDS-SR 30 total score and the cognitive subscore (P = 0.38) as well as the BDI (P = 0.80).
Discussion
Our results indicate that two widely used patient-based instruments, the IDS-SR 30 and the BDI, yield good accuracy for depression in MS when compared to a structured clinical interview. Moreover, we provide first evidence for the validity of the IDS-SR 30 total score, IDS-SR 30 cognitive subscale, and the QIDS-SR short form for assessment of depression in MS. Several studies have previously investigated psychometric properties of self-report depression questionnaires in MS. For the most part, analyses have been restricted to measures of reliability (such as internal consistency), correlational analyses with questionnaires measuring related concepts, or response to therapy (Nyenhuis et al. 1995; Sullivan et al. 1995; Avasarala et al. 2003; Benedict et al. 2003; Moran and Mohr 2005; Mohr et al. 2007; Honarmand and Feinstein 2009; Quaranta et al. 2012; Wang and Gorenstein 2013). However, a few have assessed diagnostic accuracy against a structured clinical interview: Mohr et al. (2007) demonstrated that two questions covering the two core symptoms of MDD (anhedonia and depressed mood) yield 99% sensitivity and 87% specificity. This approach is, therefore, highly accurate as a screening tool, although a more recent study reported lower estimates of specificity and sensitivity for this instrument (Patten et al. 2015). Moreover, it does not provide a quantitative score of depression severity. The 8-item depression subscale of the HADS (Honarmand and Feinstein 2009) was previously found to provide a sensitivity of 90% and a specificity of 87% for MDD in MS (as determined by the SCID). In this study, the authors also conducted a ROC analysis, which yielded an AUC of 0.94, which can be considered excellent. A recent study explored the diagnostic accuracy of the BDI in Italian MS patients against the SCID (Quaranta et al. 2012). Here, the AUC was 0.83 (good accuracy). The results from our study confirm the good accuracy of the BDI (AUC = 0.85), although we obtained markedly better sensitivity. We also provide first evidence that a comparatively new depression questionnaire, the IDS-SR 30 , also provides good accuracy when validated against a structured clinical interview.
The very recent study by Patten and colleagues provided the first available head-to-head comparison of selfreport scales of depression in MS (Patten et al. 2015) and showed good accuracy for the CES-D, the PHQ (9 and 2), and the HADS. Since the PHQ is available free of charge, it might therefore be particularly interesting. With our study, there is now another freely available instrument (IDS) available for screening in MS depression. Moreover, our results also provide a direct comparison to the BDI, the only instrument that reached a sufficient level of evidence in the AAN guidelines.
Taken together, there are now several reliable and valid strategies for interested researchers and clinicians to screen for and quantify depression in MS, each with specific advantages and disadvantages. All scales evaluated to date (BDI, IDS-SR 30 , HADS, 2-question screen, PHQ, CES-D) show good sensitivity and specificity around 80% or higher. The QIDS-SR, however, appears to be less sensitive but more specific. As noted in the AAN guidelines (Minden et al. 2014), "valid screening tools might improve identification of individuals who could benefit from further evaluation and treatment." If this is the goal, a low false-negative rate is required. In our study, the IDS-SR 30 had a markedly better false-negative rate (23%) compared to the BDI (30%). However, this still means that 23% of cases will be missed. Clinically, a high false-positive rate is less of a concern; it does, however, increase the administrative burden and may waste resources in particular settings such as primary care. For maximum sensitivity, specificity, and cost-effectiveness, the two-question approach proposed by David Mohr and colleagues might be the ideal choice. However, it does not yield a quantitative score of depression severity, which may be necessary in a research setting or to monitor treatment response in clinical care. The HADS provides a middle ground of a comparatively short scale offering both good accuracy for MDD diagnosis as well as a quantitative score. Generally, the HADS is a good measure for symptom severity in somatic, psychiatric, and primary care patients and in the general population (Bjelland et al. 2002) and is therefore widely used. However, more recent work has revealed that it lacks consistent differentiation between symptoms of anxiety and depression (Cosco et al. 2012) and it does not cover all symptom domains of MDD.
(Table caption: Predictive value of the self-rated cognitive Inventory of Depressive Symptomatology subscale for major depressive disorder: sensitivity, specificity, and their 95% confidence intervals (CI) for potential cutoff values.)
The IDS-SR 30 , validated for the first time in MS patients in the current report, in our opinion has a number of features that make it a good option for measuring depression in MS: (1) it covers all DSM-IV criteria (and only those); (2) it offers parallel patient- and clinician-rated versions; (3) it has been translated into many languages and is increasingly used; and (4) subscales for cognitive and somatic symptoms can be constructed (Duivis et al. 2013), as we have done in our present analysis, and an algorithm for identification of DSM-assigned melancholic depression based on the items of the IDS-SR 30 is available (Khan et al. 2006). This might be particularly relevant for use in studies to explore novel biological substrates of depression in MS, as these were found to differ between data-driven designations of melancholic and atypical idiopathic depression (Lamers et al. 2013). Similar dissociations between biological correlates and clinical features might also exist in MS-associated depression, as our previous research has indicated that affective and cognitive symptoms of depression in MS might be more closely related to neuroendocrine-limbic abnormalities (Gold et al. 2010), while vegetative/somatic aspects show closer correlations with markers of inflammation (Gold et al. 2011).
First applications in an RCT of a behavioral intervention (exercise) in MS also suggest that the IDS-SR 30 may be responsive enough to detect changes in depressive symptomatology (Briken et al. 2014). Sensitivity to change remains an important issue for depression questionnaires in MS that has not yet been systematically addressed.
Some limitations have to be considered when interpreting the results from our present study. First of all, the sample size was small and all our patients were contacted because they had previously shown elevated depressive symptoms, that is, the sample was preselected for elevated levels of depression. On one hand, this sample might be a good model for clinical situations where accurate diagnosis is particularly important. On the other hand, in larger samples including many patients with very low or no depressive symptoms, diagnostic accuracy of IDS-SR 30 and BDI may be higher than reported here.
Despite finding the IDS-SR 30 somatic subscale to show only fair accuracy, the total IDS score was not found to perform significantly worse than the IDS-SR 30 cognitive subscale. This indicates that, while removal of somatic symptoms may be preferable, we found no evidence to suggest that it is strictly necessary for somatic symptoms to be removed from the IDS for diagnostic accuracy in MS. Future studies performed with a larger sample size will provide more accurate/reliable estimates of the cutoff values. However, the specific values of the threshold estimates are not the most important results arising from this study. A far more meaningful and important result is the ability to provide estimates of the false-positive/negative rates for the various scores, given a particular score threshold. For example, we estimate the false-positive (negative) rate for IDS-SR 30 _total as 19.9 ± 7.3% (23.0 ± 12.0%), noting that the provision of error estimates implicitly accounts for the small sample size. Pragmatically, these results are perhaps the most important results in the article, as they provide an estimate of the error rates that would be expected, should the particular cutoff value (in this case, 28) be used as the decision-making criterion.
Furthermore, the present study does not address the ability of the BDI or the IDS-SR 30 for differential diagnosis of MDD versus other affective disorders. In our sample, two patients with high scores on the BDI and IDS-SR 30 were found who did not meet diagnostic criteria for MDD according to the M.I.N.I. When looking at the M.I.N.I. data of these individuals, we observed that both met diagnostic criteria of dysthymia. This means that while the questionnaires have readily identified a mood disorder, they do not seem to be a means of distinguishing between MDD and dysthymia. This illustrates that distinction between different affective disorders may therefore be a particular challenge in MS that requires clinical interviews and cannot be achieved with general self-report questionnaires for depression. In conclusion, both the IDS-SR 30 and the BDI are valid measures to quantify depressive symptoms and show good diagnostic accuracy. The IDS-SR 30 cognitive subscale may be useful as a screening tool and to quantify affective/cognitive depressive symptomatology.
"year": 2015,
"sha1": "e707522e0edd021db2c1ccd462acacc097fd6261",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/brb3.365",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e707522e0edd021db2c1ccd462acacc097fd6261",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Cosmological models with asymmetric quantum bounces
In quantum cosmology, one has to select a specific wave function solution of the quantum state equations under consideration in order to obtain concrete results. The simplest choices have already been explored, in different frameworks, yielding, in many cases, quantum bounces. As there is no consensually established boundary condition proposal in quantum cosmology, we investigate the consequences of enlarging known sets of initial wave functions of the universe, in the specific framework of the Wheeler-DeWitt equation interpreted along the lines of the de Broglie-Bohm quantum theory, on the possible quantum bounce solutions which emerge from them. In particular, we show that many asymmetric quantum bounces are obtained, which may incorporate non-trivial back-reaction mechanisms, such as quantum particle production around the bounce, in the quantum background itself. Notably, the old hypothesis that our expanding universe might have arisen from quantum fluctuations of a fundamental quantum flat space-time is recovered, from a different and as yet unexplored perspective.
According to the Penrose-Hawking singularity theorems in General Relativity [1], the universe has a beginning described by a singularity in space-time, which is outside the scope of the theory and, hence, cannot be investigated. This led to the idea that, in this extreme domain, characterized by very high energy densities and curvature, General Relativity must undergo modifications, which may be due to quantum gravitational effects. Therefore, it is necessary to formulate a quantum theory of gravity to describe the domain previously held as a singularity. Quantum Mechanics, on the other hand, is understood as a fundamental theory able to describe any physical system, including the whole universe. However, the Copenhagen interpretation cannot be applied to cosmology. The reason is that, in order to solve the measurement problem, this interpretation postulates that the wave function collapses when an observer performs a measurement on the system. Thus an external classical domain is required to perform the collapse of the wave function.
There are some proposals to circumvent this conceptual problem, the most famous being the Many-Worlds interpretation [2], the spontaneous collapse approach [3], and the de Broglie-Bohm quantum theory [4,5]. We will adopt this last one, a deterministic interpretation in which real trajectories in the configuration space exist. The probabilistic character of Quantum Mechanics is due to the existence of hidden variables (initial field configurations), and arises statistically. In this theory, the collapse of the wave function is effective: the system occupies one of the branches of the wave function, and the others remain empty and incommunicable to each other. Therefore, an external observer is no longer needed, and we achieve the conceptual coherence necessary to apply this approach to cosmology.
The quantum cosmological models that arise from this approach enable the avoidance of the initial singularity, giving rise to a bounce [6,7], or even multiple bounces [8,9], which are preceded by a contraction of the scale factor and followed by an expanding phase.
In this paper, we consider generalizations of the quantum cosmological models found in Refs [6,7], which arise from the Wheeler-DeWitt quantization of the background and are symmetric around the bounce; the generalizations are obtained from enlarged prescriptions for the initial wave function. Our aim is to obtain asymmetric bounces, capable of describing non-linear back-reactions coming from particle production around the bounce, which can alter the background evolution in the expanding phase. Indeed, taking into account generalizations of the initial Gaussian wave functions considered in Refs [6,7], we were able to obtain a variety of asymmetric quantum bounce trajectories in different contexts, with quite interesting properties, as will be discussed in the sequel.
The paper is organized as follows: in the next section we present the mini-superspace model in which the de Broglie-Bohm quantization will be implemented, and the standard symmetric quantum bouncing trajectories obtained from initial Gaussian wave functions centered at the origin and without phase velocity. The only free parameter (besides the initial values of the trajectories) is the standard deviation of the Gaussian. In section III, we enlarge the set of initial wave functions by considering initial Gaussians, also centered at the origin, with phase velocity, hence adding a new parameter to the system. It is shown that unitary evolution of such initial wave functions continues to yield symmetric quantum bounces. As unitary evolution is not a mandatory requirement for mini-superspace wave functions in the de Broglie-Bohm theory, we give up unitarity, obtaining in this way asymmetric quantum bounces. In section IV, we enlarge once more the class of initial wave functions by taking superpositions with two more free parameters than the standard deviation of the Gaussian, obtaining asymmetric quantum bounces with unitarity preserved. In the Conclusion, we comment on our results and discuss future developments.
II. DE BROGLIE-BOHM QUANTIZATION OF THE MINI-SUPERSPACE FRIEDMANN MODEL
For a flat, homogeneous and isotropic universe filled with a perfect fluid with equation of state P = ωρ, where P is the pressure, ρ is the energy density and ω is the equation of state parameter, the ADM [10] and the Schutz [11] formalisms lead to the following Hamiltonian with where L p is the Planck length, V is the volume of the comoving homogeneous 3-dimensional hyper-surface, which we are supposing to be compact, a is the scale factor of the universe, T is the parameter related to the degree of freedom of the fluid, which plays the role of time, P a and P T are their respective canonically conjugated momenta, and N is the lapse function. (To arrive at Eq. (1), one performs the Legendre transformation to find the Hamiltonian density, integrates over the spatial coordinates, and implements a canonical transformation in the fluid variables; the factor 1/4 comes from the gravitational part of the action, more specifically from the relation between ȧ and the conjugate momentum P a .) We are using natural units, ℏ = c = 1, hence all canonical variables above are dimensionless, and the Hamiltonian has dimensions of energy = 1/length, as it should be. The constant L p /V will be absorbed in the definition of time later on, yielding a dimensionless cosmic time. The Friedmann equations can be readily obtained from the Hamiltonian where N is the lapse function of the ADM formalism. Applying the Dirac quantization procedure for constrained systems, where the wave function is annihilated by the constraint operator, Ĥ 0 Ψ = 0, and taking into account a particular choice of the factor ordering [12], which leads to a Schrödinger equation with a covariant Laplacian under redefinitions of a, we arrive at the following Wheeler-DeWitt equation: Performing the variable transformation given by we obtain which can be identified as a Schrödinger equation for a free particle of mass m = 2 in one dimension with the opposite sign of the time derivative term. The solutions of Eq. (6) are the wave functions of the universe. With the choice N = a 3ω for the lapse function, the parameter T relates to the dimensionless cosmic time t = (L 2 p /V )t c through dt = a 3ω dT , where t c is the usual cosmic time, with dimension of length.
Since the scale factor a and, consequently, the variable χ must assume positive values, we are dealing with a Schrödinger equation for a particle with negative kinetic energy on the half axis [13]. In order to obtain unitary solutions and, as a consequence, a consistent probabilistic interpretation, it is necessary to perform a self-adjoint extension, that is, to consider perfectly reflecting boundaries, which are given by the following condition: Note, however, that the de Broglie-Bohm quantum theory is a dynamical fundamental theory, where probabilities arise in a secondary step, as in Classical Mechanics. And indeed, a probabilistic interpretation of the wave function of the Universe may not make sense, since there is only one universe in this approach. A probabilistic interpretation is required only for subsystems in the Universe, where we can perform measurements. In this situation, one can use the so-called conditional wave functions for subsystems, in which the Wheeler-DeWitt equation reduces to a unitary Schrödinger form, and a probabilistic interpretation where the Born rule is valid can be recovered, which is called quantum equilibrium; see Ref. [15] for details. Of course, this opens the possibility that during this process violations of standard quantum mechanics might occur. Unfortunately, almost all systems in Nature have evolved to the quantum equilibrium phase, where the probability distribution is described by ρ; see Refs. [17,18] for detailed investigations of this process and possible exceptions. In conclusion, in what follows we will not require unitary evolution as a necessary feature of the mini-superspace wave function.
The key feature of the de Broglie-Bohm quantum theory is to assume that positions in configuration space (in our case a) have objective reality, independently of any observation, and satisfy the so-called guidance equation or With Eq. (10), one can interpret Eq. (8) as a continuity equation for the distribution ρ, and Eq. (9) as a generalized Hamilton-Jacobi equation supplemented by the so-called quantum potential, If one wants to recover the physical dimensions of Eqs. (8) and (9), one can easily verify that the Planck constant reappears only multiplying the quantum potential, Q → ℏ²Q. Hence Q brings the quantum effects to the dynamics. Since the total energy given by Eq. (9) also includes the quantum potential Q, the trajectory given by Eq. (10) will not be the same as the classical one, unless Q is negligible with respect to the other terms. This effect is responsible for the emergence of the quantum bounce, avoiding the standard classical initial singularity.
Let us consider an initial wave function of the universe given by which satisfies the boundary condition (7). In order to obtain a unitary evolution, we must apply the corresponding propagator to the Wheeler-DeWitt equation (6) considering the boundary condition (7). This means that we must sum two propagators of a Schrödinger equation with negative kinetic energy, one to χ 0 and another to −χ 0 . We then obtain The propagator (14) is not the most general one that satisfies the boundary condition (7). One could, for instance, change the relative sign to minus in order to obtain G(χ = 0) = 0. However, this propagator leads to a trivial solution for the propagated wave function of the universe. Thus, in practice, the propagator that results in a non-trivial solution satisfies a more restrictive boundary condition, which is given by the von Neumann condition ∂ χ G| χ=0 = 0. Superpositions of the propagators with relative signs plus and minus with a phase difference of ±π/2 are also allowed. However, the only difference in the propagated wave function is a factor that does not modify the Bohmian trajectories. Applying (14) to the initial wave function (13), we arrive at the wave function for all times which also satisfies Eq. (7). Using the phase S of the above wave function, we are able to obtain the trajectory of the parameter χ through Eq. (11). It reads where χ b is the value of χ at the bounce, which occurs at T = 0. One can re-obtain the classical solution by taking an infinitely peaked Gaussian. In order to do that, one should consider the differential equation with initial condition χ 0 = χ(T 0 ), which leads to the solution Then, by making σ 2 → 0, the classical cosmology given by χ(T ) = χ 0 T /T 0 is obtained.
In terms of the scale factor a one gets, where a b and χ b are related also through Eq. (5). Eq. (18) describes a symmetric bounce, which is plotted in figure 1. It tends to the classical solution for large values of T .
A good model for the perfect hydrodynamical fluid in the early universe, where all particles are highly relativistic, is a radiation fluid with w = 1/3, which will be considered from now on. Note that, in this case, T = η, the conformal time (remember the relation of T with cosmic time t, dt = a 3w dT ).
It is convenient to express the bounce solution in terms of cosmological quantities, which is achieved by relating the parameters of the wave function to observables. For this purpose, we will follow the same procedure developed in [22]. We first obtain the Hubble function, given by H = ȧ/a, where the dot denotes the derivative with respect to the physical cosmic time. We then take an expansion of the Hubble function squared for large times T , which reads where in the last equality we used the classical Friedmann equation, yielding (Footnote: When relating the parameters with cosmological observables, one must go back to the physical cosmic time, t c = (V /L 2 p )t. The constant V /L 2 p can be absorbed in the dimensionless variance σ, see Eq. (16), yielding a variance with dimensions of length 1/2 ; this gives the subsequent equations the correct physical dimensions.)
where Ω r0 = ρ r0 /ρ c0 is the dimensionless density parameter for radiation today. The subscript 0 in all quantities indicates their current values. The quantities ρ r0 and ρ c0 = 3H 2 0 /8πG are, respectively, the current energy density of radiation and the current critical density.
Performing the following transformation of variables we obtain In its turn, the curvature scale at the bounce is given by where R is the Ricci scalar.
To ensure that the Wheeler-DeWitt equation is a valid approximation for a more fundamental theory of quantum gravity [16], we must require that the bounce scale is larger than the Planck scale, that is L b > L p . Taking H 0 ≈ 70 km s −1 Mpc −1 , Ω r0 ≈ 10 −4 and given that L p /R H0 ≈ 1.25 × 10 −61 , where R H0 = 1/H 0 is the Hubble radius today, we obtain the upper bound for x b The lower limit can be obtained by requiring that the bounce occurs at energy scales much larger than the nucleosynthesis energy scale, i.e. T BBN = 10 MeV. Using the CMB temperature equal to T γ0 = 2.7 K in MeV, and the linear relation between the temperature and the scale factor we obtain
A. Generalized symmetric quantum bounces
Despite the simplicity of the previous symmetric bounce, it represents a fine-tuning in the theory, since the contraction phase is restricted to be the same as the expansion reversed in time. For this reason, we aim to obtain cosmological models with asymmetric trajectories for the scale factor a.
Our initial proposal to obtain asymmetric solutions was to include a factor of the form exp(ipχ) in the initial wave function, which represents a velocity for the Gaussian proposed in Eq. (13). Thus we have Note that this initial wave function does not satisfy the boundary condition (7), which means that unitarity is not satisfied at T = 0. However, implementing a convolution between this initial wave function and a propagator that satisfies condition (7), we are, in practice, dealing with the projection of Ψ 0 onto the subspace of squareintegrable functions on the χ half-line satisfying the von Neumann boundary condition. As a result, the propagated wave function that results from this convolution is going to satisfy (7). Propagating this initial wave function (28) with the propagator (14) from 0 to +∞, that is, performing a unitary evolution, we obtain the following wave function for all times: where and The wave function (29) satisfies the boundary condition (7). Thus, as mentioned before, the non-unitarity at the point T = 0 for the initial wave function (28) does not spoil the unitarity after the convolution with the propagator (14). We can see from Eq. (29) that the wave function was propagated equally to χ and to −χ. Thus terms and arguments that are linear in χ are symmetrized with respect to χ = 0 by the unitary evolution with the propagator (14).
In order to exemplify a Bohmian trajectory for the scale factor a related to a unitary wave function with factors of the form exp(ipχ), we are going to consider only the terms where which also constitutes a unitary solution of the Wheeler-DeWitt equation (6). The choice to disregard the Gauss error functions is made for the sake of simplicity.
Inserting the global phase S of the wave function (32) into Eq. (11), it is possible to obtain a differential equation for the parameter χ. It reads Using Eq. (5) in Eq. (34) and solving it numerically with initial condition a i = a(T i ), we obtain the trajectory of the scale factor a(T ), which is plotted in figure 2.
The result is a symmetric bounce, regardless of the value of the parameter p related to the asymmetry. This happens whenever the unitary evolution for factors of the form exp(ipχ) is maintained. As explained before, since these factors are linear in χ inside the exponential, they are propagated equally to χ and to −χ, resulting in a symmetrization of the propagated wave function and, as a consequence, of the trajectory of the scale factor a.
Note that different symmetric bounces can be obtained in other approaches to quantum cosmology. For instance, in Refs [24,25], a relational quantization method was implemented, where unitarity is a necessary requirement in order to obtain a consistent probabilistic interpretation, and bouncing models were also found. On the other hand, our work relies on a deterministic interpretation of quantum mechanics, where probabilities are not fundamental, allowing to explore the consequences of wave functions of the Universe which are not restricted to evolve satisfying unitarity requirements.
B. Non-unitary asymmetric quantum bounces
An alternative to this hindrance is to give up unitarity, which is allowed according to the discussion previously made. In practice, it means disregarding the boundary condition (7). The corresponding propagator is then only the first term of the propagator (14), given by where N U stands for non-unitary. Applying the propagator (35) to the initial wave function (28) without the normalization factor from −∞ to +∞, we obtain the following wave function for all times: We take the integration from −∞ to ∞ in Eq. (35) in order to avoid terms containing Gauss error functions that arise if the integration is performed from 0 to ∞. In the end we must check that the restriction χ > 0 is still satisfied. Writing Eq. (36) as Ψ(χ, T ) = R(χ, T )e iS(χ,T ) , we obtain where φ(χ, T ) is given by Eq. (33) (the first factor in the above equation does not depend on χ, hence it does not affect the calculation of the Bohmian trajectories). Then, by inserting S into Eq. (11), it is possible to obtain the trajectory in terms of χ. It reads where χ b = χ(T b ) is the value of the variable χ at the moment of the bounce T b = pσ 4 /(2χ b ), which is not equal to zero as in the symmetric case. In terms of the scale factor, the trajectory reads where a b relates to χ b through Eq. (5). The trajectory (39) is shown in figure 3 for w = 1/3, where it is evidenced that the value of the parameter p is directly related to the intensity of the asymmetry. Note that Eq. (39) does not admit a singularity or negative values for a(T ), since we always have This ensures that the restrictions χ > 0 and a > 0 are satisfied, although we have disregarded the boundary condition (7) and propagated the wave function from −∞ to ∞. A bounce solution is naturally obtained, without the need to impose restrictions to recover the positivity of the scale factor.
For p = 0 we re-obtain the symmetric bounce (18), which makes explicit the relation between the asymmetry and the factor exp(ipχ).
As in the symmetric case, the classical solution arises for large values of T. In order to obtain a slope in the contracting phase lower than the slope in the expanding phase, one has to take p < 0, or, equivalently, to change the factor from exp(ipχ) to exp(−ipχ) in the initial wave function (28) keeping p > 0. This case is particularly interesting, since the contraction phase may consist of an almost Minkowski universe. Applying the same procedure to obtain the Bohmian trajectory, we obtain a, which is plotted in figure 4. Just as we did for the symmetric case, let us express the wave function parameters in terms of cosmological quantities for the case w = 1/3. Defining the parameters one can write a = a b [±y 2 η + √(1 + y 4 ) √(1 + η 2 )], where the ± signs correspond to wave function phases exp(∓ipχ), with p ≥ 0. In the limit |η| >> 1, we get for the Hubble function, in the expanding phase, and in the contracting phase, where Ω rc is the radiation energy density when the Universe has H = H 0 in the contracting phase divided by the critical density ρ c . These equations imply that and Note that the + sign in Eq. (46) implies, from Eq. (50), that 0 ≤ p < √ Ω r0 . From Eq. (51), one can see that Ω rc ≤ Ω r0 , and in the limit p → √ Ω r0 one has Ω rc → 0. Hence, the contracting universe can be made arbitrarily flat, and the radiation fluid is created around the quantum phase, during the bounce.
In this asymmetric case, the maximum curvature does not occur at the bounce, η bounce = ∓y 2 , but at the con- . Hence, the minimum curvature scale reads Note that Eqs. (50, 52) reduce to their symmetric-case counterparts, Eqs. (23, 24), for p = 0. As in the symmetric case, we require that the bounce scale is larger than the Planck scale, that is L min > L p , and smaller than the curvature scale at nucleosynthesis. Hence, we demand Note that, in the asymmetric case, there is no direct relation between x b and L min due to the presence of p in Eq. (52). Hence, neither x b nor p has independent physical significance on its own; only their combination giving L min does. That is why, in this case, the condition must be put in terms of (53).
IV. UNITARY ASYMMETRIC QUANTUM BOUNCES
Another alternative to obtain asymmetric solutions is to perform superpositions of Gaussian wave functions multiplied by factors of the form exp[i(pχ) 2 ]. Since the term inside the exponential is not linear in χ, it is possible to generate asymmetry while maintaining unitarity. Note that the asymmetry is achieved only when we perform superpositions. A single Gaussian in this format would lead to a symmetric bounce.
Considering the following superposition for the initial wave function where and applying the unitary propagator (14), we obtain a wave function for all times given by Note that both Eq. (54) and Eq. (56) satisfy the boundary condition (7). Thus this case is unitary for all times. Defining and writing Eq. (56) as Ψ(χ, T ) = R(χ, T )e iS(χ,T ) , we can insert the phase S into Eq. (11) to obtain the differential equation for the parameter χ, given by (59) For p 1 = 0 and p 2 = 0, i.e. γ 1 = γ 2 = 1/T and β 1 = β 2 = 1/T 2 + 1/σ 4 , we obtain which can be solved analytically and results in the trajectory (16) obtained before for the symmetric case. Solving Eq. (59) numerically with initial condition a i = a(T i ), we obtain the trajectory for the parameter χ and then, using Eq. (5), for the scale factor a. The result is plotted in figure 5. Note that symmetric bounces are also obtained if p 1 = p 2 .
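Equations (56)-(59) are not reproduced in this extraction, so the sketch below only illustrates the numerical procedure described in the text — integrate a guidance equation dχ/dT = f(χ, T) from the initial condition and then map χ to the scale factor — using scipy. The right-hand side is a placeholder chosen merely to produce a bounce-like trajectory, and the power-law map standing in for Eq. (5) is an assumption; neither is the actual expression from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def guidance_rhs(T, chi):
    # Placeholder for the right-hand side of the guidance equation (e.g. Eq. (59));
    # this particular choice merely produces a bounce-like trajectory for illustration.
    sigma = 1.0
    return chi * T / (T**2 + sigma**4)

def scale_factor(chi, w=1/3):
    # Hypothetical power-law map between chi and the scale factor a, standing in for Eq. (5).
    return chi**(2.0 / (3.0 * (1.0 - w)))

T_i, chi_i = 5.0, 2.0                      # initial condition a_i = a(T_i)
T_grid = np.linspace(T_i, -5.0, 401)       # integrate backwards through the bounce
sol = solve_ivp(guidance_rhs, (T_i, -5.0), [chi_i], t_eval=T_grid, rtol=1e-8)
a = scale_factor(sol.y[0])
print(f"a_min = {a.min():.3f} at T = {T_grid[a.argmin()]:.2f}")
```

In the same spirit, asymmetric or multiple bounces would simply correspond to different right-hand sides; the integration and mapping steps are unchanged.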
The numerical solution of Eq. (59) also encompasses multiple bounces for certain values of the parameters σ, p 1 and p 2 and of the initial values a i and T i . See figure 6.
As we did for the other bounce solutions, we express the wave function parameters in terms of cosmological quantities. Expanding the square of the corresponding Hubble function for large times T , we obtain Identifying the dimensionless density parameter for radiation today, Ω r0 = ρ r0 /ρ c0 , as the coefficient of (a 0 /a) 4 , we obtain .
In order to rewrite Eq. (62) in terms of a b and T b , we expand Eq. (59) for T /σ 2 ≪ 1 to the first order and for p 1 σ ≪ 1 and p 2 σ ≪ 1 to the second order. Under these conditions, i.e. near the bounce and with small parameters related to asymmetry, we obtain a solution with a single bounce, where it is possible to relate T b , p 1 and p 2 by making da/dT = 0. Disregarding also terms containing p 2 1 p 2 2 , we obtain Performing the following transformation of variables we obtain σ 2 = 2 Note that Eqs. (61, 62, 68) reduce to their correspondents in the symmetric case Eqs. (19,20,23) for p 1 = p 2 = 0, which implies T i = T b = 0.
For this particular case, i.e. T /σ 2 ≪ 1 to first order and for p 1 σ ≪ 1 and p 2 σ ≪ 1 to second order, the curvature scale at the bounce L b assumes the same form of the symmetric case given by Eq. (24), but with σ 2 given by (68).
We now go back to the general case given by Eq. (59) and verify for which values of the parameters the bounce scale is larger than the Planck scale and smaller than the nucleosynthesis scale. We find L b numerically for some non-multiple asymmetric bounces, and we obtain the corresponding bounce energy E b = 1/L b . Since L p ≈ 5 × 10 −44 s, we see that L b >> L p for all bounces considered. As mentioned before, this means that the validity of the Wheeler-DeWitt equation as an approximation to a more fundamental theory of gravity is well established. Beyond that, the bounce must occur at energy scales much larger than the nucleosynthesis scale, i.e. 10 MeV, which is not achieved in all cases considered. Indeed, as one can see from table I, the energy scales of such bounces are not much bigger than the nucleosynthesis energy scale, but they are many orders of magnitude smaller than the Planck energy scale. Hence, the physically relevant consistency check of such bouncing models is the upper limit of L b , not its lower limit, which makes the distinction between L b and L min irrelevant.
The cases p 1 σ 10.9, p 2 σ = 1.0 and p 1 σ = 1.0, p 2 σ 5.8 represent multiple bounces. Multiple bounces are also encountered in quantum reduced loop cosmology, in a scenario called emergent bounce [26]. It describes a series of bounces with successive increasing amplitudes. In our work, the multiple bounces do not necessarily present this behaviour. The solutions we found also allow for more than one bounce, but with similar amplitudes, before being launched to the expanding phase.
V. CONCLUSION
We have obtained generalizations of the quantum bounce solutions of Refs [6,7] which are asymmetric with respect to the bounce, and which may even possess multiple bounces. These solutions may be used to take into account significant back-reaction due to quantum particle production around the bounce; see Refs. [22,23]. As an example, in future work we will investigate baryogenesis in those asymmetric bounces.
One particular class of interesting solutions is the one exhibited in figure 4. It describes expanding cosmological solutions arising from an almost flat space-time. As discussed in Section III, the energy density at contraction can be made arbitrarily small, depending on the new quantum parameter p, related to the phase velocity of the initial wave function of the universe. The emerging picture is of an arbitrarily flat and almost empty space-time, which is launched through a bounce into the standard Friedmann expanding phase, containing the usual hot and dense radiation field. This fact opens new windows on an old speculation, that our Universe arose from quantum fluctuations of a fundamental quantum vacuum. The de Broglie-Bohm theory allows a different perspective on this hypothesis and the concrete possibility of extending this particular mini-superspace model by incorporating quantum cosmological perturbations to the system and quantitatively studying their observational effects. This is also a subject for future work.
"year": 2020,
"sha1": "2b35e9f41cab6076ba06e7bdf420d33cc69419f7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2003.04928",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2b35e9f41cab6076ba06e7bdf420d33cc69419f7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Cell-free DNA blood test for the diagnosis of pediatric tuberculous meningitis
Pediatric tuberculous meningitis (TBM) is a severe form of tuberculosis that may present in children. The current diagnostic methods may have a limited impact on initial clinical decision-making. We present three children with tuberculous meningitis who had Mycobacterium tuberculosis complex cell-free DNA (cfDNA) detected in their blood within three days of sampling. Our cases described here illustrate for the first time the potential role of cfDNA blood tests in the rapid diagnosis of TBM.
Background
Tuberculous meningitis (TBM) is a rare and severe form of extrapulmonary tuberculosis caused by the Mycobacterium tuberculosis complex (MTBC). In the United States (US), 64 TBM cases were identified in 2020, which accounted for 4 % of the 1513 extrapulmonary tuberculosis cases reported [1]. Children younger than 2 years are at particularly high risk of disseminated tuberculosis, including TBM. Early diagnosis of pediatric TBM is important due to a high mortality rate of 19.3 % and the probability of survival without neurological sequelae of 36.7 % [2]. Although culture remains the gold standard for diagnosis, there are multiple barriers that impair its effectiveness, including the low sensitivity of acid-fast bacilli (AFB) culture isolation, the risk of failing to collect samples, and results that may take over 6 weeks, which lead to a limited impact of culture on the initial clinical decision-making for these children [3,4]. Therefore, the diagnosis and treatment decisions for pediatric TBM are often based on a combination of epidemiological history, clinical and cerebrospinal fluid (CSF) findings, and imaging studies. Novel diagnostic approaches that offer increased sensitivity and faster time to results are desired. We report on three pediatric cases of tuberculous meningitis that were promptly diagnosed using cell-free DNA (cfDNA) (Karius Test®) blood tests. To the best of our knowledge, this is the first case series of Mycobacterium tuberculosis complex cfDNA detection in blood in pediatric tuberculous meningitis.
Case 1
A previously healthy 2-year-old male presented with a 3-week history of daily fever and lethargy. He was born in the US and recently traveled with his parents to Mexico for family visits. Two weeks after symptom onset, the child had progressive weakness and emesis, followed by left-eye ptosis and refusal to walk prior to being transferred to our hospital. Upon arrival, the child underwent a lumbar puncture and brain magnetic resonance imaging (MRI). CSF analysis was significant for 61 white blood cells/uL with 83% lymphocytes, a protein level of 123 mg/dL, and a glucose level of 40 mg/dL. Brain MRI revealed multiple restricted focal areas in the left basal ganglia and mild ventriculomegaly. Diagnostic laboratory results (Table 1) included negative T-SPOT.TB® and negative CSF MTB PCR, but a positive cfDNA blood test for Mycobacterium tuberculosis complex and cytomegalovirus two days after sampling and a positive CSF mycobacterial culture for MTBC at four weeks.
Case 2
A previously healthy 14-month-old female presented with a 3-week history of daily fever, progressive lethargy, and recurrent seizures. She was born in Mexico and immigrated to the US with her parents. The child's brother and maternal grandmother, who visited her a few months prior, had recently tested positive for the tuberculin skin test (TST), indicating tuberculosis infection, but they had not received treatment. Upon transfer from the referring hospital, the child developed right-eye ptosis, and she required intubation and urgent extraventricular drainage at the bedside due to a deteriorating neurological status. The child's first TST was 0 mm, but a repeated TST showed a positive result with 10 mm of induration. CSF analysis demonstrated 76 white blood cells/uL with 44% lymphocytes, a protein concentration of 64 mg/dL, and a glucose level of 46 mg/dL. Brain MRI revealed a lacunar infarct in the right cerebral hemisphere, specifically in the right thalamus and frontal lobe. Empiric treatment for tuberculous meningitis was initiated while awaiting the results of the laboratory workup. Workup was significant for borderline T-SPOT.TB®, negative CSF MTB PCR and negative CSF mycobacterial culture, but a positive cfDNA blood test for Mycobacterium tuberculosis complex two days after sampling. In this case, the diagnosis of TBM was based on clinical, brain MRI, and CSF analysis findings without definitive microbiological confirmation.
Case 3
A previously healthy 4-year-old female presented with two weeks of fever, generalized seizures, and a disconjugate gaze. Prior to transfer to our hospital, she required intubation due to her deteriorating neurological status. The child had been exposed to active tuberculosis by her father, who had recently died from complications of the disease. Brain MRI showed mild leptomeningeal enhancement in the midbrain and multiple punctate areas of restricted diffusion in the bilateral basal ganglia, inferior frontal lobes, bilateral hippocampal formations, midbrain, and cerebellar vermis. CSF analysis demonstrated 51 white blood cells/uL with 52% lymphocytes, a protein level of 206 mg/dL, and a glucose level of 39 mg/dL. Further laboratory workup yielded positive results for T-SPOT.TB® and the cfDNA blood test for Mycobacterium tuberculosis complex. Her gastric aspirate AFB culture was positive for MTBC at six weeks.
Discussion
Acid-fast bacilli staining, mycobacterial cultures, and conventional nucleic acid amplification tests (NAAT) of body fluids, including sputum, gastric aspirates, and CSF, are commonly utilized in the diagnosis of tuberculosis. However, it is often difficult to obtain sufficient respiratory tract specimens from nonintubated infants and children. In children with TBM, CSF mycobacterial culture is considered the gold standard for diagnosis, but specimen sampling can be delayed if lumbar puncture is contraindicated due to hemodynamic instability, elevated intracranial pressure, coagulopathy, or cardiovascular compromise. Furthermore, acid-fast bacilli staining yields a sensitivity of only 10%-20% for tuberculous meningitis, and MTBC can be cultured from CSF in only 32% of cases [4]. Therefore, a rapid diagnostic method with high sensitivity for MTBC from readily available specimens, such as blood, is desired. Within 3 days of sampling, we confirmed the presence of the Mycobacterium tuberculosis complex using blood cfDNA via the Karius Test® in three cases of tuberculous meningitis in children. The consistent positive results among our three patients, with a shorter turnaround time from ordering to result compared with existing tests (approximately three days for T-SPOT.TB®, with variable results), show the promise of a potential new diagnostic tool for TBM in children.
Multiple studies, conducted mostly in adults, have demonstrated the utility of detecting blood cfDNA for the identification of pulmonary tuberculosis. The sensitivities and specificities are highly variable, ranging from 29% to 80% and 67% to 100%, respectively [5][6][7]. A single retrospective study on pediatric pulmonary tuberculosis showed that cfDNA blood tests have a sensitivity of 75% for detecting MTBC cfDNA in pediatric patients with smear-positive, culture-confirmed pulmonary tuberculosis [8]. However, no current studies specifically test the validity of using cfDNA blood tests to detect MTBC in extrapulmonary infections, such as TBM. In our case series, cfDNA blood tests detected MTBC cfDNA, assisting in the diagnosis of TBM in cases confirmed by positive gastric or CSF mycobacterial culture, or by strong clinical suspicion.
For each of our patients, the medical director from Karius informed our hospital's pediatric infectious disease specialist that Mycobacterium tuberculosis complex DNA was present in the cfDNA blood test but was found at levels under their statistical threshold. The initial reports were updated to include the relevant findings. In a separate study involving children with tuberculosis, a relaxed research-use-only (RUO) statistical threshold was applied, allowing for the reporting of Mycobacterium tuberculosis if cfDNA derived from this microorganism was identified in three or more unique sequencing reads. Researchers found that when using the RUO threshold, detection was 75% sensitive in smear-positive children, compared with 50% when using the Karius standard threshold [8]. Further optimization of the statistical threshold is necessary to improve the sensitivity of Mycobacterium tuberculosis complex cfDNA detection in the blood.
In clinical practice, there are technological challenges associated with the use of MTBC cfDNA as a potential biomarker for active tuberculosis. A primary concern is the uncertainty regarding whether the presence of MTBC-specific cfDNA in the blood is indicative of active tuberculosis or latent TB infection (LTBI). One prospective study reported that 2 cases of LTBI tested positive for MTBC cfDNA, and levels declined following anti-TB therapy. These observations suggest that plasma MTBC cfDNA may also serve as a microbiological indicator in LTBI cases [9]. Another challenge is the differing sample preparation requirements for cfDNA blood tests, which present barriers to integration into existing platforms commonly used in resource-limited regions. Compared with NAAT using PCR, such as Xpert MTB/RIF and Xpert MTB/RIF Ultra, the more costly techniques required for cfDNA blood tests, such as sequencing, further limit their applicability in low-resource settings [10].
In conclusion, the cfDNA blood test is a diagnostic tool that significantly shortens the time to diagnosis in suspected cases of pediatric TBM. Our findings warrant further investigation into the validity and sensitivity of the cfDNA blood test for the rapid detection of the Mycobacterium tuberculosis complex, aiming to improve the diagnosis and treatment of patients with tuberculous meningitis.
Table 1
Initial laboratory diagnostic work-up and follow-up cultures. | 2024-02-27T17:32:39.535Z | 2024-02-21T00:00:00.000 | {
"year": 2024,
"sha1": "2565bf018827c13067af47a4e5dba9205c73decb",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jctube.2024.100421",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1e79ab28a1acb3c89fa4d1790882e8226c97ae49",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244656176 | pes2o/s2orc | v3-fos-license | Reply on RC2
This said, I would also encourage the authors to think a bit more broadly about the context in which they are laying out these suggestions. Specifically, with the rapidly evolving soil C sequestration landscape and an infusion of private interests into the soil C world (e.g. https://seqana.com/, IndigoAg, etc.), how should academics, industry, NGOs, and government agencies maintain data access and communication in what's potentially a more crowded, active (and presumably better funded) field? I appreciate this touches on ideas that are broader and somewhat more existential than the soils data challenge the paper more narrowly addresses, but it seems relevant to contextualize the broader landscape of who harmonizes soils data and why, beyond how it can be done better.
The uses of soil databases for research context are varied (for example Earth system model benchmarking Collier et al 2018) but there are other private economic impacts of having soil data available. Soil health metrics in public databases could impact land evaluation and there is increasing interest in soil carbon data from carbon markets for offsetting CO2 emissions. As mentioned in the geolocation section, specific information on the nutrient and water retention of a soil can make it more or less valuable, making landowners reluctant to release data. More recently, an increasing interest in generating carbon offsets by increasing soil sequestration has led to a proliferation of new venture corporations that either generate new or use available soil data in order to define land management practices to increase soil C stocks (e.g. IndigoAg, CIBO Technologies, Seqana, Regrow, Nori, LoamBio). Industry companies generally treat data that they collect or process as part of their intellectual property, which is kept private. While there is clearly scientific value in these data, it's unclear how researchers, landholders, and private companies will negotiate the use and integration of these data into research outputs. Nonetheless, privately held data would also benefit from connecting with community developed standards.
My remaining comments are relatively minor, and largely intended to clarify aspects of the text.
I'm not sure I agree with the statement in Lines 43-45. Modeled soil properties (here I'm thinking of hydraulic and thermal properties) rely on pedotransfer functions that use input data of soil physical characteristics (texture and organic matter content). None of these 'soil properties' are used for benchmarking or evaluation, making me wonder what the growing need for more data is really for - especially if ILAMB already uses information on soil C stocks and inferred turnover times?
We can see how this was confusing; this was meant to refer to carbon and nutrient stocks, but on review this section is unclear. We are removing the ILAMB sections (ln 41-45) and replacing this with the following: "A number of databases have been compiled in soils data around specific themes or measurement types, including soil carbon and nitrogen (see Table XX for a complete list with database creation strategies)."

Moreover, data products like SoilGrids already exist, which seem to have a wealth of data that can be used as inputs for or evaluation of Earth system models. Are you suggesting new efforts should go into recreating or augmenting the data processing wheel that informs ISRIC data products (SoilGrids and the Harmonized World Soils Database)? I don't get the sense this is what the authors are envisioning. I also appreciate that "This is just one of many potential uses for harmonized soil data", but I do worry that as written the authors are implying that the harmonized datasets we do have somehow do not reflect FAIR principles.
We contend that soil data products (like SoilGrids and HWSD) are not the same as an aggregated soil database and that a soil database is necessary to generate these data products but has other use cases as well. We address this, and related comments from R1, beginning on line 45.
Suggested text:
Soil resources curated by ISRIC (https://www.isric.org/) provide another example of how soil data feed into larger products. After archival on ISRIC servers, datasets from individual providers are incorporated into the World Soil Information Service workflow (WoSIS; https://www.isric.org/explore/wosis). The WoSIS workflow includes mapping diverse data contributions to a standard data model, harmonization, and distribution. Distribution includes a database, as defined in this paper (the WoSIS Soil Profile Database; https://www.isric.org/explore/wosis/faq-wosis#How_should_the_WoSIS_datasets_be_cited?), as well as derived data products, such as SoilGrids (Hengl T, de Jesus JM, MacMillan RA, Batjes NH, Heuvelink GBM, et al.).

Agreed. We will integrate this language on ln 73-74 into the figure and headings.
Line 79-84, I appreciate the challenge you're trying to articulate - but it kind of seems like you're suggesting reviewers or journals need better evaluation of data publishing standards. I wonder if this is really where the responsibility should lie, specifically because I don't think, as a community, we're well trained in best data management practices.
Good point. We did not intend for the responsibility to lie with peer-reviewed journals; rather, we diverted the focus to one that highlights the challenge of who would be responsible, so it is more of an open question. We will add the following to ln 81: "... "high standard" are and who is responsible for ensuring these standards are met. To complicate matters, key…"

I think given better information, data providers would happily provide more useful datasets to repositories, but don't know how. Maybe this is what's implied in line 83 with data providers who 'become frustrated'? I realize you're trying to be brief here - and maybe a solution is articulated in Section 3 - but I do worry that the takeaway message from this paragraph is 'currently archived data are incomplete and therefore useless, and we're not really going to tell you how to make them better'.
Good point. We propose extending this paragraph and adding to ln 84: "This is not to say that archiving data for the purpose of meeting funder requirements or reproducing the associated analysis cannot be useful in and of itself. However, this does not automatically lend the data to integration in a database."
Line 87, what's a harmonized template?
We agree this is unclear - we will reframe it as an 'aggregator-provided template'.
Line 99: What are TRUST and CARE? If an aim of this manuscript is to broadly educate soil-minded scientists on best data practices, the defining features of these practices should be briefly articulated (not just referenced).
We propose adding to line 97: In general, however, we feel that direct collaboration between data providers and data aggregators is a critical relationship to nurture. Other critical relationships for good data governance have been articulated by recent extensions of the FAIR Principles, including TRUST and CARE. The TRUST Principles (Transparency; Responsibility; User focus; Sustainability; Technology) articulate key features for trustworthy digital repositories, which are essential for preserving data access and reuse over time (Lin et al., 2020). The CARE Principles for Indigenous Data Governance (Collective benefit; Authority to control; Responsibility; Ethics) position decisions related to data management and reuse in the context of Indigenous cultures and knowledge systems, highlighting actions that ultimately support Indigenous data sovereignty (Carroll et al., 2020). As the community continues to converge on shared tenets of good data governance, it is becoming increasingly clear that "just put it in a repository" is only the beginning.

Capturing these differences in table form was challenging, which is why we went with the narrative structure; however, we will create a table capturing some of these database strategies and add it as a new table.
Line 105. These different transcription / translation methods are nicely described in the text, with examples in Appendix A. Would a table help emphasize similarities and differences of the databases listed in Appendix A?
Finally, both ISRAD and SODAH were organized with the nested hierarchy established with ISCN. Should this be mentioned? Should ISCN be highlighted in the text (a number of coauthors have contributed to this effort)? This hierarchical organization of the data is implied, but maybe not explicitly established in the metadata and data models we are or should be using.
Good point. We will add the ISCN connection to these two project descriptions.
Section 2.2, It seems like scripted transcription requires clear dictionaries, vocabulary and metadata to be successful, but based on text in 2.1 this is not common, OR is this just happening in keyed translation?
Both manual transcription and scripted methods require clear metadata descriptions that are formalized in different ways. We'll add this point on line 129: While this approach has the most explicit need for clear semantic resources, these are also essential for creating effective manual transcription templates and protocols.
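To make the contrast concrete, a minimal sketch of what a scripted transcription step might look like is given below; the column names, target schema, and unit conversion are hypothetical and stand in for a real aggregator-provided data dictionary rather than any specific database's vocabulary.

```python
import pandas as pd

# Hypothetical mapping from one provider's column names to a shared data model.
COLUMN_MAP = {
    "SOC_pct": "soil_organic_carbon",
    "depth_cm_top": "layer_top",
    "depth_cm_bot": "layer_bottom",
}
# Hypothetical unit harmonization: percent organic carbon -> g C per kg soil.
UNIT_CONVERSIONS = {"soil_organic_carbon": lambda series: series * 10.0}

def transcribe(provider_csv: str) -> pd.DataFrame:
    """Map a provider table onto the standard schema and harmonize units."""
    raw = pd.read_csv(provider_csv)
    out = raw.rename(columns=COLUMN_MAP)[list(COLUMN_MAP.values())]
    for column, convert in UNIT_CONVERSIONS.items():
        out[column] = convert(out[column])
    return out
```

The point of the sketch is that the mapping and conversion tables play the role of the semantic resources discussed above: without them, neither the script nor a manual transcription template can be written unambiguously.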
Section 2.3 is pretty brief. Would additional examples be helpful here to illustrate how different efforts have gap-filled or pruned their data? How do these databases expand, which seems an important aspect of curation (although discussed in 2.4 for COSORE)?
With respect, these strategies are extremely diverse and beyond the scope of this paper, see lines 148-149.
Line 275, I may add something aboveground to this list (as vegetation, land use, productivity and climate are also important for belowground measurements, but rarely colocated with belowground measurements being collected).
We will extend section 2.3 to talk about annotation of soil observations with aboveground data (i.e., ISRaD annotating mean annual temperature and precipitation). Specifically, on ln 147: These activities include expanding the environmental context for a particular soil; for example, extracting net primary productivity and land use classification from satellite products.

Soils are not unique and many of these are broad challenges in environmental data. Specifically, we propose adding on ln 60: "The approach and issues outlined in this paper are undoubtedly not unique to soils and are relevant to a wide range of scientific data, particularly environmental data. However, we present this as a case study of soil-specific database construction."

I'm 100% behind the suggestions and vision the authors laid out, but I do wonder a bit about to what end? What are the pressing questions that a massive new soils database will let us address? Given the diversity of soil uses, measurements, and communities, is a database of databases really what we need? Or is the soil science community well enough served by individual collections of data that are focused on topical areas like radiocarbon, respiration fluxes, spectral databases, or bulk C stocks? I realize this isn't your grant proposal - but presumably it's heading that way. The text clearly delineates data providers and data aggregators, but who are the data users that will ultimately do something with these datasets once they're wrangled into something more useful?
Section 3.2 (or in the introduction
You are correct, of course, that this paper focuses on data aggregators as a class rather than data users; we chose to do this because the user community is exceptionally diverse, but data aggregation is a common activity across this group. Respectfully, we chose to focus this paper on the how instead of the why. | 2021-11-26T16:31:24.723Z | 2021-11-24T00:00:00.000 | {
"year": 2021,
"sha1": "b46807405ce40d992de1a43112c5b5216a9d123d",
"oa_license": "CCBY",
"oa_url": "https://nhess.copernicus.org/preprints/nhess-2021-251/nhess-2021-251.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c724e47a425f6437487eae4c65f5e02afeee2394",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
268146233 | pes2o/s2orc | v3-fos-license | Heavy metal contamination in duck eggs from a mercury mining area, southwestern China
Objective Mercury (Hg) contamination in the environment around mercury mines is often accompanied by heavy metal contamination. Methods Here, we determined concentrations of chromium (Cr), zinc (Zn), strontium (Sr), barium (Ba), and lead (Pb) in duck eggs from a Hg mining area in Southwest China to assess the contamination and health risk. Results Duck eggs obtained from the mining area exhibit higher concentrations of Cr, Zn, Sr, Ba, and Pb compared to those from the background area, with egg yolks containing higher metal levels than egg whites. Specifically, the mean Cr, Zn, Sr, Ba, and Pb concentrations of duck eggs from the Hg mining area are 0.38, 63.06, 4.86, 10.08, and 0.05 μg/g, respectively, while those from the background area are only 0.21, 24.65, 1.43, 1.05, and 0.01 μg/g. Based on the single-factor contamination index and health risk assessment, heavy metal contamination in duck eggs poses an ecological risk and health risk. Conclusion This study provides important insight into heavy metal contamination in duck eggs from Hg mining areas.
Introduction
Heavy metal pollution can pose serious harm to wildlife and human beings. Mercury (Hg), a globally transported heavy metal with strong bioaccumulation, is found at excessive levels in the atmosphere, water, soil, vegetables, and rice in Hg mining areas (1). Hg pollution is often accompanied by other heavy metal contamination in Hg mining areas (2). Soil, water, and crops in Hg mining areas are contaminated with heavy metals to varying degrees (3)(4)(5). For instance, cadmium (Cd) and arsenic (As) levels in the soil exceed the standard limits in the Wuchuan mining area, SW China (2). A previous study revealed high heavy metal (including Hg, As, Cd and Se) levels in eight types of vegetables in mining regions (6). Excessive ingestion of heavy metals can be toxic (7,8). Cr has mutagenic, teratogenic, and carcinogenic properties (9). Pb and Hg are known to have neurotoxic effects, particularly harmful to the neurological development of children (10). Excessive amounts of Zn, Sr, and Ba have been found to induce genotoxic effects in cells (11). Given the potential harm heavy metals may pose to humans, heavy metal pollution in mining areas cannot be overlooked (6). Consumption of poultry products could be an important route of heavy metal exposure for humans. Poultry normally take up heavy metals from various sources (e.g., feed, water), among which feed is the main source, and females can transfer heavy metals to their eggs (12). Normally, farmed poultry eat a fixed recipe of feed, while free-range poultry mainly ingest local crops (12). However, most studies on health risk assessments of heavy metals focus on commercial (13), selenium-enriched (14), and free-range chicken eggs (15), with limited research on duck eggs. It is worth noting that China is the largest producer and consumer of duck eggs in the world, with an output of ~4 million tons annually (16,17). A meta-analysis indicates that duck eggs contain higher levels of potentially toxic elements compared to chicken eggs (18). Given the high Hg level in poultry eggs from Wuchuan compared to other areas, and the fact that the total Hg concentration in duck eggs exceeds that of chicken eggs (19), we hypothesize that concentrations of other heavy metals could also be high in local poultry eggs (e.g., duck eggs). However, heavy metal (such as Zn, Cr, Sr, Ba and Pb) levels in local duck eggs and their potential harm to consumers are unclear so far. Therefore, it is crucial to assess the potential health risks associated with the consumption of duck eggs contaminated with heavy metals.
Here, we investigated heavy metal concentrations in duck eggs from Wuchuan (a Hg mining area) to understand heavy metal levels in duck eggs and their potential health risks in the Hg mining area. This study could provide insights into current heavy metal levels in Wuchuan duck eggs and assess the potential human health risks of consuming duck eggs from the Hg mining area.
Study area
Wuchuan is located in Guizhou Province, southwestern China (Figure 1). The Wuchuan Hg mine is one of the largest Hg mines in Guizhou Province, with a Hg production history of about 400 years (20,21). Despite the cessation of mining for 20 years, local environmental Hg levels (in soil, water, and rice) are still evidently higher than the standard limits (6). Hg concentrations are 1.3 ~ 360 mg/kg in local topsoil, 13 ~ 2,100 ng/L in water, and 6.0 ~ 113 ng/g in rice (22,23). Different from Wuchuan, Hg and other heavy metals are within safe thresholds in Anshun (24,25). In Anshun, the Hg concentration in rice and other crops is about 0.01 mg/kg, significantly below the limit set by the "National Food Safety Standard for Contaminants in Foods" (24). Additionally, the level of heavy metal contamination in the soil is low (25).
We collected ten duck eggs each from Laohugou, Wuchuan County, Zunyi (28°60′ N, 108°01′ E) and Chuanshi Village, Yangchang Town, Anshun (26°35′ N, 106°32′ E). As the ducks raised by residents around the mining area have similar feeding methods and feed, random sampling of purchased duck eggs was conducted. The eggs were from free-range ducks raised by local households. Sampling locations, quantities, and times are shown in Table 1 (26). The collected duck eggs were brought to the laboratory within 24 h and stored at 4°C.
Pretreatment of samples
The duck eggs were washed with 18.2 MΩ water, followed by separation of egg yolk and egg white, which were then freeze-dried and mixed. Each sample (0.500 g) was digested with 5 mL of nitric acid (ultrapure) and 1 mL of hydrogen peroxide at 160°C for 8 h. After cooling to room temperature, the inner chamber of the digestion tank was removed, the inner lid was rinsed with a small amount of ultrapure water, and the inner chamber was placed on a hot plate at 90°C. The digest was then made up to a fixed volume of 10 mL with 2% nitric acid and stored at 4°C before measurement.
Determination of metal concentrations and quality control
The heavy metals Cr, Zn, Sr, Ba, and Pb were determined by ICP-MS (Thermo Scientific iCAP RQ) at Guizhou University. Quality control was performed with standard reference materials (VAR-CAL-2 for trace elements; CLMS-1 for rare earth elements). The relative standard deviations of the metals were all below 10%, and the recoveries were 80% ~ 110%.
Evaluation of heavy metal pollution of duck eggs
Heavy metal contamination in duck eggs is evaluated using the single-factor pollution index, P_i = C_i / S_i, where P_i is the single-factor pollution index, C_i is the concentration of a metal in duck eggs (μg/g, DW), and S_i denotes the evaluation standard value of that heavy metal in duck eggs (μg/g). For Pb, the standard value follows the Chinese national standard for food safety (GB 2762-2022). Given the lack of standard limits for the other metals in duck eggs, the corresponding metal levels in the background area are used as their standard limits in this study. The grading standards are shown in Supplementary Table S1 (27).
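A minimal sketch of this calculation is shown below; the numerical inputs and the grading cut-offs are illustrative placeholders, since the actual thresholds used in the study are those listed in Supplementary Table S1.

```python
def pollution_index(concentration_ug_g: float, standard_ug_g: float) -> float:
    """Single-factor pollution index P_i = C_i / S_i."""
    return concentration_ug_g / standard_ug_g

def grade(p_i: float) -> str:
    """Illustrative grading bands; the study's own cut-offs are in Supplementary Table S1."""
    if p_i <= 1.0:
        return "unpolluted"
    if p_i <= 2.0:
        return "lightly polluted"
    if p_i <= 3.0:
        return "moderately polluted"
    return "heavily polluted"

# Hypothetical example: a measured concentration of 0.08 ug/g against a standard of 0.01 ug/g.
print(grade(pollution_index(0.08, 0.01)))
```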
Health risk assessment
The US Environmental Protection Agency (USEPA) health risk assessment model was used to assess noncarcinogenic and carcinogenic risks based on exposure parameters for the Chinese population (28). The noncarcinogenic risks for Cr, Zn, Sr, Ba, and Pb and the carcinogenic risks for Cr and Pb are calculated according to the International Agency for Research on Cancer (IARC, 2020) carcinogen classification.
The chronic daily intake of a heavy metal is calculated as EDI = (C_i × IR × EF × ED) / (AT × BW), where EDI is the estimated daily intake of the heavy metal (mg/kg/day), C_i is the concentration of the metal in duck eggs (μg/g), IR is the dietary intake (kg/d), ED is the exposure duration (a), EF is the exposure frequency (d/a), AT is the average exposure time (d), and BW is the body weight (kg). IR is 0.15 kg/d and 0.10 kg/d for adults and children, respectively; ED is 30 years and 10 years; EF is 365 d/a for both; AT is 10,950 d and 3,650 d; and BW is 70 kg and 16 kg, respectively (29)(30)(31)(32)(33).
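As a quick numerical illustration, the intake calculation can be written as a small function; this is a sketch based on the standard USEPA chronic-daily-intake expression implied by the definitions above, and the function and variable names are our own rather than anything used in the study.

```python
def estimated_daily_intake(conc_ug_g, intake_kg_d, exposure_freq_d_a,
                           exposure_dur_a, avg_time_d, body_weight_kg):
    """EDI in mg/kg/day; a concentration in ug/g is numerically equal to mg/kg."""
    return (conc_ug_g * intake_kg_d * exposure_freq_d_a * exposure_dur_a) / (
        avg_time_d * body_weight_kg
    )

# Adult parameters from the text: IR = 0.15 kg/d, EF = 365 d/a, ED = 30 a, AT = 10950 d, BW = 70 kg.
# Using the mean Zn concentration in mining-area duck eggs (63.06 ug/g) quoted above.
edi_zn_adult = estimated_daily_intake(63.06, 0.15, 365, 30, 10950, 70)
print(f"EDI(Zn, adult) ~ {edi_zn_adult:.3f} mg/kg/day")
```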
The noncarcinogenic risk of consuming contaminated duck eggs is assessed by the health risk quotient (HQ) and the health risk index (HI, the sum of the HQ values of the different metals, used to quantify the noncarcinogenic risk posed by multiple heavy metals). They are calculated as HQ = EDI / RfD and HI = ΣHQ, where HQ is the single-factor noncarcinogenic risk index, EDI is the chronic daily intake of a heavy metal (mg/kg/day), RfD is the reference dose of the heavy metal (0.003, 0.300, 0.600, 0.200, and 0.0035 mg/kg/day for Cr, Zn, Sr, Ba, and Pb, respectively), and HI is the total noncarcinogenic risk index for the five elements. HQ or HI > 1 indicates a potential noncarcinogenic risk, while HQ or HI < 1 indicates no potential risk (34-36).
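A short sketch of the noncarcinogenic indices follows, using the reference doses quoted above; `edi_by_metal` is assumed to hold intakes computed as in the previous sketch.

```python
RFD = {"Cr": 0.003, "Zn": 0.300, "Sr": 0.600, "Ba": 0.200, "Pb": 0.0035}  # mg/kg/day

def hazard_quotient(edi_mg_kg_day: float, metal: str) -> float:
    """HQ = EDI / RfD for a single metal."""
    return edi_mg_kg_day / RFD[metal]

def hazard_index(edi_by_metal: dict) -> float:
    """HI is the sum of HQ over the five metals; HI > 1 flags a potential noncarcinogenic risk."""
    return sum(hazard_quotient(edi, metal) for metal, edi in edi_by_metal.items())
```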
The total carcinogenic risk (TCR) is the sum of the carcinogenic risk (CR) values of the different metals. CR and TCR are calculated as CR = EDI × SF and TCR = ΣCR, where CR is the carcinogenic risk index of a heavy metal, EDI is the chronic daily intake of the heavy metal (mg/kg/day), SF is the slope factor of the carcinogenic heavy metal (0.005 and 0.0085 for Cr and Pb, respectively), and TCR is the total carcinogenic risk index. When CR or TCR ≤ 1 × 10^−6, the carcinogenic risk is considered negligible; when CR or TCR < 1 × 10^−4, the risk is low and considered acceptable; and CR or TCR ≥ 1 × 10^−4 indicates a potential carcinogenic risk (37,38).
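The carcinogenic counterpart can be sketched in the same way, restricted to the two metals with slope factors quoted in the text.

```python
SLOPE_FACTOR = {"Cr": 0.005, "Pb": 0.0085}  # slope factors quoted above (mg/kg/day)^-1

def carcinogenic_risk(edi_mg_kg_day: float, metal: str) -> float:
    """CR = EDI x SF for a single carcinogenic metal."""
    return edi_mg_kg_day * SLOPE_FACTOR[metal]

def total_carcinogenic_risk(edi_by_metal: dict) -> float:
    """TCR sums CR over Cr and Pb; TCR >= 1e-4 indicates a potential carcinogenic risk."""
    return sum(carcinogenic_risk(edi, metal)
               for metal, edi in edi_by_metal.items() if metal in SLOPE_FACTOR)
```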
Data analysis
ArcGIS was used to plot the distribution of sampling points, and SPSS 26.0 was used to analyze the data. In this study, the Wilcoxon rank sum test was applied uniformly to analyze significant differences in metal concentrations and pollution indices. The mean, median, minimum, and maximum values of the metal concentrations and pollution indices are reported.
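As an aside, the same two-sample comparison can be reproduced outside SPSS; a minimal sketch with SciPy, using made-up concentration values rather than the study data, is:

```python
from scipy.stats import ranksums

# Hypothetical Zn concentrations (ug/g) in duck eggs from the mining and background areas.
mining = [58.2, 63.5, 70.1, 61.8, 66.0]
background = [22.4, 25.1, 23.9, 26.7, 24.8]

stat, p_value = ranksums(mining, background)
print(f"Wilcoxon rank-sum statistic = {stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```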
Metal pollutions
The mean Cr, Zn, Sr, Ba, and Pb concentrations of duck eggs from the Hg mining area are 0.38, 63.06, 4.86, 10.08, and 0.05 μg/g, respectively, whereas those from the background area are 0.21, 24.65, 1.43, 1.05, and 0.01 μg/g, respectively.The concentrations of Cr, Zn, Sr, and Pb in duck eggs from the Hg mining area are significantly higher than those from the background area (p < 0.05; Figure 2A).Specifically, the mean concentrations of Cr, Zn, Sr, Ba, and Pb in egg yolk are 0.41, 79.90, 6.44, 17.98, and 0.08 μg/g, respectively, while the mean concentrations of Cr, Zn, Sr, Ba, and Pb in egg white are 0.35, 47.73, 7.62, 2.18, and 0.02 μg/g, respectively (Figure 2B; Table 2).In the Hg mining area, the mean concentrations of Sr, Ba, and Pb in the yolk are much higher than those in egg white (p < 0.05).In the background area, the mean concentrations of Cr, Zn, Sr, Ba, and Pb in egg yolk are 0.24, 48.94, 1.51, 2.01, and 0.01 μg/g, respectively.The mean concentrations of Cr, Zn, Sr, Ba, and Pb in egg white are 0.36, 3.75, 1.35, 0.10, and 0.01 μg/g, respectively.The mean concentration of Zn in egg yolk is also much higher than that in the duck egg whites from the background area (p < 0.05) (Figure 2C; Table 2).Overall, the concentrations of Cr, Zn, Sr, Ba, and Pb in duck eggs in the Hg mining area are higher than those in the background area, and the metal concentrations in yolks are higher than those in egg whites.
The single factor pollution index of heavy metals in duck eggs
The single factor pollution index is an indicator used to evaluate a single factor in heavy metal pollution, and it has already been used in evaluations of heavy metal pollution in various environmental media and materials, including water, soil, and crops. Given the lack of standards for the five metals in this study, we take Anshun as the background area and assess the heavy metal pollution in duck eggs from the mercury mining area based on the related calculation (Figure 3). The single factor pollution index in egg yolk declines in the following order: Pb (9.83) > Ba (7.87) > Sr (4.46) > Cr (2.07) > Zn (1.26), and in egg white it is characterized as Zn (58.12) > Pb (1.31) > Ba (1.01) > Cr (0.90) > Sr (0.83) (Table 3). Except for the Cr …

FIGURE 3 The pollution index of duck egg whites and yolks in Wuchuan (Hg mining area).

Carcinogenic and noncarcinogenic health risk estimates are widely recognized as important parameters for human health risk assessment. Table 4 shows the heavy metal intake and noncarcinogenic risk in duck eggs from the Hg mining area. The mean daily intake (EDI) of Cr, Zn, Sr, and Pb in egg yolk for adults, and of Cr, Zn, and Pb in egg white, are less than the reference exposure dose (RfD), while the EDI of Ba in egg yolk and of Sr, Ba, and Pb in egg white are greater than the RfD. For children, the EDI is less than the RfD for Cr and Pb in egg yolk and for Cr, Zn, Sr, Ba, and Pb in egg white, while the EDI of Zn, Sr, and Ba in egg yolk and of Pb in egg white are greater than the RfD. Furthermore, the total health risk index of egg yolk consumption is >1 for adults, while the indices of both egg yolk and egg white are >1 for children, higher than those for adults.
Carcinogenic risk assessment of heavy metals in duck eggs
Due to the lack of carcinogenic slope factors for Zn, Sr, and Ba, the carcinogenic risk assessment is only conducted for Cr and Pb.When the carcinogenic risk is greater than 1.00 × 10 −6 , it indicates a certain carcinogenic risk (39,40).In this study, we observed that the carcinogenic risk of Cr and Pb in both yolk and white of duck eggs is >1.00 × 10 −6 .Cr has a greater carcinogenic risk than Pb (Table 5).It is noteworthy that the total carcinogenic risk of duck egg intake in children is greater than adults, and that of egg yolk is greater than egg white for both adults and children.
Analysis of heavy metal pollution
In this study, preliminary results indicate that duck eggs in Hg mining areas are contaminated with Cr, Zn, Sr, Ba, and Pb.The higher metal concentration of duck eggs in the Hg mining area than in the background area is related to the high metal levels in the mining area environment.The tailings left after the cessation of mining impact the local soil and water, and the soil in the mining area is contaminated to varying degrees with Cd, Pb, Zn, Cr and As (7,41,42).Cr and As exceed the standard in the water of the mining area (8,43).Heavy metal pollution causes elevated concentrations of heavy metals in crops (44).As and Ni levels in vegetables are higher than normal values, and the estimated mean daily intakes of As and Pb in vegetables are above the permissible limits (45).In addition, corn kernels of Zn, Pb, Cd, Cr, and Ni exceeded the limits of China's food hygiene standards (30).Thus, ducks live in the mining area with prolonged metal exposure inducing contaminated duck eggs.Particularly, differences in metal concentrations in duck egg yolks and whites are observed.Differences in the metal levels in duck egg yolks and whites might be related to their formation mechanisms.Once the female ducks absorb higher heavy metal levels they subsequently transfer to the embryo.Duck eggs are formed in the reproductive system of the duck and minerals are deposited into the eggs by two pathways, including ovary to yolk and oviduct to egg white (46,47).The yolk precursor molecule of egg yolk protein could transfer minerals to the yolk, and the yolk component is synthesized in the liver, which is the main organ enrich heavy metals in the body (39,40,48).Therefore, most heavy metal levels are higher in egg yolks than in egg whites.We also measured Cr, Zn, Sr, Ba, and Pb concentrations in chicken eggs from the Wuchuan Hg mine, and the concentrations of Cr, Zn, Sr, Ba, and Pb in chicken eggs from the mine are not statistically different from those of the background area (p > 0.05; Figure 4A).Meanwhile, Cr, Zn, Sr, Ba, and Pb concentrations in duck eggs from the mining area are slightly higher than those in local chicken eggs (Figure 4B).Ducks belong to the waterfowl category of poultry, which eat not only crops, grasses earthworms but also fish and shrimps (49).Therefore, ducks are exposed to heavy metals from multiple sources with a possible higher heavy metal level compared to chickens and once ingested, heavy metals are enriched in the embryo then transferred to duck eggs, which may explain the higher heavy metal levels in duck eggs compared to chicken eggs (41,49).Concentrations of Cr, Zn, Sr, Ba, and Pb in chicken eggs from the mine region do not differ from those in the background, indicating that chicken eggs are less contaminated than duck eggs in Hg mining region.However, the concentrations of Cr, Zn, Sr, Ba, and Pb in duck eggs in Hg mining area are higher than those in background area, suggesting that we should be more concerned about the potential risk of heavy metal contamination in duck eggs.
In addition, the sampling was performed in July, the rainy season in Guizhou.The heavy metals exposure to ducks are different between the dry and rainy seasons (50).Heavy metals accumulate in the environment during the dry season as a result of evaporation.Conversely, during the rainy season, the heavy metal exposure could be decreased due to dilution effects (51).Thus, high heavy metal levels in duck eggs during the rainy season in this study suggest even higher levels during the dry season.Notably, there is an ecological risk of heavy metal exposure in duck eggs.The duck eggs have been heavily contaminated with Ba, Pb, and Zn, with extremely strong potential ecological risk.Consistent with previous results on single factor pollution indices for crops in mining areas, the rice collected in the vicinity of the mining area is more severely contaminated by As, Sb, Cd, Cu, and Zn (52).Potatoes are heavily contaminated by heavy metals while cabbage and radish are lightly polluted (53,54).The single factor of Pb, Cd, Cr, and Ni in maize seeds are all greater than 1, suggesting that all heavy metal contamination in the edible part of the crops has reached heavy contamination levels (52,54).This result illuminates that mining area duck eggs, like crops, are ecologically risky.
Health risk of heavy metals to local residents
The noncarcinogenic risk results suggest that non-age-specific, the total health risk index of egg yolk intake is >1.The contribution of the five heavy metals to the noncarcinogenic risk is illustrated in Figure 5A, Cr and Zn are the main noncarcinogenic risk metals for the inhabitants in the area, more significantly in children.The results indicate that noncarcinogenic health risks are associated with the consumption of duck eggs by both adults and children, and higher in children than in adults.Therefore, the noncarcinogenic risk of consuming duck eggs from Hg mining areas should not be ignored.Our results are consistent with crops in Hg mining areas, which indicate a health risk (55).Cr and Ni health risks are highest in maize from mining areas, and children are most sensitive to maize heavy metal exposure (55).The higher health risk of duck egg consumption in children than in adults suggests that children are more sensitive to environmental pollutions.Liver is the main organ that enriches and metabolism heavy metals (43,53).However, children's metabolic organs, such as the liver and kidney, are not yet well developed and have weaker detoxification functions for toxic and harmful substances (56).Whereas the health risks of egg yolks are greater than egg whites may be since the yolk protein precursor molecule in egg yolk can transfer minerals to the yolk, and the yolk component is synthesized in the liver (57).Therefore, concerns should be raised about the potential noncarcinogenic health risks to children from the consumption of mining area duck eggs, especially the yolks.We find that the TCRs of Cr and Pb in duck eggs are greater than 1.00 × 10 −6 when consumed by adults and children, indicating both have a certain carcinogenic risk from the intake of egg yolk and white.CR combined with TCR shows that Cr is the main contributing factor, indicating that Cr is the most significant carcinogenic risk metal in Hg mining area (Figure 5B).Long-term consumption of brown rice poses potential noncarcinogenic and carcinogenic health risks to the local population (43,58).The same goes for long-term consumption of duck eggs.Although the Hg mining area is dominated by Hg pollution, the carcinogenic risk of Cr in local duck eggs should not be ignored.To sum up, duck eggs from Hg mining areas are contaminated with heavy metals and may pose a potential health risk to local residents who consume them.
Previous studies have reported high levels of Hg in the hair, blood, and urine of people living near the Wuchuan Hg mines (59,60).It is suggested to be related to Hg pollution in this region.Except for Hg, high levels of other heavy metals have been observed in the mining areas, such as soil and vegetables (41,44).According to our results in this study, high heavy metal concentrations in duck eggs indicate high levels of heavy metals in the environment and crops and further illustrate that local residents could possibly be exposed to high levels of heavy metals via poultry products (e.g., eggs) and environmental materials.Thus, the risk of heavy metal pollution posing to the residents is non-negligible.
Analysis of heavy metal concentrations in free-range and caged eggs
Eggs as the paramount source of protein consumption for humans, which could be roughly categorized into free-range eggs and caged eggs (61).Investigations have revealed a gradual increase in the overall consumption of eggs, with a growing preference for free-range eggs among consumers (62).During the same period, sales of free-range eggs in the Australian egg industry surge by 21.7%, while caged eggs show a decline of 12.5% (61).Similar preferences for free-range eggs have also been observed among consumers in various countries, including Canada and China, who perceive them to possess higher nutritional value and safety (10, 63).Based on comprehensive avian egg research (Figure 6), we categorized poultry eggs (duck egg and chicken egg) into caged eggs, background free-range eggs, and mining area free-range eggs.Interestingly, for duck eggs and chicken eggs, the concentrations of Cr, Zn, Sr, Ba, and Pb in background free-range eggs are found to be lower than those in caged eggs, which also elucidates why consumers favor free-range eggs.However, for free-range eggs from mining areas, the concentrations of Cr, Zn, Sr, Ba, and Pb are notably higher than those in caged eggs and background free-range eggs.In addition, consistent with previous studies, the heavy metal concentration in mining area of duck eggs is higher than chicken eggs (64,65).
Disparities in heavy metal concentration among distinct egg types could be attributed to poultry rearing practices.Caged poultry are commonly provided with formulated feed, restricting their environmental exposure (78).On the other hand, free-range poultry predominantly feed on substances present in their surroundings, including insects and grains (63).Free-range poultry in mining areas feed on substances from their surrounding environment.It is widely recognized that mining areas face severe heavy metal pollution, with long half-lives and prolonged presence in the environment.Through the food chain, free-range poultry in mining areas are exposed to environmental heavy metal contamination, accumulating in their eggs.Therefore, the ingestion of mining area freerange eggs can pose a potential threat to human health.When choosing free range eggs, consumers should identify the producing areas.
Conclusion
In this study, we measured the concentrations of five metals (Cr, Zn, Sr, Ba, and Pb) in duck eggs and chicken eggs from the Hg mining area and the background area, and found that duck eggs from the Hg mining area contained higher concentrations than those from the background area. Duck egg yolks contain higher concentrations than whites, which is related to the presence of yolk precursor proteins synthesized in the liver, the main organ enriching heavy metals in the body. There is no difference in these metal concentrations between chicken eggs from the Hg mining area and the background area, which indicates that duck eggs are more susceptible to heavy metal contamination than chicken eggs. Duck eggs are contaminated by heavy metals to varying degrees, especially by Ba, Pb, and Zn, which pose an extremely strong potential ecological risk. Comparing egg types from different areas, the concentrations in free-range duck and chicken eggs from the mining area are higher than those in caged and background free-range duck and chicken eggs. Therefore, when choosing free-range duck eggs as daily food, attention should be paid to identifying the producing regions, with an awareness of the health risks of duck eggs from heavy metal contaminated areas, such as mining regions. Nevertheless, this is a preliminary study with a limited number of duck egg samples. Further studies with larger numbers of eggs and with environmental (soil, water) and crop samples need to be performed to gain a better understanding of the sources of heavy metal pollution in duck eggs from Hg mining areas.
FIGURE 2
FIGURE 2 Heavy metal concentrations in duck egg yolk and egg white. (A) Cr, Zn, Sr, Ba, Pb concentrations in duck eggs from Wuchuan (Hg mining area) and Anshun (background area); (B) Cr, Zn, Sr, Ba, Pb concentrations in duck eggs from Wuchuan; (C) Cr, Zn, Sr, Ba, Pb concentrations in duck eggs from Anshun. "*" represents a significant difference (p < 0.05).
FIGURE 5
FIGURE 5 Total noncarcinogenic risk and total carcinogenic risk of heavy metals in adults and children. (A) Total noncarcinogenic risk of heavy metals in adults and children; (B) Total carcinogenic risk of heavy metals in adults and children.
TABLE 1
Duck egg sampling location, time and number.
FIGURE 1 Distribution of duck egg sampling sites in the study area.
TABLE 2
The concentrations of Cr, Zn, Sr, Ba, and Pb in duck egg yolk and egg white at the Hg mining area and the background area.
TABLE 3
The single factor pollution index of Cr, Zn, Sr, Ba, and Pb in duck egg yolk and egg white at the Hg mining area.
Pi is the single-factor pollution index, which is one of the indicators of ecological risk assessment.
TABLE 4
Noncarcinogenic risk assessment of heavy metals in duck eggs in the study area.
TABLE 5
Carcinogenic risk assessment of heavy metals in duck eggs from Hg mining areas.
Heavy metal concentrations in duck eggs from mercury mining areas versus duck eggs from background areas (A), and metal concentrations in eggs from mercury mining areas and duck eggs (B). | 2024-03-03T17:45:59.106Z | 2024-02-28T00:00:00.000 | {
"year": 2024,
"sha1": "a9a560c98531c3b52e845d91960937f189577ff2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2024.1352043/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "30f558b0904d5e1237fd0a32e32b05d3b68f58c9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119210448 | pes2o/s2orc | v3-fos-license | Superconductivity in electron-doped arsenene
Based on the first-principles density functional theory electronic structure calculation, we investigate the possible phonon-mediated superconductivity in arsenene, a two-dimensional buckled arsenic atomic sheet, under electron doping. We find that the strong superconducting pairing interaction results mainly from the $p_z$-like electrons of arsenic atoms and the $A_1$ phonon mode around the $K$ point, and the superconducting transition temperature can be as high as 30.8 K in the arsenene with 0.2 doped electrons per unit cell and 12\% applied biaxial tensile strain. This transition temperature is about ten times higher than that in the bulk arsenic under high pressure. It is also the highest transition temperature that is predicted for electron-doped two-dimensional elemental superconductors, including graphene, silicene, phosphorene, and borophene.
I. INTRODUCTION
Recently there has been a surge of interest in the investigation of two-dimensional (2D) superconductors, partially due to their potential application in nanosuperconducting devices 1,2 . A pure 2D electronic system can be obtained by growing a single layer of graphite or other materials on a proper substrate. The interplay between a 2D superconductor and the substrate has proven to be an efficient way to enhance superconductivity. For example, the superconducting transition temperature, T c , of a single layer FeSe film deposited on the (001) surface of SrTiO 3 is greatly enhanced 3,4 in comparison with the bulk FeSe superconductor 5 , resulting from the coupling between electrons and the phonon modes in the substrate 6 . Superconductivity in graphene, which is the first 2D compound synthesized in the laboratory, was extensively explored. Through a plasmon-mediated mechanism, Uchoa et al. discussed properties of superconducting states in several metal-coated graphenes 7 . First-principles density functional theory (DFT) calculations predicted that monolayer LiC 6 and CaC 6 are phonon-mediated superconductors with T c of 8.1 K and 1.4 K, respectively 8 . Later, a superconducting phase was observed experimentally below 7.4 K in a Li-intercalated graphite thin film 9 and 6 K in a Ca-decorated graphene 10 . Calculations of electron-phonon interactions suggested that T c of graphene can reach 23.8 K or even 31.6 K upon heavy electron or hole doping under 16.5% biaxial tensile strain (BTS) 11 , but experimental evidence for such high-T c superconductivity in doped graphenes is still not available.
In recent years, several new 2D materials, like silicene, phosphorene, and borophene, were synthesized experimentally. Similar to graphene, phosphorene is obtained by mechanically exfoliating layered black phosphorus 12 . Silicene 13,14 and borophene 15,16 were epitaxially grown on Ag(111) surfaces. It was reported that there is a charge transfer from the Ag(111) substrate to silicene 17 or borophene 16 . Furthermore, the substrate imposes strain to these single-layer materials, due to the lattice mismatch 16,[18][19][20] . These effects of the substrate should be taken into account in the investigation of superconducting properties in these materials.
The DFT calculation also showed that the superconducting transition temperature can reach 16.4 K in silicene upon electron doping of n 2D = 3.51×10 14 cm −2 and 5% BTS 21 . T c of phosphorene was predicted to be 12.2 K under the doping of 2.6×10 14 cm −2 and 8% uniaxial tensile strain along the armchair direction 22 . Applying 4% BTS to phosphorene can further increase T c to 16 K 23 .
Unlike silicene and phosphorene, borophene is intrinsically a metal 16 . A free-standing pristine borophene is predicted to superconduct around 20 K 24,25 , especially for the χ 3 -type borophene whose T c can be 24.7 K 25 . This is the highest superconducting transition temperature among predicted or observed 2D elemental superconductors without doping. Unfortunately, the charge transfer and tensile strain imposed by the Ag substrate suppress the superconducting order dramatically 25,26 .
Recently, a buckled single-layer honeycomb arsenic, i.e. arsenene, was proposed 27,28 . Unlike the semi-metallic bulk gray arsenic, arsenene is a semiconductor with an indirect energy gap of 2.49 eV. More importantly, arsenene undergoes an intriguing indirect-to-direct gap transition by applying a small BTS. This makes arsenene a promising candidate for transistors with high on/off ratios, optoelectronic devices working under blue or UV light, and 2D-crystal-based mechanical sensors 27 . Furthermore, it was predicted that arsenene has a higher electron mobility than that of MoS 2 29 , and can become a unique topological insulator under suitable BTS without considering any spin-orbit coupling 30 .
In this work, we employ first-principles DFT and the Wannier interpolation technique to accurately determine the electron-phonon coupling (EPC) properties of electron-doped arsenenes. The phonon-mediated superconducting T c is evaluated based on the McMillan-Allen-Dynes formula. Without applying BTS, 0.1 electrons/cell (hereafter e/cell) doping can already turn arsenene into a superconductor with a superconducting temperature above the liquid-helium temperature. T c increases to 10 K at 0.3 e/cell doping (i.e. n 2D =2.76×10 14 cm −2 ). Moreover, we find that the A 1 phonon mode at the K point contributes most to the EPC constant. From the electronic point of view, the p z -like electronic orbitals of arsenic atoms couple strongly with phonons. By applying a BTS, the A 1 phonon mode is softened and the EPC matrix elements are enhanced. This enlarges both the EPC λ and the superconducting transition temperature. Under 0.2 e/cell doping and 12% BTS, T c of arsenene is predicted to be about 30.8 K. To the best of our knowledge, this is the highest T c predicted for 2D elemental superconductors upon electron doping.
II. COMPUTATIONAL APPROACH
In the DFT-based electronic structure calculation, the plane wave basis method is adopted 31 . We calculate the Bloch states and the phonon perturbation potentials 33 using the local density approximation and norm-conserving pseudopotentials 32 . The kinetic energy cutoff and the charge density cut-off are taken to be 80 Ry and 320 Ry, respectively. A slab model is used to simulate arsenene, in which a 12 Å vacuum is added to avoid the interaction between neighboring arsenic sheets along the c-axis. Electron doping is simulated by adding electrons into the system with a compensating background of uniform positive charges. For each doping concentration, the atomic positions are relaxed but with fixed in-plane lattice constants, which are obtained by optimizing the lattice structure of arsenene without doping.
The charge density is calculated on an unshifted mesh of 60×60×1 points, with a Methfessel-Paxton smearing 34 of 0.02 Ry. The dynamical matrix and perturbation potential are calculated on a Γ-centered 12×12×1 mesh, within the framework of density-functional perturbation theory 35 . Maximally localized Wannier functions 36,37 are constructed on a 12×12×1 grid of the Brillouin zone, using ten random Gaussian functions as the initial guess. Fine electron (600×600×1) and phonon (200×200×1) grids are used to interpolate the EPC constant using the Wannier90 and EPW codes 38,39 . Dirac δ-functions for electrons and phonons are replaced by smearing functions with widths of 15 and 0.2 meV, respectively.
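For readers who want to reproduce the self-consistent step of this workflow, a minimal sketch using the ASE interface to Quantum ESPRESSO is given below. The pseudopotential file name and the buckling height are placeholders (not values from this paper), and the sign convention of `tot_charge` for adding electrons should be checked against the pw.x documentation; the cutoffs, k-mesh, and smearing follow the parameters quoted above.

```python
import numpy as np
from ase import Atoms
from ase.calculators.espresso import Espresso

a = 3.5411           # optimized in-plane lattice constant (angstrom)
buckling = 1.40      # approximate buckling height of arsenene (placeholder value)
c = 12.0 + buckling  # roughly 12 angstrom of vacuum between periodic images

arsenene = Atoms(
    "As2",
    cell=[[a, 0, 0], [-a / 2, a * np.sqrt(3) / 2, 0], [0, 0, c]],
    scaled_positions=[(0.0, 0.0, 0.5), (1 / 3, 2 / 3, 0.5 + buckling / c)],
    pbc=True,
)

calc = Espresso(
    pseudopotentials={"As": "As.upf"},  # placeholder norm-conserving LDA pseudopotential
    input_data={
        "system": {
            "ecutwfc": 80, "ecutrho": 320,
            "occupations": "smearing", "smearing": "mp", "degauss": 0.02,
            "tot_charge": -0.2,  # intended to add 0.2 electrons per cell
        },
    },
    kpts=(60, 60, 1),
)
arsenene.calc = calc
energy = arsenene.get_potential_energy()
```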
The EPC constant λ is determined by the summation of the momentum-dependent coupling constant λ qν over the first Brillouin zone, or equivalently by the integration of the Eliashberg spectral function α 2 F (ω) in frequency space 40,41 , in which N q represents the total number of q points in the fine q-mesh. The coupling constant λ qν for mode ν at wavevector q is defined in Refs. 40 and 41. Here ω qν is the phonon frequency and g ij k,qν is the probability amplitude for scattering an electron with a transfer of crystal momentum q. (i, j) and ν denote the indices of energy bands and phonon modes, respectively. ε i k and ε j k+q are the eigenvalues of the Kohn-Sham orbitals with respect to the Fermi energy. N (0) is the density of states (DOS) of electrons at the Fermi level. N k is the total number of k points in the fine Brillouin-zone mesh. The Eliashberg spectral function is determined accordingly 40,41 . We calculate the superconducting transition temperature using the McMillan-Allen-Dynes formula 41 , where f 1 and f 2 are the correction factors, which equal 1 when λ < 1.3, and the effective screened Coulomb repulsion constant µ * is set to 0.1.
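For reference, the standard Eliashberg and Allen-Dynes expressions consistent with the quantities defined above are written out below; this is a textbook-style reconstruction, and the exact prefactor and spin conventions should be taken from Refs. 40 and 41.

```latex
\begin{gather*}
\lambda = \frac{1}{N_q}\sum_{q\nu}\lambda_{q\nu}
        = 2\int_0^{\infty}\frac{\alpha^2F(\omega)}{\omega}\,d\omega,
\qquad
\lambda_{q\nu} = \frac{1}{N(0)\,\omega_{q\nu}\,N_k}
  \sum_{ij,k}\bigl|g^{ij}_{k,q\nu}\bigr|^{2}\,
  \delta\!\bigl(\epsilon^{i}_{k}\bigr)\,\delta\!\bigl(\epsilon^{j}_{k+q}\bigr),\\[4pt]
\alpha^{2}F(\omega) = \frac{1}{2N_q}\sum_{q\nu}\lambda_{q\nu}\,\omega_{q\nu}\,
  \delta\!\bigl(\omega-\omega_{q\nu}\bigr),
\qquad
\omega_{\log} = \exp\!\Bigl[\frac{2}{\lambda}\int_0^{\infty}
  \frac{d\omega}{\omega}\,\alpha^{2}F(\omega)\ln\omega\Bigr],\\[4pt]
T_c = \frac{f_1 f_2\,\omega_{\log}}{1.2}\,
  \exp\!\Bigl[\frac{-1.04\,(1+\lambda)}{\lambda-\mu^{*}(1+0.62\,\lambda)}\Bigr].
\end{gather*}
```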
A. EPC in electron doped arsenene
For a free standing arsenene without doping, we find that the optimized lattice constant is 3.5411Å, in agreement with the result, 3.5408Å, obtained by Zhang et al. 27 . Our calculation shows that arsenene is semiconducting with an indirect band gap of 1.42 eV. The valence band maximum is at the Γ point, while the conduction band minimum is located on the line between Γ and M . Although the energy gap is underestimated in comparison with the HSE06-level result 27 , it does not affect our EPC results since the shapes of conduction band given by LDA and HSE06 are concordant. Figure 1 shows the band structures with the corresponding Fermi surfaces for four electron-doped arsenenes. At the doping of 0.1 e/cell, there are six elliptical electron pockets surrounding Γ. With the increase of the doping level, the area enclosed by each pocket expands. Moreover, six small electron arcs emerge at the Fermi level around the six zone corners when the doping level reaches 0.3 e/cell or above [ Fig. 1(g)]. At higher doping, an electron pocket centred at the Γ point appears at the Fermi level [ Fig. 1(h)]. The influence of electron doping on fully occupied energy bands is negligible. Figure 2 shows the phonon spectra of electron-doped arsenenes. At the doping of 0.1 e/cell, there is no imaginary phonon frequency, indicating that this system is dynamically stable. Moreover, there is a gap between the acoustic and optical phonon modes. With the increase of doping, the optical phonon modes are gradually softened. At higher doping, imaginary frequencies are found in the lowest acoustic band near the Γ point [ Fig. 2(b)-2(d)]. This kind of imaginary frequencies were also found in the phonon spectra of borophene 25 , arsenene 28,30 , germanene 42 , and other binary monolayer honeycomb materials 43 . It is not a sign of structure instability. Instead, it results from the numerical instability in the accurate calculation of rapidly decreasing interatomic forces 43 . The largest contribution to λ comes from the A 1 mode in the lowest optical phonon mode around the K point. A real-space picture of the eigen-vector of this mode is shown in Fig. 2(e) and Fig. 2(f).
The Eliashberg spectral function α 2 F (ω) shows two main peaks [Fig. 3]. The lower-frequency peak results from the A 1 phonon mode around the K point. In comparison with the peak at the 0.1 e/cell doping, the peak position shifts towards lower frequency at higher doping. The higher-frequency peak is also mainly the contribution of the lowest optical phonon excitations, especially those between Γ and M . Even though the EPC λ qν of these phonons along Γ-M is smaller than that of the A 1 mode at the K point, the density of states is higher.
Based on the above results, we calculate the superconducting transition temperature T c using the McMillan-Allen-Dynes formula 41 . The results, together with other key parameters, are presented in Table I. Without BTS, T c shows a maximum at 0.3 e/cell. λ at 0.2 e/cell is larger than that at 0.3 e/cell due to the considerable contribution from the acoustic phonon modes at the former doping [ Fig. 2(b)], but the corresponding T c is lower than in the latter case.
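To illustrate this step, a small function evaluating the McMillan-Allen-Dynes expression with the strong-coupling and shape correction factors f 1 and f 2 in their usual parametrization is sketched below; the input values are placeholders, not the entries of Table I.

```python
import math

def allen_dynes_tc(lam, omega_log_K, omega2_K, mu_star=0.1):
    """McMillan-Allen-Dynes Tc in kelvin; omega_log_K and omega2_K are the logarithmic
    and second-moment average phonon frequencies expressed in kelvin."""
    lambda1 = 2.46 * (1.0 + 3.8 * mu_star)
    lambda2 = 1.82 * (1.0 + 6.3 * mu_star) * (omega2_K / omega_log_K)
    f1 = (1.0 + (lam / lambda1) ** 1.5) ** (1.0 / 3.0)
    f2 = 1.0 + ((omega2_K / omega_log_K - 1.0) * lam ** 2) / (lam ** 2 + lambda2 ** 2)
    exponent = -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    return f1 * f2 * omega_log_K / 1.2 * math.exp(exponent)

# Placeholder inputs: lambda = 1.5, omega_log = 120 K, omega_2 = 150 K, mu* = 0.1.
print(f"Tc ~ {allen_dynes_tc(1.5, 120.0, 150.0):.1f} K")
```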
The EPC constant λ is determined by the DOS, the phonon frequency ω qν , the EPC matrix element |g ij k,qν |, and other parameters. In order to determine which effect has the largest contribution to λ, we calculate the following two quantities: ξ(q), a modified Fermi surface nesting function in which N(0) is also included, and γ(q), the nesting function weighted by the EPC matrix element |g ij k,qν |. Figure 4 shows ξ(q), γ(q), and λ(q) (calculated through λ(q) = Σ_ν λ_qν ) for the four electron-doped arsenenes. At each doping level, the similarity between ξ(q) and λ(q) indicates that the strong EPC of the A 1 phonon mode mainly comes from the peak in the nesting function ξ(q) around the K point. The relatively lower vibrational frequencies of strongly coupled phonon modes at the 0.2 e/cell doping lead to the sharp peaks between Γ and M and the largest λ. λ(q) at 0.1 e/cell is comparable to that at 0.2 e/cell [ Fig. 4(c)]. However, λ is a summation over the whole Brillouin zone. For the cases of 0.2, 0.3, and 0.4 e/cell doping, there is a certain number of q points which have a substantial contribution to λ(q) [Fig. 4(d) and Fig. 4(e)] but are not located along the Γ-M-K-Γ high-symmetry line.
B. EPC under BTS
The BTS is measured by ε = (a − a 0 )/a 0 × 100%, where a 0 and a are the lattice constants without and with strain, respectively. Without doping, arsenene remains in the semiconducting phase under BTS up to 12% 27 . In order to study the BTS effect on the superconducting properties, we calculate the EPC of biaxially strained arsenenes under 0.2 e/cell doping. At 4% BTS, as shown in Fig. 5(b), the conduction band minimum moves to the Γ point. This leads to a direct band gap of 1.67 eV, resembling the indirect-to-direct band-gap transition in the undoped arsenene under 4% BTS 27 . With the increase of BTS, the conduction band along the Γ-M line becomes less dispersive [ Fig. 5(c-d)], which enlarges the DOS at the Fermi level [see Table I]. Furthermore, the applied BTS lowers the conduction-band energy at the Γ point, which enables an electron pocket around the Γ point to emerge at the Fermi surface and reduces the volume of the six elliptical Fermi surface sheets. Figure 6 shows the phonon spectra of strained arsenenes. By applying the BTS, the phonon frequencies are softened, especially for the A 1 phonon mode around the K point. Meanwhile, the EPC from the lowest acoustic phonon band between the Γ and M points becomes progressively stronger with the increase of BTS. Again, the phonon frequency in the lowest phonon band becomes imaginary around the Γ point. Further calculation, however, suggests that the 0.2 e/cell doped arsenene remains dynamically stable under 14% BTS, slightly smaller than the critical strain, 18.4%, as obtained in the undoped arsenene 30 .
When the applied BTS is less than 4%, the two-peak structures of α 2 F (ω) are preserved [ Fig. 7(a) and Fig. 7(b)]. With the increase of BTS, α 2 F (ω) around 5 meV is enhanced. This enhancement arises from the softening of the A 1 mode and the enhancement of the EPC in the lowest acoustic phonon band between Γ and M. We also calculate ξ(q), γ(q), and λ(q) for arsenene at different BTS [ Fig. 8]. Similar to the case of the BTS-free arsenene, ξ(q) is not the dominant factor that determines λ. In contrast, an obvious separation among the four curves in γ(q) is observed. This separation results from the matrix element |g ij k,qν | around the Fermi level [ Fig. 8(b)]. It is further enhanced by the softening of strongly coupled phonon modes [ Fig. 8(c)], giving rise to a T c as high as 30.8 K at 8% BTS. To determine which electron band contributes most to the EPC, we calculate a band- and momentum-resolved coupling λ ki , where i is the band index of electrons. λ ki describes the scattering process of an electron from the i-th band to other bands by a phonon with momentum q and branch ν. It represents the contribution of electrons with momentum k at the i-th band to the EPC. For all the cases we have studied, we find that λ ki behaves similarly. Here we only show the results for the 0.4 e/cell doping without BTS [ Fig. 9(a)] and for the 0.2 e/cell doping under BTS [ Fig. 9(c)]. By comparing the contour picture of λ ki with the Fermi surface shape shown in Fig. 1(h) and Fig. 5(h), we find that the main contribution to the EPC comes from the elliptical Fermi surface sheets. In particular, the points near M contribute most to the EPC. The charge density of these electrons shows a p z -like character [ Fig. 9(b) and Fig. 9(d)], indicating that it is the p z orbital of the arsenic atoms that couples most strongly with phonons.
A maximal superconducting T c of 23.8 K has been predicted for graphene doped with 4.65×10^14 cm^−2 electrons under 16.5% BTS. Realizing such requirements in graphene, however, may be very difficult experimentally. The advantage in the case of arsenene is that high-T c superconductivity above 30 K may be obtained at a relatively smaller doping density (1.28×10^14 cm^−2) and BTS (12%). Recently, multilayer arsenenes were successfully grown on InAs using a plasma-assisted process 44 . Bulk gray arsenic, which is the most stable phase among all arsenic allotropes 45 , could be used as a precursor to prepare arsenene 27,28 . By growing arsenene on a piezoelectric substrate, one can control BTS by applying a bias voltage to elongate or shorten the lattice constants 19 . The electron doping can be achieved either by chemical doping or substitution, or by liquid or solid gating 46,47 . Thus it is feasible to verify our prediction experimentally.
IV. CONCLUSION
Based on the first-principles DFT electronic structure calculation, we predict that the semiconducting arsenene can become a phonon-mediated superconductor upon doping of electrons. The maximal superconducting transition temperature is found to be around 10 K in the doped arsenene. It can be further lifted to 30 K by applying a 12% BTS. The superconducting pairing results mainly from the A 1 phonon mode around the K point and the p z -like electrons of arsenic atoms. | 2018-01-02T04:12:44.000Z | 2018-01-02T00:00:00.000 | {
"year": 2018,
"sha1": "7d12e404b318a3b7fa609a4ceffd457a68239e04",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1801.00545",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7d12e404b318a3b7fa609a4ceffd457a68239e04",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
236552069 | pes2o/s2orc | v3-fos-license | ETHICAL DILEMMA OF CESAREAN SECTION ON MATERNAL REQUEST (CSMR)
Objective: To study the demographic characteristics of pregnant ladies and factors contributing towards the rise in cesarean section on maternal request, to aid obstetricians in decision making. Study Design: Cross-sectional analytical survey. Place and Duration: Gynecology Department of Pak Emirates Military Hospital, Rawalpindi, from Nov 2019 to Mar 2020. Methodology: One hundred and fifteen women of childbearing age requesting cesarean section were included in the study. Demographic details were noted. A study proforma was filled for determinants of primary and secondary tocophobia and factors that may be improved for vaginal delivery. Results: A total of 115 patients with mean age of 27.99 years were included. Amongst them, 88 (76.5%) were Punjabi with 92 (80%) living in rural areas. Primigravida were 11 (9.6%), 83 (72.2%) had previous lower segment cesarean section and 3 (2.6%) had vaginal delivery. For primary tocophobia, 22 (24.4%) experienced anxiety. Fear of labor pains was seen in 20 (19.2%) and lack of control in 27 (26%). For secondary tocophobia, 15 (37.5%) were fearful of prolonged labor and 5 (22.5%) of suboptimal birth outcome. In women with one previous cesarean section, 13 (14.8%) were influenced by negative birth experiences and 20 (22.7%) found timed cesarean section convenient. For vaginal delivery, pain relief was preferred by 19 (20.2%) and 31 (33%) wanted pain relief and an attendant. Conclusion: Better understanding of fears behind maternal request for cesarean section can lead to improved attitudes towards vaginal delivery. The negative perceptions of pregnant ladies should be addressed in antenatal visits.
INTRODUCTION
The rising cesarean section rate is a globally debated issue. There are serious concerns regarding a woman's obstetric future after cesarean delivery, as well as neonatal respiratory distress and prematurity. As per World Health Organization (WHO) recommendations, medical interventions must be kept to a minimum in maternal and child health care. The cesarean section rate should not rise above 10-15% 1 . According to WHO, cesarean section rates have risen from 2.7% in 1990-91 to 15.8% in 2012-13, with around 9700 mortalities due to maternal complications in 2015 in Pakistan 2,3 . A significant rise of up to 40% has been seen in rich, educated and urban-living women wishing for a reduced family size, avoidance of the pain and unpredictability of normal labor, and avoidance of damage to the pelvic floor. In Pakistan, medically non-indicated cesarean sections have risen in the last decade. Education provides higher autonomy to females to take their own decisions regarding childbirth. Cesarean section is a key predictor of the accessibility of health care services to women. About 11.5% of rural women had a cesarean section as compared to 26.5% of urban women. WHO released a new statement in 2015 stating that the rate of cesarean section should not exceed 10% and should not be less than 5%, as both extremes can have an adverse impact on maternal health and quality of life 4 .
Cesarean section on maternal request (CSMR) is defined as a planned cesarean section performed on maternal request in the absence of indications for cesarean section and contraindications to vaginal delivery 5 . The causes for increased cesarean sections are multifactorial. Traditionally it was considered inappropriate to perform cesarean section without a clinical indication. However, a rising trend has been observed in cesarean sections performed on demand or for non-evidence-based reasons. CSMR is not a well-recognized or well-investigated entity. It is affected by a complex labyrinth of health care providers, health systems, patients, culture, beliefs, fashion and social media. Most obstetricians have faced this request from pregnant women in their clinical practice. There is a lack of explicit data and surveys regarding the incidence and impact of the rising rate of CSMR. In the USA, the exact incidence is not known, but it is estimated that CSMR occurs in less than 3% of all deliveries. The ACOG Committee Opinion on CSMR recommends that women requesting CSMR should undergo thorough assessment in terms of risk factors, future pregnancy plans, and social and cultural context 6 . If no other indications are present, then CSMR should not be performed before 39 weeks of gestation 7 . The rise in the cesarean section rate in Pakistan is similar to other developing countries and is primarily due to cesarean sections performed for non-evidence-based reasons like professional convenience and maternal request 8 .
Tocophobia is defined as an intense fear of childbirth and is the leading psychological cause of CSMR. Primary tocophobia is a morbid fear of childbirth in a woman who has had no previous experience of pregnancy. Secondary tocophobia is experienced by women who had a previous traumatic birth experience leading to a phobia of childbirth. Women with tocophobia either avoid pregnancy or request cesarean section for childbirth 9 . Proponents argue that elective cesarean section cannot guarantee normality but it can avoid the expected morbidities related to vaginal birth. This has changed the trends in affluent strata of society. In London, 31% of female obstetricians with uncomplicated pregnancies chose cesarean section over normal delivery for themselves 10 . These women are not representative of the general population. Nowadays obstetricians are at a turning point because of the advancements that have made cesareans safe and the substantial morbidity of vaginal delivery, which cannot be negated.
There is a need to look into the rationale and experiences behind the maternal request for cesarean section to understand and overcome the factors leading to rise in CSMR. There is a strong ethical dilemma behind rising CSMR. At one end of the spectrum is the unpredictability of normal labor surrounding the maternal fears. While at the other end is the rise in cesarean sections performed for non-medical reasons. The obstetrician has to balance the two ends and formulate a safe plan for delivery and health of the mother. Respectful maternity care demands provision of physical and mental support during labor to ease the process of natural childbirth.
This study was conducted to gather information to find out the perceptions of women of childbearing age regarding tocophobias leading to rise in CSMR. The purpose of this study was to estimate the burden of the problem so that proper guidelines can be set for identification and timely intervention to reduce the rate of CSMR and encourage natural childbirth.
METHODOLOGY
A cross-sectional survey was carried out for a duration of three months from November 2019 to February 2020. It was conducted at the Obstetrics and Gynecology Department, Pak Emirates Military Hospital, Rawalpindi. The study was formally approved by the ethical research review committee (IERB certificate number A/28/EC/220/2020) and informed consent was taken from all the participants.
A non-probability consecutive sampling technique was used. A total of 115 women of childbearing age willing to participate were included in the study. Sample size was calculated using the OpenEpi calculator. The prevalence of tocophobia was found to be 7.5% in one of the studies 7 . Those women who had comorbidities affecting the mode of delivery, had completed their families or were not of childbearing age were excluded from the study.
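For reference, the standard single-proportion formula that tools such as the OpenEpi calculator implement can be sketched as below; the 5% absolute precision and 95% confidence level are assumptions made for illustration, since they are not stated in the text.

```python
from math import ceil
from scipy.stats import norm

def sample_size_single_proportion(p, d=0.05, confidence=0.95):
    """n = Z^2 * p * (1 - p) / d^2 for estimating a proportion p
    with absolute precision d at the given confidence level."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return ceil(z ** 2 * p * (1 - p) / d ** 2)

# Expected tocophobia prevalence of 7.5%; precision and confidence are assumed.
print(sample_size_single_proportion(0.075))  # about 107 with d = 0.05 at 95% confidence
```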
A study proforma was designed for the determinants of primary and secondary tocophobia. It also included suggestions for improving experience of normal labor like provision of pain relief and availability of a companion during labor. The demographic data of the women was noted including age, socio economic status, education, entitlement, ethnicity and past obstetric history.
The data was analyzed using SPSS-23. The counts with the percentages were given for baseline characteristics including entitlement or non-entitlement for army hospital, socioeconomic class, education and other studied factors. Descriptive analysis was done to find out the common factors of primary and secondary tocophobia among the samples.
RESULTS
Data from 115 women were analyzed. Out of these, 107 (93%) were entitled and 8 (7%) were private patients. Mean age in years was 27.99 ± 3.24. Lower middle class was the most common socioeconomic group, 85 (73.9%), while 15 (13%) each belonged to the upper and lower socioeconomic groups. Ethnicity was predominantly Punjabi, 88 (76.5%), with 92 (80%) living in rural and 23 (20%) in urban areas. The educational status ranged from middle, 34 (29.6%), to matric, 33 (28.7%). A bachelor's degree was held by 14 (12.2%). Secondary tocophobia was faced by multiparous women. The predominant fears in these women were prolonged labor in 15 (37.5%), fear of labor pains due to lack of pain relief in 11 (27.5%), birth trauma in 9 (22.5%) and suboptimal fetal outcome in 5 (12.5%). For secondary tocophobia with one previous cesarean section, 37 (42%) women preferred cesarean section in the next pregnancy because of bearable postoperative pain as compared to labor pains. Planned cesarean section was convenient for 20 (22.7%).
DISCUSSION
Cesarean section on request is globally on the rise, predominantly for social and psychological reasons. The term CSMR was adopted by the National Institutes of Health state-of-the-science conference in 2006 11 . They defined CSMR as a primary pre-labor cesarean delivery performed on maternal request in the absence of fetal or maternal indications. They reflected that currently the data are not adequate to justify either mode of delivery. There is a complex plethora of reasons for which CSMR is performed. Obstetricians face a constant dilemma in decision making in this situation. It is difficult to refuse the request of the patient but, at the same time, fetal and maternal risks due to anesthesia and surgery cannot be overlooked. Women who undergo cesarean in their first pregnancy are more likely to have cesarean deliveries in subsequent pregnancies 12 . In our study, CSMR was expressed by 11 (9.6%) primigravida. The maximum number of requests came from patients with a previous cesarean section, 83 (72.2%). The main factors for CSMR in our study were fear of pain and loss of control. Fenwick et al. also found childbirth fear and issues of control as the main reasons for CSMR 12 .
The Health Committee Maternity Services and the Changing Childbirth suggest a pivotal role of women in decision making 13 . This view has received criticism. Obstetric decisions should not be affected by maternal choice and fears. Our study aimed to highlight the main tocophobic factors that drive women towards CSMR. Our objective was to make obstetricians aware of the alarming rise in CSMR and the factors that contribute towards it. A North Western Carolina survey concluded that the primary reasons for maternal request were prevention of birth injury and existing medical conditions. The primary objective of these women was their infants' health rather than their own 14 . In our study, suboptimal fetal outcome was feared by 15 (14.4%) in primary tocophobia and 5 (12.5%) in secondary tocophobia.
It is difficult to exactly gauge the incidence of tocophobia, as women with different levels of tocophobia are usually included in the research. A meta-analysis by Connell and colleagues showed a prevalence of 14%. They commented that more research is required to gain a better understanding of fear of childbirth 15 . We found in our study that pain relief [19 (20.2%)] and presence of a partner [31 (33%)] were the main requests from those who opted for vaginal delivery. Connell et al. also commented that anxiety, past sexual experience, negative information from friends or relatives, and lack of self-control were the main factors for primary tocophobia. Secondary tocophobia resulted from a traumatic birth experience, post-traumatic stress disorder, birth trauma or suboptimal birth outcome 15 . In addition to these negative thoughts, women from elite social strata and professionally committed women preferred to have control over their life events, like planning the mode and time of delivery. Our findings showed that in primary tocophobia, recommendation from family or friends and anxiety were the main determinants, each seen in 22 (24.7%) women. For secondary tocophobia, 20 (22.7%) found timed cesarean delivery more convenient. Negative experiences from family and friends influenced 13 (14.8%) secondary tocophobic women.
A study from a tertiary care hospital in Sindh, Pakistan, observed CSMR as the fifth common reason for rise in cesarean section rate 16 . A Swedish registry based study showed that rate of CSMR has increased 3 fold in a ten year period but it did not significantly contribute to the overall cesarean section rate 17 . This study showed that primiparous women requesting CSMR had fear of birth and pain, safety issues, relatives' birth history and history of sexual harassment 17 . This was similar to the reasons expressed by the primary tocophobic women in our study. A Norwegian study documented 10% CSMR rate which was less than 1% of all the births at that time 18 . Emma and colleagues studied contributing factors for rising cesarean section rate and found that the rate of cesarean sections on maternal request has risen by 8% over time 19 . Another qualitative study from Norway found previous birth experience as the major determinant for fear of subsequent births 20 . This finding is consistent with findings of our study.
A cohort of six European countries was studied for the preferences of women for mode of delivery. They concluded that medical and psychological concerns are the main determinants behind the maternal request 21 . A Cochrane database review highlighted that there is no substantial evidence for performing cesarean sections for non-medical reasons. They have suggested a need for further research in this regard 22 . Our objective was to highlight women's choices and fears regarding mode of delivery and the factors that can modify them. We also feel a need for further research on CSMR to formulate a plan for tocophobic women and reduce cesarean sections performed for non-medical reasons.
In their commentary, Dweik and Sluijs highlighted that promoting a positive birth experience along with a healthy mother and child should be the most important goals of antenatal services to reduce the fear of birth 23 . In a Danish study, maternal request cesareans (MRS) were on the rise. Women who had perineal tears, emergency cesarean section and perinatal death had 1.3, 3.8 and 2.0 times more MRS in their next delivery, respectively 24 . Prolonged labor and birth trauma were common secondary tocophobic factors in our study. The availability of anesthesia in the labor room was the major concern, and lack of pain relief was expressed by 27.5% of the women in our study. A study from Beijing Obstetrics and Gynecology Hospital reflected a rise in the cesarean section rate over the last twenty years. The changing trend was the rise in cesarean sections for maternal request and previous cesarean delivery.
Our results were comparable with the results of various national and international institutes where rise in CSMR has been highlighted although not a major contributing factor for the total rise in cesarean section rate. Analysis of factors depicted that fear of childbirth, previous birth experience, social recommendations and pain were the main reason behind this rise. Pain relief and availability of an attendant in labor room were the confounding variables which can improve the patient's attitude towards vaginal delivery.
RECOMMENDATION
Larger studies in both private and public sectors are required to find out the prevalence of CSMR. The demographic, social and psychological reasons need to be evaluated to control the rising trend. Underlying anxiety and stress disorders should be addressed for improved perception of natural child birth.
CONCLUSION
CSMR has been labeled as an iatrogenic issue with a potential for improvement. A substantial rise has been seen in educated, wealthier, urban women who prefer small family size and convenience of planned delivery. These patients may benefit from more careful surveillance and counseling. Our findings can have significant health implications to control the factors and fears behind CSMR. Obstetricians, lady health workers, counselors and birth attendants need to play their role in alleviating tocophobias. The measures contributing towards acceptance of natural birth should be improved including provision of pain relief and presence of companion in labor rooms. | 2021-08-02T00:06:42.539Z | 2021-04-29T00:00:00.000 | {
"year": 2021,
"sha1": "8bea16fa4bf9b203fad83a07cec769358b7b828d",
"oa_license": "CCBYNC",
"oa_url": "https://www.pafmj.org/index.php/PAFMJ/article/download/5448/3317",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4c65e9bfb8eabd26e2523de8c0dc407f69478bed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119146710 | pes2o/s2orc | v3-fos-license | Analysis of the power flow in Low Voltage DC grids
Power flow in a low voltage direct current grid (LVDC) is a non-linear problem, just as in its ac counterpart. This paper demonstrates that, unlike in ac grids, convergence and uniqueness of the solution can be guaranteed in this type of grid. The result is neither a linearization nor an approximation, but an analysis of the set of non-linear algebraic equations, which is valid for any LVDC grid regardless of its size, topology or load condition. Computer simulations corroborate the theoretical analysis.
Motivation
Low voltage direct current (LVDC) is a promising technology for urban distribution systems, micro-grids, data centers, traction power systems and shipboard power systems [1]. It presents advantages in terms of reliability, efficiency, controllability, power density and loadability [2,3].
An LVDC grid consists of a bidirectional AC/DC converter placed in the main substation, to which different loads and generators are connected, as depicted in Fig. 1. Different elements can be connected to an LVDC grid, such as renewable energy resources, energy storage, electric vehicles and controlled loads. These elements are integrated into the grid through a power electronic converter (i.e., a constant power terminal). Consequently, the model of the LVDC grid is non-linear and requires a power flow study.
The existence and uniqueness of the solution are, obviously, sine qua non conditions for rigorous analysis of the stationary state of a grid and for determining an equilibrium point in small signal stability studies [4,5]. These are characteristics of the set of algebraic equations and not of the method used to find a solution. However, it is often difficult to determine if a solution of a set of nonlinear equations, such as those of the power flow, is unique. A non-linear problem could give several solutions, and in some cases, the solution may not even exist. Uniqueness must not be taken for granted.
dc power flow vs power flow in LVDC grids
It is important to emphasize that power flow in LVDC grids is different from "dc power flow". The first is a power flow in a grid which is actually dc and incorporates constant power terminals; while the second is a linearization of the power flow equations in ac grids which, due to a pedagogic analogy, is named in this way.
Brief state of the art
There is an increasing interest in LVDC grids and related subjects such as dc microgrids and dc distribution. Several studies have been done about the feasibility of these technologies. For instance, [1] presented a complete description of the potentialities of LVDC grids as well as their challenges. Potential pathways for increased use of dc technology in buildings was considered in [3]. A more practical approach was presented in [2] where a case study for a large distribution network was considered.
Power flow analysis in LVDC grids has been presented as an extension of well-known methodologies for ac grids such as Newton-Raphson or Gauss-Seidel [6]. Power flow sensitivities have also been studied in [7]. However, available studies in the literature are based on numerical performance, and there are no theoretical studies about uniqueness of the solution. In these studies, uniqueness is taken for granted without mathematical demonstration, in spite of the fact that a non-linear problem could give several solutions. To the best of the author's knowledge, this problem has not been addressed in LVDC grids 1 .
Contribution and scope
This paper demonstrates the existence and uniqueness of the solution of the power flow in LVDC grids. This result is general since: 1) it is independent of the numerical method; 2) it is independent of the size and load condition of the LVDC grid; and 3) it is valid for any topology of the LVDC grid. A computational simulation demonstrates the theoretical analysis using a successive approximation method.
Comparisons of the computational performance of different algorithms are beyond the scope of this paper, in order to maintain the generality of the main result. Computational performance depends on many factors, such as the implementation of the algorithm, the programming language and the size of the grid.
Organization of the paper
The paper is organized as follows: Section 2 presents the basic formulation of the power flow in LVDC grids from a practical context. Next, Section 3 demonstrates the main theoretical result followed by simulations in Section 4. Finally conclusions and references.
Power Flow in LVDC grids
The lack of reactive power and angles in LVDC grids allows some simplifications of the mathematical formulation. Nodes are classified according to the type of control, namely: constant voltage, constant power and constant resistance. Constant voltage terminals include the main substation converter and any converter along the grid which can maintain the voltage. Other converters in the grid must be represented as constant power terminals. These include renewable energy resources, energy storage devices and controlled loads, among others. Constant resistance terminals are linear loads as well as step nodes (i.e., nodes without generation or load). Droop controls can be considered as a linear combination of a constant power and a constant resistance terminal.
Mathematical formulation
Let us consider an LVDC grid as a set of nodes represented by N = {1, 2, ..., N }, which in turn is subdivided into three nonempty and disjoint subsets N = {V, R, P} according to the type of terminal, namely: constant voltage (V), constant resistance (R) and constant power (P). [Footnote 1: The problem has not been fully addressed in ac grids either; a result for LVDC grids could give an insight into general ac grids.] There is usually only one constant voltage terminal, but the methodology can be applied to a more general case with multiple voltage-controlled terminals. Branches are represented as a set E ⊆ N × N with an associated constant resistance.
Nodal voltages and currents are related by the admittance matrix G ∈ R^{N×N} through the nodal equation (1). In this case, V_V is known and I_R is given by (2), with D_RR a diagonal matrix that includes the admittances of the constant resistance terminals. Notice this matrix can be singular (e.g., in the case of step nodes). Equation (2) is used to reduce the size of the set of algebraic equations. Power-controlled terminals are associated with a non-linear equation of the form P_k = V_k I_k for each k ∈ P, which in turn can be written compactly as (6). Therefore, the state of the LVDC grid can be completely established by solving (6).
In order to analyze (6), let us define a map T : R^P → R^P as given in (7). Notice that T is a non-linear map and hence, uniqueness of the solution must not be taken for granted.
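A minimal sketch of the node partitioning and the reduction described above is given below. The 4-node feeder data, the sign convention I_R = -D_RR V_R for the linear loads, and the Kron-type elimination of the R nodes are illustrative assumptions standing in for Eqs. (1)-(6), which are not reproduced here.

```python
import numpy as np

# Toy 4-node LVDC feeder (per-unit): node 0 is the constant-voltage (V) terminal,
# node 1 is a step node (constant-resistance set R with zero load admittance),
# nodes 2 and 3 are constant-power (P) terminals. Topology and resistances are
# assumptions for illustration only.
branches = [(0, 1, 0.01), (1, 2, 0.02), (1, 3, 0.015)]   # (from, to, resistance)

N = 4
G = np.zeros((N, N))
for i, j, r in branches:                 # assemble the conductance matrix G
    g = 1.0 / r
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

V_set, R_set, P_set = [0], [1], [2, 3]   # node partition N = {V, R, P}
D_RR = np.diag([0.0])                    # step node: zero load admittance

# Absorb the linear loads (assumed convention I_R = -D_RR V_R) and eliminate the
# R nodes (Kron reduction) to obtain matrices acting on the P nodes only.
B = G.copy()
B[np.ix_(R_set, R_set)] += D_RR
B_RR_inv_RP = np.linalg.solve(B[np.ix_(R_set, R_set)], B[np.ix_(R_set, P_set)])
B_RR_inv_RV = np.linalg.solve(B[np.ix_(R_set, R_set)], B[np.ix_(R_set, V_set)])
B_PP = B[np.ix_(P_set, P_set)] - B[np.ix_(P_set, R_set)] @ B_RR_inv_RP
B_PV = B[np.ix_(P_set, V_set)] - B[np.ix_(P_set, R_set)] @ B_RR_inv_RV
print(B_PP)   # reduced conductance matrix seen from the constant-power terminals
```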
Practical considerations
Let us consider the following practical assumptions: (A1) the graph is connected (i.e., there are no islands in the feeder); (A2) there is at least one constant power terminal and one constant voltage terminal; (A3) feasible voltages remain in a given interval; (A4) short circuit currents are higher than normal operation currents for all constant power terminals.
Each of these assumptions is completely justified in real power system applications. (A1-A2) guarantee B PP is non-singular. (A3) is required for voltage regulation and for physical constraints in the converters. Finally (A4) is an obvious yet useful observation for any electric system.
Convergence Analysis
We now analyze (7) in order to determine the existence and uniqueness of the solution. To do this, we must demonstrate that T is a contraction mapping defined as follows 2 :
Definition 3.1 (contraction mapping).
Let B = {x : ‖x‖ ≤ r} be a closed ball in R^n, and let T : B → R^n. Then T is said to be a contraction mapping if there is an α < 1 such that ‖T(x) − T(y)‖ ≤ α‖x − y‖ for all x, y ∈ B. Now we can present our main result: Proposition 3.1 (Contraction of the power flow). An LVDC grid represented by (6) with the assumptions (A1-A4) has a unique solution, which can be obtained by the method of successive approximations with the map (7) and the contraction constant given by (9). Proof: In order to prove this proposition, we use the contraction mapping theorem (see [8] for details), which states that if T is a contraction mapping in B then there is a unique vector x 0 ∈ B satisfying x 0 = T(x 0 ).
Select two different voltage vectors V P and U P at the constant power terminals, with entries bounded below by v min , the minimum voltage according to assumption (A3). It only remains to establish that the constant α is lower than 1. Using a matrix norm in which r kk is the Thévenin resistance at each node (i.e., the element k on the diagonal of the matrix B_PP^{-1}), α can be expressed as the ratio between operational and short-circuit currents at minimum voltage. This value is lower than one due to (A4); hence, the proof is completed.
Remark 3.1: Notice that (9) can be directly evaluated before the power flow calculation. This condition is inherent in the system and not in the computational implementation.
Remark 3.2: The theorem guarantees uniqueness of the solution in B. As aforementioned, it does not depend on the implementation; any algorithm that achieves convergence will find a point in B.
Successive approximations
Several methodologies from ac systems can be adapted to LVDC grids. Many of these are based on the classic Gauss-Seidel and/or Newton-Raphson methods. No method is guaranteed to be faster than the others in every case. Here, a successive approximation is used since it can be directly obtained from the map T; this method applies the map T iteratively until achieving convergence, as V_P(k+1) = T(V_P(k)), where the sub-index k represents the iteration. This methodology is classic in the power systems literature. In fact, the Gauss-Seidel method is just a small modification of this principle (in this method the values of V P(k) obtained in the k-th iteration remain unchanged during the entire iteration, while the Gauss-Seidel method uses the new values as soon as they are obtained). In addition, the backward-forward sweep algorithm can be interpreted as a computationally efficient implementation of the same principle. Therefore, the analysis of these algorithms is basically equivalent.
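A minimal sketch of this successive-approximation scheme is shown below. The explicit form of the map used here, V_P <- B_PP^{-1}(diag(V_P)^{-1} P_P - B_PV V_V), is a plausible reconstruction consistent with the description of Eq. (7) rather than the paper's exact expression, and the reduced two-terminal system and per-unit values are assumed for illustration.

```python
import numpy as np

# Reduced system for two constant-power terminals (per-unit values are illustrative;
# B_PP, B_PV and V_V would come from the grid reduction described earlier).
B_PP = np.array([[120.0, -50.0],
                 [-50.0, 115.0]])
B_PV = np.array([[-70.0],
                 [-65.0]])
V_V = np.array([1.0])          # constant-voltage terminal (slack converter)
P_P = np.array([-0.5, -0.3])   # constant-power injections (negative = load)

def T(V_P):
    """One application of the fixed-point map: solve B_PP V = I_P(V) - B_PV V_V."""
    I_P = P_P / V_P                      # constant-power terminals: I = P / V
    return np.linalg.solve(B_PP, I_P - B_PV @ V_V)

V_P = np.ones(2)                         # flat start
for k in range(50):
    V_new = T(V_P)
    if np.max(np.abs(V_new - V_P)) < 1e-10:
        break
    V_P = V_new
print(k, V_P)                            # converges in a handful of iterations
```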
Computational results
A power flow was evaluated in the medium voltage dc distribution system shown in Fig. 1 with parameters given in Table 1. Convergence of the method was analyzed under different initial conditions for B = {V ∈ R^P : 0.55 < V_i < 1.5}. The contraction constant was calculated using (12) as α = 0.00475. The algorithm converged in less than 5 iterations regardless of the initial point.
The power flow was also evaluated under high-load conditions. Power at each k ∈ P was increased until the critical voltage was reached, in the same manner as in voltage stability studies for ac power systems. Results are depicted in Figs. 2 and 3. In normal operating conditions, the successive approximation algorithm converges in less than 5 iterations, but the number of iterations increases as the system approaches the critical point. As P max increases, the contraction constant α tends to 1. Nevertheless, convergence is guaranteed in a finite number of iterations even in these extreme conditions. Notice that ac systems require other methodologies, such as the continuation power flow, for calculations close to the maximum load limit.
"year": 2017,
"sha1": "b22e17b3d84ae7350a5f202d6e8ad07c9ee9fa91",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "903d1cdf074004717943fca3e2022ecd8090a7c8",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
37287211 | pes2o/s2orc | v3-fos-license | Medical journals, impact and social media: an ecological study of the Twittersphere
Background: Twitter is an increasingly popular means of research dissemination. I sought to examine the relation between scientific merit and mainstream popularity of general medical journals. Methods: I extracted impact factors and citations for 2014 for all general medical journals listed in the Thomson Reuters InCites Journal Citation Reports. I collected Twitter statistics (number of followers, number following, number of tweets) between July 25 and 27, 2015 from the Twitter profiles of journals that had Twitter accounts. I calculated the ratio of observed to expected Twitter followers according to citations via the Kardashian Index. I created the (Fifty Shades of) Grey Scale to calculate the analogous ratio according to impact factor. Results: Only 28% (43/153) of journals had Twitter profiles. The scientific and social media impact of journals were correlated: in adjusted models, Twitter followers increased by 0.78% (95% confidence interval [CI] 0.38%–1.18%) for every 1% increase in impact factor and by 0.62% (95% CI 0.34%–0.90%) for every 1% increase in citations. Kardashian Index scores above the 99% CI were observed in 16% (7/43) of journals, including 6 of the 7 highest-ranked journals by impact factor, whereas 58% (25/43) had scores below this interval. For the Grey Scale, 12% (5/43) of journals had scores above and 35% (15/43) had scores below the 99% CI. Interpretation: The size of a general medical journal's Twitter following is strongly linked to its impact factor and citations, suggesting that higher quality research received more mainstream attention. Many journals have not capitalized on this dissemination method, although others have used it to their advantage.
In response to the meteoric, unmeritocratic rise of social media celebrities via Twitter, @neilhall_uk developed the playfully dubbed Kardashian Index (K-index) to address these issues in an academic context. 12 The K-index quantifies the discrepancy between mainstream popularity and scientific merit by examining one's social media profile in relation to one's citations in peer reviewed works.
Continuing in this vein, I propose the (Fifty Shades of) Grey Scale for use with medical journals (in reference to the book, which has sold more than 125 million copies to date, despite being critically lambasted). 13 Using a similar equation to the K-index, the Grey Scale calculates the ratio of the number of actual to expected followers using journal impact factor (rather than citations, as in the K-index) as the predictor variable. Journal impact factor and total citations are closely related. Impact factor is the ratio of total citations to the number of articles published by the journal, which adjusts for journals that have many more, or fewer, citable publications (e.g., weekly or bimonthly journals). 14 Unpacking the mechanisms of Twitter celebrity is difficult. Personal Twitter profiles often include humour, wit and other attributes not normally attributed to the reporting of a new paper, as per general medical journal Tweets. By eliminating the individuality of the Tweet, looking only at medical journals' Twitter profiles rather than individual researchers', a more direct examination of the relation between Twitter celebrity and scientific merit is possible. Although Tweets linking to papers have been associated with greater citations than non-Tweeted papers, 15 whether or not this translates into greater Twitter followings for the authors and the journal in which the paper was published has yet to explored. The relation between the number of Twitter followers and impact factor scores has recently been investigated in urology journals, where nonsignificant correlations between the number of Twitter followers and the impact factor of the journal were found. 16 The current study seeks to examine whether scientific merit (captured by journal impact factor and citations) translates into Twitter celebrity (i.e., number of followers) in general medical journals.
Data source
The Thomson Reuters InCites Journal Citations Report (https://jcr.incites.thomsonreuters.com) is a platform used to compare statistics on peer-reviewed journals, notably impact factor and citations. Journals in the "Medical, General & Internal" Web of Science category schema were selected and their 2014 data extracted.
Twitter profiles for all of the journals identified in the Journal Citations Report were searched between July 25 and 27, 2015, and data extracted on number of followers, number following and tweets sent.
Statistical analysis
All procedures were carried out in Stata 13. Given the nonnormal nature of the data, nonparametric and log-transformed procedures were used throughout.
Spearman correlations were conducted to examine the relation of number of followers with journal impact factor and citations.
Log-log regression models were conducted to examine the relation of number of followers with journal citations and impact factor in unadjusted and adjusted (for number of tweets) models. More active Twitter accounts are generally associated with greater numbers of followers; therefore, regression models were adjusted for number of Tweets.
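A sketch of the adjusted log-log model is given below; the study used Stata, so the statsmodels call and the six-journal toy data are illustrative stand-ins for the actual analysis and dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data (invented, for illustrating the model form only): one row per journal.
df = pd.DataFrame({
    "followers": [150, 900, 4200, 21000, 80000, 250000],
    "tweets":    [120, 600, 1500,  3000,  9000,  20000],
    "impact":    [0.8, 1.9,  3.2,   7.5,  17.0,   50.0],
})

# Unadjusted and adjusted (for number of tweets) log-log models; the coefficient
# on log(impact) is an elasticity: the % change in followers per 1% change in IF.
unadj = smf.ols("np.log(followers) ~ np.log(impact)", data=df).fit()
adj = smf.ols("np.log(followers) ~ np.log(impact) + np.log(tweets)", data=df).fit()
print(adj.params)
print(adj.conf_int(alpha=0.05))
```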
Kardashian Index scores 12 were calculated for each journal using Hall's equation, K-index = F(a)/F(e) with F(e) = 43.3 × C^0.32, where F(a) is the actual number of Twitter followers, F(e) is the expected number of followers and C is the number of citations.
(Fifty Shades of) Grey Scale scores were calculated using a log-adjusted regression equation for estimating the number of expected followers, derived from the dataset in the present study (i.e., the data for Twitter followers, tweets and impact factors that were collected on the general medical journals were used to find the best-fitting regression equation, which yielded the coefficients 0.79 for the observed association of tweets and 0.78 for the observed association of impact factor with followers), giving an expected-followers equation of the form F(e) = c × T^0.79 × I^0.78, where F(e) is the expected number of Twitter followers, T is the number of tweets, I is the impact factor of the journal and c is the fitted constant.
Thus, the Grey Scale is a measurement of the degree to which any given data point diverges from the observed average relation of tweets and impact factor with followers, analogous to the Kardashian Index: Grey Scale = F(a)/F(e). As per Hall, 12 K-index scores of more than 5 suggest a "Science Kardashian"; that is, a disproportionately high number of followers when compared with citations. In addition to Hall's threshold, journals that fell beyond 99% confidence intervals (CIs) were identified in the K-index and Grey Scale.
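The two ratios can be sketched as follows. Hall's expected-followers fit F(e) = 43.3 × C^0.32 is taken from the original K-index paper, while the constant c in the Grey Scale function is a placeholder for the fitted intercept, which is not reported here; the example journal figures are invented.

```python
def kardashian_index(followers, citations):
    """K-index = actual / expected followers, with Hall's fit
    F(e) = 43.3 * C^0.32 for the expected number of followers."""
    expected = 43.3 * citations ** 0.32
    return followers / expected

def grey_scale(followers, tweets, impact_factor, c=1.0):
    """Grey Scale = actual / expected followers, with expected followers modelled
    as c * T^0.79 * IF^0.78 (exponents from the study; c stands in for the fitted
    constant and is set to 1.0 here as a placeholder)."""
    expected = c * tweets ** 0.79 * impact_factor ** 0.78
    return followers / expected

# Invented example journal: 5,000 followers, 1,000 tweets, IF 3.0, 4,500 citations.
print(kardashian_index(5000, 4500))   # > 5 would flag a "Science Kardashian"
print(grey_scale(5000, 1000, 3.0))
```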
Results
The Thomson Reuters InCites Journal Citations Reports identified 153 journals in their general and internal medical categories, of which 43 had Twitter accounts. Twitter accounts for publishers of the journals were not included (Appendix 1, available at www.cmaj.ca/lookup/suppl/doi: 10.1503/cmaj.150976/-/DC1).
Journal characteristics and Twitter characteristics varied greatly among the 43 journals with Twitter accounts. Impact factors ranged from 0.24 to 55.87, with a mean of 6.56 (SD 11.82) and median of 2.26. Total journal citations for 2014 ranged from 1269 to 268 652, with a mean of 26 291 (SD 58 074) and median of 4327. Journals' total number of Twitter followers ranged from 4 to 277 451, with a mean of 24 065 (SD 59 805) and a median of 1427. Journals' total number of profiles followed on Twitter ranged from 0 to 5087 with a mean of 682 (SD 979) and a median of 289. Journals' total number of tweets ranged from 3 to 20 821 with a mean of 3027 (SD 4378) and a median of 994.
Significant positive Spearman correlations were seen between number of followers and journal impact factor (r = 0.68, p < 0.001) and journal citations (r = 0.69, p < 0.001).
Unadjusted log-log regression models showed significant relations between number of followers and journal impact factor (p < 0.001; Figure 1) and journal citations (p < 0.001; Figure 2). A 1.00% increase in impact factor was associated with a 1.46% (95% CI 1.00%-1.93%) increase in Twitter followers. A 1.00% increase in journal citations was associated with a 1.09% (95% CI 0.75%-1.44%) increase in Twitter followers. In log-log regression models adjusted for number of tweets, a 1.00% increase in impact factors was associated with a 0.78% (95% CI 0.38-1.18) increase in Twitter followers, and a 1.00% increase in journal citations was associated with a 0.62% (95% CI 0.34%-0.90%) increase in Twitter followers. The variance in number of followers explained was roughly equal in the adjusted citations (R 2 = 0.76, p < 0.001) and impact factor models (R 2 = 0.75, p < 0.001).
The average Grey Scale score was 25.62 (SD 52.64). Five journals fell beyond the upper 99% CI (i.e., Grey Scale score ≥ 46.30) and 15 below (i.e., Grey Scale score ≤ 4.94). More highly ranked journals were more likely to have high K-index ( Figure 3) and Grey Scale (Figure 4) scores.
Interpretation
A strong and independent relation was shown between general medical journals' number of Twitter followers, the journal's impact factor and total number of citations.
Owing to the ecological nature of the data, establishing a causative relation is not possible. Furthermore, the journal citation and impact factor data were collected for 2014, whereas the Twitter data were collected in July 2015. The directionality of the influence of impact factor and followers cannot be established conclusively because of these limitations. Only 28.1% (43/153) of journals indexed by the Thomson Reuters InCites Journal Citations Report in the "Medical, General & Internal" Web of Science category had Twitter profiles. Several journals had links to their publisher's Twitter profile, such as all of the BioMed Central journals, but these were excluded from the current study; therefore, some journals with social media profiles were excluded from these analyses.
The application of the K-index to journals, rather than to individuals, is beyond the scope of the original metric. 12 The number of followers for the individuals captured in the original K-index paper is much lower than those of the journals included in these analyses. Therefore, the K-index scores may have been unrepresentatively high. The Grey Scale is a rudimentary metric for the examination of impact factor and Twitter followers based on the available general medical journals; therefore, it may not be generalizable to other journal types.
There were a number of outlying journals that showed disproportionately high numbers of followers despite their comparatively low impact factor and citations: Hall's so-called Science Kardashians. 12 This moniker may be somewhat harsh; despite their discrepant mainstream popularity, these journals are generally regarded as reputable. For this reason, the K-index threshold of 5 or more may be misleading. The present study provides an alternative and data-driven means with which to examine outliers, such as by using a 99% confidence interval. Further refinement of the K-index and the Grey Scale is warranted. Cognisant that these metrics have been proposed in jest, the K-index and Grey Scale do, however, prod at the tender underbelly of science's unspoken popularity contest.
The demonstration of a positive relation between mainstream popularity and scientific merit is encouraging. It must, however, be acknowledged that the measures used to capture this relation -Twitter followers and journal impact and citations -use proxies. Although social media use is incredibly pervasive, less than a third of general medical journals have used Twitter, so this may not be the best proxy for mainstream media popularity. There has been some controversy over the use (and abuse) of impact factor as a meaningful metric for capturing scientific merit, 17,18 suggesting that further refinement is warranted. High-quality journals are garnering the greatest online followings, which hopefully will translate into a greater number of people absorbing high-quality, evidence-based research. These results stand in contrast to a recent study of urology journal impact factors and Twitter followings, where no relation was seen. 16 That study included just 8 journals compared with 43 in the current study, which may have resulted in reduced statistical power concealing the relation. 16 Further investigation into the mechanisms and direction of causality in this relation is warranted.
The original application of the K-index was to individuals, identifying discrepant social media popularities. 12 Many people captured in the Hall study have media profiles that expand beyond their scientific work; for example Neil deGrasse Tyson (@neiltyson) has, among other public appearances, hosted the science-based radio program "StarTalk." In the current study, these methods have been applied to medical journals, eliminating variables beyond scientific merit, such as humour, which people often show in their Twitter posts. A metric specific to the examination of journals has been developed in the current study, highlighting the association between a journal's scientific merit and its mainstream popularity.
With only 28.1% of general medical journals hosting a Twitter profile, this means of dissemination is greatly underused. These results are in line with the Twitter participation of urological journals (24.2%). 16 However, the uptake of Twitter among physicians is on the rise, 19 a trend that will hopefully extend to medical journals. Rather than identifying large numbers of Science Kardashians, the present study shows that many more journals are closer to "Popularity Franklins": they have received lower levels of recognition and popularity than would be warranted by their scientific merit, as per Rosalind Franklin. One of the considerable barriers between medical research and the general public is the means of communication and knowledge translation. Twitter is a hospitable middle ground where the lay reader need not pore through a peer-reviewed journal to extract its actionable pieces of information, 140 characters at a time. 9,19
Conclusion
There is a positive relation between the scientific merit of general medical journals and the journal's Twitter following. Twitter is an underused means of communication, with less than a third of medical journals hosting a profile. With the exception of a few outliers, most journals had Twitter followings that corresponded with their impact factor and citations. These results suggest that higher impact science is reaching a greater proportion of the general public than lower impact science. In an era in which engaging in social media has become a part of medical research, the demonstration of Twitter as an effective means of research dissemination may make Beliebers of us all. | 2018-04-03T04:00:57.941Z | 2015-12-08T00:00:00.000 | {
"year": 2015,
"sha1": "8e4584a09b23fd173c45927e84221be7b6a2ae4b",
"oa_license": null,
"oa_url": "https://www.cmaj.ca/content/cmaj/187/18/1353.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "1d8879f8d790eb3eedf68275e74ba403f709e623",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44072751 | pes2o/s2orc | v3-fos-license | Increased Frequency of Immune Thrombocytopenic Purpura in Coeliac Disease and Vice Versa: A Prospective Observational Study
Introduction Coeliac disease (CD) and immune thrombocytopenic purpura (ITP) are immune conditions, often associated with other immune disorders. In recent years, increasing attention has been directed towards the association between ITP and CD. Aim To investigate the frequency of ITP in CD patients and vice versa and to assess the risk of their association. Patients and Methods This was a prospective observational study. All consecutive patients with CD or ITP attending our department were enrolled between January 2016 and December 2017. All patients with CD were screened for ITP and patients with ITP for CD. Odds ratios (ORs) were calculated based on the prevalence in the general population. Results Two hundred sixty-one CD patients (212 female, mean age 47 ± 16.1 years) and 32 ITP patients (17 female, mean age 57.8 ± 17.4 years) were enrolled. In the CD cohort, two patients (2/261; 0.8%) reported a previous diagnosis of ITP, compared to the general population; OR was 15.3 (95% CI, 3.82–61.73; p < 0.0001). Similarly, in the ITP cohort, two patients (2/32; 6.3%) had a previous diagnosis of CD (OR: 9.89, 95% CI, 2.27–43.16; p = 0.0002). Discussion A greater frequency of ITP in coeliac patients and vice versa was observed in our study, suggesting an increased risk for patients of developing both disorders.
Introduction
Coeliac disease (CD) is an immune disorder affecting the small intestine, triggered by the ingestion of gluten in genetically susceptible individuals [1]. The prevalence of CD worldwide has been estimated between 0.5% and 1% [2,3], in particular, a recent study based on data from the Italian general population reported a prevalence of 0.7% [4]. Coeliac disease is often associated with other autoimmune disorders, maybe through a shared immune-related pathogenesis [5,6]. We have recently observed an increased prevalence of chronic autoimmune disorders in coeliac patients and more specifically a trend for polyautoimmunity [7].
In the last years, an increasing attention has been directed towards the association between immune thrombocytopenic purpura (ITP) and CD. ITP is a rare acquired thrombocytopenia caused by autoantibodies against platelet antigens [8]; the estimated prevalence in the general population is 0.005% [9].
Several case reports described the coexistence of CD and ITP [10][11][12][13][14][15][16][17][18][19]. The first large study designed to investigate this association confirmed an increased risk to develop ITP in coeliac patients and vice versa [20]. Afterwards, two smaller observational studies showed contradictory results. For example, in a cohort of 21 children with ITP from Switzerland, Rischewski et al. were not able to find any case of CD [21], while in a more recent study, Sarbay and colleagues observed one case of CD in a group of 29 ITP patients [22].
The aim of this study was to evaluate the frequency of ITP in a cohort of coeliac patients and conversely, the occurrence of CD in patients affected by ITP.
Study Design.
This was a prospective observational study. Consecutive patients with a diagnosis of CD attending the Gastrointestinal Service at the Clinica Medica, and patients followed for ITP attending the Hematological Service were enrolled in the study from January 2016 to December 2017. Coeliac patients were accurately investigated for a current or previous diagnosis of ITP [23,24]. Moreover, patients with ITP were studied for CD according to the current guidelines. Briefly, patients were evaluated for serological markers of CD; and if indicated, they were tested for genetic susceptibility and eventually underwent upper endoscopy [25,26]. Data including age, sex, and body mass index (BMI) were collected from each study subject. In addition, information regarding previous or concurrent illnesses was retrieved by directly questioning and by screening charts. For patients who underwent multiple visits within the study period, only information from the last visit was included.
Eligibility Criteria.
Adult patients with a definite diagnosis of CD or ITP who were willing to give their written informed consent were eligible for the study. The diagnosis of CD was established on the basis of a combination of clinical features, biochemical testing, serological markers, and histopathological alterations of the duodenal mucosa according to the current international guidelines [25,26]. The diagnosis of ITP was established according to the international consensus report [23].
Statistical Analysis.
Patients included in the study were stratified into two independent cohorts (CD patients and ITP patients). In case of coexisting disease, patients were classified according to the disease that appeared earlier. The frequency of ITP was calculated in the CD cohort; similarly, the frequency of CD was estimated in the ITP cohort. Odds ratios (ORs) with their 95% confidence intervals (CIs) were calculated to estimate the strength of associations between CD and ITP; the results were considered significant when p values were less than 0.05. ORs were calculated based on the prevalence in the general population [4,9]. All the statistical analyses were performed using the SPSS statistical package (version 16.0, Chicago, IL).
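One plausible way of computing such an odds ratio against a fixed population prevalence is sketched below; the normal-approximation confidence interval on the cohort log odds and the example figures are assumptions for illustration, not necessarily the exact procedure used in SPSS.

```python
import math

def odds_ratio_vs_population(cases, n, prevalence, z=1.96):
    """Odds of the condition in the cohort divided by the odds implied by the
    general-population prevalence (treated as a fixed reference). The CI uses a
    normal approximation on the cohort log odds only - an illustrative choice."""
    cohort_odds = cases / (n - cases)
    population_odds = prevalence / (1 - prevalence)
    or_ = cohort_odds / population_odds
    se_log = math.sqrt(1 / cases + 1 / (n - cases))
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Placeholder figures: 2 cases of a comorbidity in a cohort of 261,
# against an assumed population prevalence of 0.05%.
print(odds_ratio_vs_population(2, 261, 0.0005))
```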
Results
A total of 293 patients were enrolled in the study. More specifically, 261 were CD patients (212 female, mean age 47 ± 16.1 years) (Table 1), and 32 were ITP patients (17 female, mean age 57.8 ± 17.4 years). In the CD cohort, two patients were previously diagnosed with ITP (2/261; 0.8%). New cases of ITP were not observed during the study period. Assuming a 0.005% prevalence of ITP in the general population, the calculated OR was 15.3 (95% CI: 3.82-61.73; p < 0.0001).
In the group of ITP patients, two (2/32; 6.3%) had received a diagnosis of CD before the study period; the remaining 30 patients were investigated for asymptomatic CD and no new cases were found. The final prevalence of CD in patients with ITP was therefore 6.3%; assuming a 0.7% prevalence of CD in the general population, the OR was 9.89 (95% CI: 2.27-43.16; p = 0.0002).
Coexistence of CD and ITP was observed in 4 subjects (3 females and 1 male) (Table 2). Two females developed CD first; only one of them had a history of other autoimmune disorders (rheumatoid arthritis). The third female, aged 19 years, had a history of chronic ITP in childhood and developed CD at the time of an ITP flare; she also had concurrent Hashimoto's thyroiditis and a family history of CD and type 1 diabetes. Finally, the only male subject who developed CD had a previous diagnosis of ITP and type 1 diabetes.
Discussion
In this study, a statistically significant association between CD and ITP was observed. To calculate the risk in our cohorts, Italian data for CD (prevalence 0.7%) [4] and European data for ITP (0.005%; there are no official data on ITP prevalence in Italy) [9] were used as reference. The risk of developing the second disease in CD or ITP patients was greater than expected from the general population.
A previous case series by Sarbay et al. [22] reported a lower prevalence (3.4%, 1/29), and Rischewski et al. reported the absence of CD (0/21) among ITP patients [21]. The discrepancy between our study and these two case series may be partially explained by sampling bias, given the small size of the cohorts and the rarity of the disease.
On the other hand, our findings confirm the previous observations of Olen et al. [20], who found a positive association between CD and ITP regardless of the order of onset. They reported that patients with ITP had a 2.96-fold risk of developing CD and, similarly, that patients with CD had an increased risk of subsequently developing ITP (HR 1.91) [20].
A trend towards polyautoimmunity is a recurrent feature among case reports; more specifically, an association of Hashimoto's thyroiditis or other autoimmune diseases with CD and ITP has been reported [11,12,16,17]. Several autoimmune conditions are frequently associated with CD, including autoimmune thyroiditis and type 1 diabetes [7]. Our study confirmed the increased risk of developing CD in patients with ITP and vice versa.
In conclusion, based on our results, the possibility of developing ITP in patients with CD, and vice versa, should be taken into account, especially when additional autoimmune diseases are present.
These observations suggest the need to perform larger studies, in particular to assess environmental and genetic risk factors that predispose to polyautoimmunity.
"year": 2018,
"sha1": "87d45bf1b490485a9e9223d011c604ab73e8b4fb",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/grp/2018/4138434.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7a3719b1e913f92b90d680924d33757093de46e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Sequential Application and Post-Test Probability for Screening of Bladder Cancer Using Urinary Proteomic Biomarkers: A Review-Based Probabilistic Analysis
Background: Bladder cancer is one of the most common cancers in the world, with men affected more often than women. Diagnosis by cystoscopy, cytology, and biopsy is invasive, and urine cytology, although non-invasive, is not sensitive. This study was undertaken to evaluate whether non-invasive urinary proteomic profiling is more sensitive and specific for bladder cancer. Objective: To evaluate the sensitivity and specificity of various urinary proteomic biomarkers as a screening tool for bladder cancer. Methods: The PubMed database was searched from 4th December 2011 to 30th November 2021 using MeSH terms, and n = 10,364 articles were found. PRISMA guidelines were followed, and review articles, animal studies, urinary tract infection studies, non-bladder cancer articles, and other irrelevant articles were excluded. All studies that reported mean/median (SD/IQR), sensitivity, specificity, and cut-off values (ROC analysis) were included (n = 5). The post-test probability of the various biomarkers was calculated using a sequential approach, and the pooled analysis was depicted using a forest plot. Results: Analysis of diagnostic studies of bladder cancer showed that the post-test probability of CYFRA21-1 was 36.6%. Using the sequential approach, the panel of biomarkers CYFRA21-1, CA-9, APE-1, and COL13A1 had a post-test probability of 95.10% for diagnosing bladder cancer. Analysis of two observational studies of APO-E (n = 447) showed a non-significant increase of APO-E levels in bladder cancer cases (WMD: 66.41, 95% CI 52.70-185.51; p = 0.27, I² = 92.4%). Conclusion: In patients presenting with hematuria, a panel of CYFRA21-1, CA-9, APE-1, and COL13A1 markers can be considered for screening for bladder cancer.
Introduction
Recurrence in most cases is seen within five years, and tumor progression is commonly seen in patients with higher-grade lesions (Jordan et al., 1987).
Cystoscopy and cytology are mainly relied upon for the diagnosis of bladder cancer (BC). Most papillary and solid lesions are detected by cystoscopy, but the procedure is invasive (Jordan et al., 1987). Urine cytology is non-invasive with reasonable specificity and sensitivity for the detection of high-grade BC; however, for low-grade tumors its sensitivity ranges from only 4% to 31% (Lotan and Roehrborn, 2003). Because of these limitations for clinical detection, there is a need for non-invasive urinary biomarkers for the diagnosis of BC.
Early detection remains one of the critical issues in BC research. The probability of successful patient treatment largely depends upon the stage of detection of BC. The development of a non-invasively obtained urine biomarker assay would be of great help not only for the diagnosis but also for screening asymptomatic populations at risk.
Therefore, this review was undertaken to identify better and more promising urinary proteomic markers for the screening of bladder cancer.
Materials and Methods
The literature search was done in the NCBI PubMed database using MeSH search strategies for the period from 4th December 2011 to 30th November 2021. PRISMA guidelines were followed.
Search strategy
The following search string was developed: (((((("urinary
Inclusion criteria
(a) study subjects were humans; (b) articles written in English; (c) studies where urinary proteomics in bladder cancer were compared with controls; (d) studies providing biomarker data as mean/median values and standard deviation/interquartile range; (e) studies providing a method description for urinary proteomic marker estimation; (f) all studies irrespective of the staging of bladder cancer.
Exclusion criteria
(a) animal studies; (b) urinary tract infections; (c) studies with no control group; (d) studies where proteomic analysis was done in serum or tissues; (e) reviews, meta-analyses, commentaries, and letters to the editor.
A total of 10,364 articles were obtained with a basic search using the key terms urinary proteomics and cancer in the PubMed database. Before screening, 1,226 review articles, 5,040 animal studies, 442 articles on urinary tract infections, 451 articles published prior to 4th December 2011, and 2,019 non-bladder cancer articles were excluded. All authors screened and extracted data from the articles independently; duplicate articles were examined by all authors, and consensus was reached on which original data to include. Title screening was done for the remaining 1,186 articles, of which 1,148 were excluded. After abstract screening of the remaining 38 articles, 16 were excluded. Full-text (PDF) screening was done for the remaining 22 articles, after which 17 were excluded. A total of 5 studies were included in the review (Figure 1).
Data extraction
The following data were recorded from the studies: first author name, place of study, study type and year of publication, number of cases and control subjects, numbers of male and female subjects in each group, mean/median age, protein marker studied, and mean and SD of the protein markers. Other study characteristics such as sensitivity, specificity, and positive and negative predictive values were also retrieved (Tables 1, 2).
Statistical analysis
The positive likelihood ratio (LR+), calculated as sensitivity / (1 − specificity), and the post-test probability were calculated for the different biomarkers (Tables 2, 3). Pre-test odds were calculated as pre-test probability / (1 − pre-test probability), post-test odds as pre-test odds × LR+, and post-test probability as post-test odds / (post-test odds + 1) (Garudadri et al., 2011). The primary studies reported accuracy for single biomarkers; however, a sequential approach (adding biomarkers to the panel one at a time and recalculating accuracy) or a simultaneous approach (adding several biomarkers at the same time) may yield better accuracy than single biomarkers. Hence, we applied a post-laboratory sequential approach to recalculate the accuracy of a panel of biomarkers. The prevalence of any type of bladder cancer among individuals with hematuria, reported as 12% in the Khadra (2000) study, was taken as the pre-test probability. The marker with the highest specificity was used as the first test in the sequential panel, because higher specificity yields fewer false positives. The biomarker with the highest specificity, cytokeratin 19 fragment (CYFRA21-1), was therefore considered first for the calculation of post-test probability, and the biomarker with the next highest specificity, carbonic anhydrase 9 (CA-9), was considered next; the post-test probability of CYFRA21-1 was used as the pre-test probability for CYFRA21-1 + CA-9, and the calculation proceeded in the same way for the remaining markers. Further, the levels of urinary APO-E (apolipoprotein E) in patients with and without bladder cancer were compared using the weighted mean difference (WMD), and results were graphically depicted as forest plots.
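The sequential calculation described above can be expressed compactly in code. The sketch below (Python) simply chains Bayes' theorem in odds form across the panel; the sensitivities marked "assumed" are placeholders where this text quotes only specificity, so the output is illustrative and will not reproduce the exact published figures.

```python
def lr_positive(sensitivity, specificity):
    """Positive likelihood ratio: LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1.0 - specificity)

def post_test_probability(pre_test_prob, sensitivity, specificity):
    """Bayes' theorem in odds form: post-test odds = pre-test odds x LR+."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr_positive(sensitivity, specificity)
    return post_odds / (1.0 + post_odds)

# Markers ordered by decreasing specificity, as in the sequential panel.
panel = [
    ("CYFRA21-1", 0.75, 0.824),   # sensitivity assumed; specificity 82.4%
    ("CA-9",      0.6885, 0.805),
    ("APE-1",     0.817, 0.796),
    ("COL13A1",   0.75, 0.771),   # sensitivity assumed; specificity 77.1%
]

prob = 0.12  # pre-test probability: ~12% bladder cancer prevalence in hematuria
for name, sens, spec in panel:
    prob = post_test_probability(prob, sens, spec)
    print(f"after {name}: post-test probability = {prob:.3f}")
```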
Discussion
The study was undertaken mainly to identify non-invasive urinary proteomic biomarkers that are more sensitive and specific for the early detection of bladder cancer. Sequential analysis of the biomarkers with the highest specificity yielded a four-biomarker panel (CYFRA21-1, CA-9, APE-1, COL13A1) with a high post-test probability of 95.10%. The included studies had patients with low-grade to high-grade cancer. Thus, the four-biomarker panel can be used for screening patients presenting with hematuria for the early detection of bladder cancer.
Demographic and clinico-pathological characteristics of the patients in the different studies are shown in Table 1, and the diagnostic characteristics of the different biomarkers are shown in Table 2.
A combination of four biomarkers (CYFRA 21-1, CA-9, APE-1, COL13A1) showed a high post-test probability of 95.10% for screening of bladder cancer (Table 3: post-test probability calculation by sequential approach). There was little change in post-test probability when the biomarkers IL-8, SDC-1, PAI-1, A1AT, COL4A1, CCL-18, MMP-10, VEGF, MMP-9, APO-E, and ANG were added using the sequential approach. Hence the panel of CYFRA 21-1, CA-9, APE-1/Ref-1, and COL13A1 is a good model for screening of bladder carcinoma. Among the studied biomarkers, CYFRA21-1 had the highest specificity (82.4%) when used as a single marker. CA-9 is a tumor-associated, cell-surface glycoprotein induced by hypoxia, involved in adaptation to acidosis, and implicated in cancer progression via its catalytic activity and non-catalytic functions. Increased urinary CA-9 levels in bladder cancer were reported in the included studies (e.g., Chen et al., 2014). As a single marker it had a specificity of 80.5% and a sensitivity of 68.85%, whereas in one of the reported biomarker panels its specificity and sensitivity increased to 97% and 92%, respectively.
Apurinic/apyrimidinic endonuclease 1 (APE-1) is a multifunctional redox-signaling and DNA-repair protein that increases with unregulated cellular proliferation. Studies (Shin et al., 2015; Choi et al., 2016) reported increased expression in serum and significantly higher urinary levels of APE-1 in BC patients compared with healthy controls. The levels not only correlated with tumor grade and stage but were also higher in patients with a history of recurrence. Used singly, APE-1 offers a good combination of high specificity (79.6%) and high sensitivity (81.7%).
Collagen type 4A1 (COL4A1) is predominantly localized in the stroma around tumor cells, promoting angiogenesis and tumor progression. Collagen type 13A1 (COL13A1), a transmembrane protein expressed at cell-matrix junctions, supports vital oncogenic properties of tumor invasion and is strongly associated with poor clinical outcomes in human BC (Hagg et al., 1998). One of the included studies reported significantly higher levels of COL4A1 and COL13A1 in bladder cancer cases compared with healthy controls. Of these two, COL13A1 had higher specificity (77.1%) than COL4A1 (68.9%).
Numerous other biomarkers (IL-8, MMP9, SDC1, CCL-18, A1AT, ANG, MMP-10, APO-E, PAI-1, VEGF) have been reported in various studies, with specificities ranging from 51.9% to 75.2% and sensitivities from 68.2% to 88.5%. Using a panel that combines markers gives better specificity and sensitivity for screening, as seen in earlier panel studies. We analyzed a novel panel of four biomarkers (CYFRA 21-1, CA-9, APE-1, COL13A1), of which CYFRA 21-1, APE-1, and COL13A1 have never been studied together as a panel. Inclusion of these biomarkers as a panel increased the post-test probability to 95.10%.
Significantly higher levels of urinary apolipoprotein E (APO-E) have been reported in bladder cancer, and APO-E significantly differentiated high-grade from low-grade bladder cancer (Chen et al., 2014). Because APO-E was a consistent marker in the reported panels and has also been widely studied individually, we performed a pooled analysis, which showed higher but non-significant levels in bladder cancer cases. However, these data should be interpreted with caution because of the high heterogeneity between studies. Although APO-E carried good weight in the forest plot, it was not included in the panel of biomarkers because of its low specificity.
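For readers wishing to reproduce this kind of pooled comparison, a minimal DerSimonian-Laird random-effects sketch is given below. The per-study mean differences and variances are not listed in this text, so the function is shown without the study inputs; it illustrates the general random-effects method rather than the authors' exact software workflow.

```python
import math

def dersimonian_laird(effects, variances, z=1.96):
    """Random-effects pooled estimate (DerSimonian-Laird) with I^2 heterogeneity.
    effects: per-study mean differences; variances: their squared standard errors."""
    w = [1.0 / v for v in variances]                                 # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))      # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]                   # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0            # I^2 (%)
    return pooled, (pooled - z * se, pooled + z * se), i2
```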
Using a panel of biomarkers is of greater utility than an individual marker in the screening of bladder cancer. The panel of four urinary biomarkers CYFRA 21-1, CA-9, APE-1, and COL13A1, with a high post-test probability of 95.10%, can be considered for screening of bladder cancer in patients presenting with hematuria. Our study has a few limitations. There was high heterogeneity among the studies, owing to differences in study populations, sample sizes, and the number of patients at different stages of disease, and further validation with larger sample sizes is needed. The post-test probability using the sequential panel approach was calculated from the cumulative published data; the post-test probabilities may differ if the sequential panel approach is evaluated in a laboratory setting.
Author Contribution Statement
Study concept and design: Aparna Varma Bhongir, Sangeetha Sampath, Gomathi Ramaswamy; Data acquisition: Rohit Kumar Bonthapally, Aparna Varma Bhongir, Sangeetha Sampath, Gomathi Ramaswamy.
Figure 2. Meta-analysis of the association between levels of urinary APO-E and bladder cancer. Forest plot detailing the weighted mean difference (WMD) and 95% confidence interval for the association between levels of urinary APO-E and bladder cancer. Weights of the studies are from a random-effects model. APO-E: apolipoprotein E.
"year": 2023,
"sha1": "396370f919a618999d7223b3b27644978c2c5629",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c8307cedaa22a286e4f061fabbf64d2443831fd5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Metabolomics analysis of pathways underlying radiation-induced salivary gland dysfunction stages
Salivary gland hypofunction is an adverse side effect of radiotherapy in head and neck cancer patients. This study delineated metabolic changes at the acute, intermediate, and chronic radiation damage response stages in mouse salivary glands following a single 5 Gy dose. Ultra-high performance liquid chromatography-mass spectrometry was performed on parotid salivary gland tissue collected at 3, 14, and 30 days following radiation (IR). Pathway enrichment analysis, network analysis based on metabolite structural similarity, and network analysis based on metabolite abundance correlations were used to incorporate both metabolite levels and structural annotation. The greatest number of enriched pathways is observed at 3 days and the lowest at 30 days following radiation. Amino acid metabolism pathways, glutathione metabolism, and central carbon metabolism in cancer are enriched at all radiation time points across the different analytical methods. This study suggests that glutathione and central carbon metabolism in cancer may be important pathways in the unresolved effects of radiation treatment.
Introduction
Over 54,000 new cases of head and neck cancer (HNC) are estimated annually in the United States by the American Cancer Society, with approximately 5% of these cases resulting in death [1]. Radiotherapy, combined with chemotherapy and surgery, is the dominant treatment for HNC and is effective in eliminating tumors and preventing cancer recurrence [2,3]. Due to the proximity of the salivary glands to HNC tumors, they are indirectly damaged by radiotherapy and lose secretory function because of their high radiation sensitivity [4][5][6]. Acutely, oral mucositis and loss of saliva production begin to occur within the first week of radiotherapy as acinar cells are lost and glandular shrinkage occurs [6,7]. Hyposalivation persists chronically in over 80% of HNC patients [8][9][10]. Failure of the acinar cells to functionally repair and regenerate contributes to chronic salivary gland dysfunction, with the level of acute damage reflecting the level of chronic complications, such as dental caries, periodontitis, and nutritional deficiencies [11][12][13][14]. The underlying mechanisms regulating chronic radiation-induced salivary gland dysfunction are not well understood [6]. Current treatments, such as sialagogues, only temporarily relieve symptoms without restoring function to the damaged salivary gland and are associated with unpleasant side effects and inconsistent results [15,16]. Therefore, the underlying mechanisms of radiation-induced salivary gland damage need to be identified in order to find ways of restoring function to the damaged gland.
Metabolomics is a powerful platform that reflects the effects of disease or damage in cells, tissues, or biological fluids, since the metabolic phenotype is the product of multiple environmental factors in combination with genetic factors [17][18][19][20]. Small changes in gene expression or protein levels are magnified at the metabolite level, which suggests that changes in metabolite levels may provide a clearer reflection of an organism's phenotype than changes at the gene and protein level [21]. Metabolomics has been used to identify mechanisms underlying the radiation-damage response in multiple tissue types. A study of the effects of different radiation dosages (0, 2, and 20 Gy) on the intestinal tissue metabolite profile in C57BL/6 mice revealed increases in amino acid levels that correlated with increased radiation dosage, which the authors suggested might reflect oxidative stress, and they proposed amino acid metabolism as a possible therapeutic target to alleviate radiation-induced intestinal toxicity [22]. A study of the effect of 12 Gy partial-body radiation on liver, kidney, heart, lung, and small intestinal tissue of non-human primates at acute and intermediate damage time points revealed temporal changes in both citrulline and branched-chain amino acids in all tissue types [23], which may underlie chronic damage.
Our previous work integrated metabolites and transcripts altered in response to radiation treatment at five days post-IR in mouse parotid salivary gland tissue and identified joint pathway enrichment for glutathione metabolism, energy metabolism (TCA cycle and thermogenesis), bile acid production, and peroxisomal lipid metabolism [24]. Decreases in apical/basolateral polarity and increases in compensatory proliferation in the acinar cell compartment have been demonstrated at five days post-IR, with continued increases in compensatory proliferation correlated with chronic loss of secretory function [25,26]. Since loss of secretory function has been previously demonstrated to begin as early as three days post-IR and to continue for at least 90 days to one year in rodent models, these data suggest that the aforementioned metabolic pathways may drive chronic salivary gland dysfunction [24][25][26][27][28].
The purpose of this study is to identify metabolic changes that occur at acute, intermediate, and chronic damage time points in the salivary gland due to radiation treatment.A summary of previously identified phenotypes underlying the radiation-damage response over time in a mouse model is presented for context (Fig 1).Apoptosis of salivary acinar cells has been reported between 8-72 hours following radiation [29,30] and increased levels of reactive oxygen species (ROS) have been detected as early as 24 hours post-IR and persist for at least ten days [31].Loss of secretory function has been reported as early as three days post-IR [27,32], therefore, three days post-IR was chosen as a representative acute damage time point.Decreases in acinar cell differentiation markers (e.g.amylase) have been reported as early as ten days post-IR and persist for at least 90 days post-treatment [27,[33][34][35].Fourteen days post-IR was selected as a representative intermediate damage time point as it exhibits increased compensatory proliferation, decreased differentiation and decreased function and we hypothesize may be a pivotal time point in the wound healing response [36].We have previously demonstrated that radiation-induced loss of secretory function at 30 days post-treatment is similar at 60 and 90 day post-treatment [27], and other rodent models have confirmed chronic loss of function up to one year post-treatment [28,37].Therefore 30 days was chosen as a representative chronic damage time point as it exhibits increased compensatory proliferation, decreased differentiation and decreased function.To increase the robustness of the analysis, we are using metabolite annotation, metabolite structure, and metabolite levels to identify metabolic pathway-level enrichment.We articulated the metabolic changes that are correlated with the different stages of the radiation-induced damage response in the salivary gland and placed them in a biological context through network analysis.
Mice and radiation treatment
Mice were housed and treated following protocols approved by the University of Arizona Institutional Animal Care and Use Committee (IACUC). All experiments were conducted using female FVB mice obtained from Jackson Laboratories (Bar Harbor, ME). At 4-6 weeks of age, mice were treated with a single 5 Gy radiation dose using a cobalt-60 teletherapy instrument (Theratron, Atomic Energy of Canada Ltd; 80 cm distance from source). Prior to radiation treatment, mice were anesthetized with an intraperitoneal injection of ketamine/xylazine (70 mg/kg-10 mg/mL) and placed in a 50 mL conical tube. To target the head and neck region for radiation treatment, the rest of the body was shielded with >6 mm thick lead during radiation exposure.
Tissue preparation and metabolomics processing
Parotid salivary glands were extracted from mice at 3 days post-IR (N = 8), 14 days post-IR (N = 8), 30 days post-IR (N = 8), and from untreated mice (N = 4). The sample size of the untreated group decreased from 8 to 4 because severe dehydration occurred in 4 of the mice following a failure of the automatic water system. For each sample, 10-20 mg of parotid salivary gland tissue was subjected to a methanol (1 mL) extraction of both polar and nonpolar metabolites. Samples were homogenized in an MP Biomedicals FastPrep-24 bead beater using 2 mL homogenization tubes, glass beads (Sigma, 425-600 μm, acid washed, G8772-100G), and 1 mL methanol spiked with an internal standard mix (10 μL deuterated amino acid mix and 10 μL SPLASH LipidoMIX internal standard mixture, Avanti, AL; product number 330707) for additional semiquantitative analysis. Samples were homogenized under the following FastPrep conditions: twice at 6.5 m/s for 20 s. Tissue precipitate was pelleted by centrifugation (10 min, 10,000 RPM, 4˚C), and the supernatant was transferred, dried under nitrogen, and stored at -20˚C until resuspension in 100 μL methanol/0.1% formic acid.
Ultra-high performance liquid chromatography-mass spectrometry (UHPLC-MS) analysis
One μL of sample extract was injected onto a Thermo Vanquish Duo UHPLC system in randomized order and separated using Thermo Scientific Accucore 150-Amide-HILIC (250 x 2.1 mm, 2.6 μm) and Hypersil GOLD (150 x 2.1 mm, 1.9 μm) columns for hydrophilic interaction liquid chromatography (HILIC) and reverse phase (RP) chromatography, respectively, as described by Najdekr et al. [38]. The HILIC solvent system included a gradient from solvent A (95% acetonitrile/5% water with 10 mM ammonium acetate and 0.1% formic acid) to solvent B (50% acetonitrile/50% water with 10 mM ammonium acetate and 0.1% acetic acid) over 12 min at 500 μL/min, with column re-equilibration occurring during RP analysis. The RP chromatography included a gradient from solvent A (0.1% formic acid in water) to solvent B (0.1% formic acid in methanol) over 12 min at 300 μL/min, with column re-equilibration occurring during HILIC analysis. Column temperatures were maintained at 50˚C. Mass spectrometry (MS) detection was performed using a Thermo Exploris 480, utilizing default lipidomic and metabolomic acquisition settings optimized by Thermo unless otherwise stated. These settings include: 3.4 kV spray voltage in positive ion mode, 45 AU sheath gas, 10 AU auxiliary gas, 325˚C ion transfer tube, and 350˚C vaporizer temperature. The Orbitrap mass analyzer scanned from 67-1000 m/z at 120,000 resolution for full scans and 15,000 for MS/MS scans. Samples were run in MS-only mode, while pooled QCs were run in data-dependent acquisition (20 scans) mode with dynamic exclusion set to 8 s, utilizing higher-energy collisional dissociation (HCD). The Thermo AcquireX platform was utilized on the pooled QC sample to achieve optimal feature annotation.
Metabolite ID annotation
Compound Discoverer version 3.3 provided compound names, SMILES IDs, and KEGG IDs for a portion of the detected metabolites as part of its output. A single pooled QC was created as a composite of all sample groups (aliquots were combined post extraction) and run after every 15 samples during LC/MS analysis, then used in post-processing to remove features with greater than 30% variation in the QC samples. The QC sample was also used for peak area and retention time normalization over the entire run. Annotations were assigned in Compound Discoverer using the following databases: mzCloud, Metabolika, ChemSpider, and MassList. Features with predicted formulas (only) based on accurate mass were also exported as part of the output. PubChem ID annotation was carried out by querying the PubChem database primarily from compound names and secondarily from compound formulas using the get_cid function from the webchem package. KEGG IDs that were not supplied by Compound Discoverer were annotated primarily by querying the Chemical Translation Service (CTS) database from compound names using the cts_convert function from the webchem package and secondarily by querying the KEGG database from compound formulas using the keggFind function from the KEGGREST package. SMILES IDs that were not supplied by Compound Discoverer were annotated by querying the PubChem database from PubChem IDs using the pc_prop function from the webchem package. In HILIC mode, 1,540 compounds were detected and 598 metabolites (39%) were annotated at each time point. In RP mode, 2,852 compounds were detected and 1,172 metabolites (30%) were annotated at each time point.
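The name-to-identifier lookups described above were done in R (webchem, KEGGREST). As an illustration of the same idea, the sketch below queries the public PubChem PUG REST service for a CID from a compound name in Python; the endpoint format follows PubChem's documented REST scheme, but the helper itself is ours and is not part of the study's pipeline.

```python
import json
import urllib.parse
import urllib.request

def pubchem_cid_from_name(name, timeout=30):
    """Return the first PubChem CID matching a compound name, or None if not found."""
    url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
           + urllib.parse.quote(name) + "/cids/JSON")
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
        return data["IdentifierList"]["CID"][0]
    except Exception:
        return None  # unmatched names are left unannotated, as in the workflow above

print(pubchem_cid_from_name("spermidine"))
```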
Metabolite class annotation
Metabolite class annotation was performed using the HMDB database (HMDB version 5.0) first; if the compound was not identified in HMDB, the PubChem database (National Library of Medicine) was used. The compound names were entered into the metabolite search engine and, when the correct metabolite was identified, the class information was found under "Chemical Taxonomy" in the HMDB database or under the description of the metabolite in the PubChem database. If the metabolite could not be identified, "NA" was recorded.
Metabolomics statistical analysis
HILIC and RP peak files were obtained through the Compound Discoverer software version 3.3 (CD3.3). Metabolite intensity counts were loaded with the Read.TextData function. Data quality assessment was carried out first with the SanityCheckData function to evaluate the accuracy of sample and class labels and data structure, and to identify non-numeric values and groups with a variance of 0. RSDs of the QC samples were calculated, and metabolite features with an RSD greater than 30% were not included. CD3.3 used its gap-filling node to identify features missed in the first pass and then applied a random forest approach to impute missing values by following compound profile patterns. Data were normalized with the Normalization function (parameters: rowNorm = "NULL", transNorm = "LogNorm", scaleNorm = "ParetoNorm", ratio = FALSE, ratioNum = 20) to perform row-wise normalization, log transformation, and Pareto scaling of the metabolomics data. All metabolite data processing steps were carried out using the MetaboAnalystR package. Principal components for partial least squares-discriminant analysis (PLS-DA) were obtained with the opls function from the ropls package. The normalized metabolite data distribution was visualized with PLS-DA using the ggplot2 package.
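As a point of reference, log transformation followed by Pareto scaling (mean-centering and division by the square root of the standard deviation) can be sketched as below. This is a simplified illustration and may not match MetaboAnalystR's internal generalized-log implementation exactly.

```python
import numpy as np

def log_pareto_scale(intensities):
    """Log-transform and Pareto-scale a samples-by-metabolites intensity matrix.
    Assumes non-negative raw intensities; a +1 offset avoids log of zero."""
    x = np.log10(intensities + 1.0)            # log transformation (simplified)
    sd = x.std(axis=0, ddof=1)
    sd[sd == 0] = 1.0                          # guard against constant features
    return (x - x.mean(axis=0)) / np.sqrt(sd)  # Pareto scaling

# Example: 4 samples x 3 metabolite features with arbitrary intensities.
demo = np.array([[100., 5000., 10.],
                 [120., 4800., 12.],
                 [300., 9000., 30.],
                 [280., 8700., 33.]])
print(log_pareto_scale(demo))
```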
Pathway enrichment analysis
Metabolite set enrichment analysis was carried out with the fora (over-representation analysis) and fgsea (pre-ranked set enrichment analysis) functions from the fgsea package (parameters: minimal metabolite set size 5, maximal metabolite set size 500). Pathway enrichment was performed against KEGG pathways from the ConsensusPathDB (CPDB) database for metabolites.
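Over-representation analysis of a metabolite set reduces to a hypergeometric test. A minimal Python equivalent is shown below for orientation; the fora function in the fgsea package is the implementation actually used, and its internal details (for example, how the universe is defined) may differ.

```python
from scipy.stats import hypergeom

def ora_pvalue(hits_in_pathway, pathway_size, total_hits, universe_size):
    """Probability of observing at least `hits_in_pathway` significant metabolites
    in a pathway of `pathway_size`, given `total_hits` significant metabolites
    among `universe_size` annotated metabolites."""
    return hypergeom.sf(hits_in_pathway - 1, universe_size, total_hits, pathway_size)

# Example: 12 of 40 pathway members are significant, out of 150 significant
# metabolites among 1,500 annotated ones (numbers are illustrative).
print(ora_pvalue(12, 40, 150, 1500))
```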
MetaMapp network analysis
Metabolite PubChem and SMILES IDs were annotated primarily from compound names and secondarily from compound formulas. The metabolite annotation was used as input to obtain the structural data file (SDF) information and to compute the metabolite structural similarity network using the MetaMapp R package (https://github.com/barupal/metamapp). The generated network files (SIF) were used to detect communities with the Louvain algorithm, and over-representation analysis (ORA) was applied to each detected cluster with more than 50 metabolites to identify enriched pathways. Enrichment analysis was carried out with the fora function from the fgsea package (parameters: minimal metabolite set size 5, maximal metabolite set size 500) using all KEGG pathways from the ConsensusPathDB (CPDB) database for metabolites.
Weighted correlation network analysis (WGCNA)
Normalized metabolite levels were used to assess correlation patterns using weighted correlation network analysis (WGCNA). Briefly, the scale-free topology fit R² was computed for a range of values of the soft-threshold power (β). We applied an R² threshold cutoff of 0.9 to obtain the minimum corresponding soft-threshold power and used this value of β to calculate the adjacency matrix and the subsequent topological overlap matrix (TOM). The corresponding dissimilarity matrix (1 − TOM) values were hierarchically clustered, and modules were detected using the criteria of a tree height of 0.995 and a minimum of 50 metabolites. Pathway enrichment over-representation analysis (ORA) was applied to each detected module, using the fora function from the fgsea package and CPDB KEGG pathways as described above.
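To make the adjacency/TOM step concrete, a simplified numeric sketch is given below (Python/NumPy). It covers only the unsigned adjacency and topological overlap calculation with a fixed, assumed soft-threshold power; soft-threshold selection, signed networks, and dynamic tree-cut module detection in the actual WGCNA package are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def tom_dissimilarity(x, beta=6):
    """Unsigned WGCNA-style adjacency and TOM dissimilarity.
    x: samples-by-metabolites matrix of normalized abundances.
    beta: soft-threshold power (normally chosen from the scale-free topology fit)."""
    a = np.abs(np.corrcoef(x, rowvar=False)) ** beta   # adjacency a_ij = |cor(i, j)|^beta
    np.fill_diagonal(a, 0.0)
    k = a.sum(axis=1)                                  # node connectivity
    shared = a @ a                                     # l_ij = sum_u a_iu * a_uj
    tom = (shared + a) / (np.minimum.outer(k, k) + 1.0 - a)
    np.fill_diagonal(tom, 1.0)
    return 1.0 - tom                                   # dissimilarity used for clustering

# Hierarchical clustering of the dissimilarity, analogous to module detection.
rng = np.random.default_rng(0)
data = rng.normal(size=(20, 60))                       # 20 samples x 60 metabolites (synthetic)
d = tom_dissimilarity(data)
tree = linkage(d[np.triu_indices_from(d, k=1)], method="average")
modules = fcluster(tree, t=0.995, criterion="distance")
print(np.bincount(modules))
```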
Metabolomic profiling reveals separation between all radiation time points
The IR time points chosen for this study reflect the acute, intermediate, and chronic damage responses in the salivary gland and were used to identify kinetic changes in metabolites. Selection of the radiation dose (5 Gy) was based on consideration of the clinical exposure in patients (2 Gy/day), comparison to other papers in the field (10-40 Gy), and data from our previous studies (2-10 Gy) [39][40][41][42][43][44]. Metabolomic profiles were compared between treated and untreated groups at days 3, 14, and 30 post-IR to determine differences at each IR time point. HILIC chromatography was used to separate hydrophilic metabolites, while RP chromatography was used to separate polar and aromatic metabolites as well as organic acids, thus improving the scope of metabolites detected [38]. The number of detected features was 4,600 in HILIC phase and 8,525 in RP phase. There were 1,534 metabolites identified by Compound Discoverer in HILIC phase at day 3 IR, 1,531 at day 14 IR, and 1,535 at day 30 IR (S1 Table). There were 2,844 metabolites identified in RP phase at day 3 IR, 2,840 at day 14 IR, and 2,841 at day 30 IR (S1 Table). The partial least squares-discriminant analysis reveals distinct metabolite profile separation between the three IR time point groups and the untreated group in both HILIC and RP phases (Fig 2A and 2B). Cross-validation was used to assess the performance and generalizability of the models for both HILIC and RP. The overall assessment of the models' performance was good and not overfitted, as R² approached 1 for both HILIC and RP, while Q² exceeded 0.7 in both models. The heatmaps of all identified metabolites at each time point in HILIC and RP phases display a pattern of increased metabolite levels in each IR group compared to untreated (Fig 2C and 2D). Differential intensity analysis identified 507 metabolites in HILIC phase and 976 in RP phase with P adj < 0.05 between the day 3 IR group and the untreated group, 407 metabolites in HILIC phase and 901 metabolites in RP phase between the day 14 IR group and the untreated group, and 276 metabolites in HILIC phase and 504 metabolites in RP phase between the day 30 IR group and the untreated group (S1 Table). The significantly differentially altered metabolites (P adj < 0.05) are shown in the heatmaps for each IR group compared to untreated in HILIC and RP phases (Fig 2E and 2F).
We next annotated the top 100 significant metabolites by class for each IR time point (S2 Table). Table 1 presents a summary of the most commonly observed metabolite classes at all IR time points compared to untreated in HILIC and RP phases. Across all IR time points, the majority of significant metabolites (P adj < 0.01) are upregulated, with the highest percentage of upregulated metabolites observed at day 14 IR and the lowest percentage of upregulated metabolites observed at day 30 IR (Table 1 and S2 Table). Amino acids and their derivatives are upregulated in response to IR, while most of the significant downregulated metabolites are unidentified, with the few identified downregulated metabolites annotated to amino sugars and organonitrogen compounds (Table 1).
Pathway enrichment analysis identifies significant enrichment at days 3 and 14 following radiation
We performed enrichment analysis on the metabolites with associated KEGG IDs using gene set enrichment analysis (GSEA) with KEGG pathways, and the top 10 enriched pathways at each IR time point are presented in Table 2. We observe 16 significant pathways (P adj < 0.25) at day 3 IR in HILIC phase and 3 significant pathways in RP phase (the full set of enriched pathways is presented in S3 Table). The top 3 enriched pathways at day 3 IR in HILIC phase are glutathione metabolism, aminoacyl-tRNA biosynthesis, and protein digestion and absorption, and the 3 significant pathways in RP phase are cysteine and methionine metabolism, central carbon metabolism in cancer, and glutathione metabolism (Table 2). At day 14 IR, 11 pathways are significantly enriched in HILIC phase and 10 pathways in RP phase (S3 Table). The top 3 enriched pathways at day 14 IR in HILIC phase are ferroptosis, aminoacyl-tRNA biosynthesis, and protein digestion and absorption, and the top 3 enriched pathways in RP phase are central carbon metabolism in cancer, phenylalanine metabolism, and aminoacyl-tRNA biosynthesis (Table 2). At day 30 IR, no pathways reached significance at P adj < 0.25 in HILIC phase, but the top enriched pathways are ferroptosis, diabetic cardiomyopathy, and glutathione metabolism (S3 Table). In RP phase at day 30 IR, no pathways reached significance at P adj < 0.25, with the top listed pathways being tryptophan metabolism, glutathione metabolism, and ubiquinone and other terpenoid-quinone biosynthesis (Table 2). The leading edge metabolites are the metabolites with the highest contribution to the enrichment signal for the enriched pathway, which is comparable to leading edge genes in gene set enrichment analysis (GSEA) [45].
MetaMapp network analysis identifies groups of structurally similar metabolites altered following radiation
To further investigate metabolite interactions in the context of biological reactions, we created a network of the metabolites based on structural annotation information and subsequently integrated metabolite level information to identify significantly altered modules.MetaMapp creates networks based on chemical and biochemical similarity between metabolites, thus providing improved functional characterization of all metabolites, not just the small fraction that have KEGG IDs [46].Communities are defined as clusters of metabolites detected using the Louvain method, and significant communities are defined as those that contain more differentially altered metabolites than expected by chance [47].Most of the metabolites annotated to Community 2 are upregulated at day 3 IR compared to untreated.This is the only significant community using a hypergeometric p-value cutoff of 0.05 at this time point in HILIC phase (see S5 Table for full list of identified metabolite communities, see S6 Table for full list of pathways annotated to significant metabolite communities).At day 14 IR arginine biosynthesis remains the only significant pathway.At day 30 IR, however, Community 1, consisting of 65 metabolites (25 are differentially altered) is annotated to purine metabolism as the top pathway (Table 3 and S6 Table).In summary, for HILIC phase network analysis across the 3 IR time points, arginine biosynthesis is enriched in the significant metabolite community at day 3 and day 14 IR but not at day 30 IR, whereas lysine degradation and purine metabolism are enriched in significant metabolite communities at day 30 IR.
For the RP phase, we observe two significant communities identified as Communities 6 and 14 at day 3 IR (Fig 3B).Community 6 is a cluster of 138 metabolites with 30 significant metabolites and it is enriched in phenylalanine metabolism; Community 14 is a cluster of 124 metabolites with 31 significant metabolites, and is enriched for alanine, aspartate, and glutamate metabolism (Table 3 and S6 Table).Again, most of the significant metabolites are upregulated at day 3 IR compared to untreated.At day 14 IR RP phase, Community 14 remains significant, but not Community 6. Community 2, however, is detected as significant, consists of 182 metabolites (30 are significant) and is enriched for tyrosine metabolism as the top annotated pathway (Table 3 and S6 Table).Community 2 remains significant at day 30 IR RP phase.Additionally, Communities 5 and 9 are significant (Table 3).Community 5 consists of 123 metabolites (30 are significant) and is enriched for tryptophan metabolism.Community 9 consists of 50 metabolites (25 are significant) and is enriched for diabetic cardiomyopathy (Table 3).In summary for RP phase network analysis across the 3 IR time points, the community enriched for alanine, aspartate, and glutamate metabolism is significant at day 3 and day 14 IR but not at day 30 IR.The communities enriched for tryptophan metabolism, diabetic cardiomyopathy, and tyrosine metabolism are significant only at day 30.
Weighted correlation network analysis (WGCNA) shows amino acid metabolism enrichment at all radiation time points
We used a complementary approach, weighted correlation network analysis (WGCNA), to investigate correlations between metabolites based on changes in their abundance after radiation.WGCNA does not use information about chemical structure or KEGG pathways, but instead is a completely data-driven method to identify networks of correlated metabolites, the functional modules active in the network, and the "hub" nodes (metabolites in our study) that may drive those biological functions [48].Thus, WGCNA can incorporate all metabolites from the platform, regardless of whether they have known chemical structures or pathway annotations.
We performed WGCNA analysis in two ways: (1) incorporating all time points together to identify a single set of modules that may change in expression over time, and (2) creating a different network and set of modules for each time point separately.The former can identify groups of metabolites that co-vary over all four stages, whereas the latter can identify groups of metabolites that are only correlated at one specific stage but show unrelated behavior at the other stages (e.g. because they are regulated by an active process during the chronic stage but not at the other stages).After identifying the WGCNA modules, over-representation analysis was performed to obtain pathway enrichment results using KEGG IDs (see S7 Table ).
First, we applied WGCNA analysis incorporating all three time points together (S5 Fig and S7 Table ).In HILIC phase, we observe significant enrichment of neuroactive ligand receptor interaction (P adj <0.05) in the green module (S5A Fig and S7 Table ).We note downregulation in the untreated group, upregulation in both the day 3 and day 14 IR groups, and downregulation in the day 30 IR group.The rest of the modules also showed distinct patterns of up and down regulation across different stages, but they were not significantly enriched in any known pathways (S5A Fig and S7 Table ).When we applied WGCNA analysis incorporating all three time points together in RP phase, we did not observe significant enrichment of KEGG pathways (P adj <0.05) (S5B Fig and S7 Table).As in HILIC phase, the rest of the modules showed unique patterns of up and down regulation across different stages, but they were not significantly enriched in any known pathways (S5B Fig and S7 Table).
When applying WGCNA separately at each stage, we found greater enrichment in KEGG pathways, suggesting that new processes are activated at each stage of salivary gland dysfunction, leading to emergent groups of correlated metabolites.At day 3 IR HILIC phase, 6 metabolite correlation-based modules are identified (S8 Table ) and using a P adj <0.25 cutoff, we observe significant enrichment of purine metabolism (Table 4).The hub, or central metabolite, in purine metabolism is 6-Methoxyquinoline (Table 4).At day 14 IR HILIC phase, 8 metabolite correlation-based modules are detected (S8 Table ) and we observe significant enrichment for histidine metabolism and mineral absorption (Table 4).At day 30 IR, 8 modules are identified in HILIC phase (S8 Table) with significant pathway enrichment only observed for arginine biosynthesis (Table 4).At day 3 IR RP phase, 9 modules are identified (S8 Table ) and we observe significant enrichment of glyoxylate and dicarboxylate metabolism and purine metabolism with the metabolite hub 3'-Adenosine monophosphate (3'-AMP) (Table 4).At day 14 IR RP phase, 13 modules are identified (S8 Table ) and we observe significant enrichment for phenylalanine metabolism and steroid hormone biosynthesis (Table 4).At day 30 IR, 12 modules are identified in RP phase (S8 Table ) and no significant enrichment is observed (Table 4).When comparing the top pathway enrichment results for the detection phases across IR time points regardless of statistical significance, phenylalanine metabolism and lysine degradation are common to all IR time points (S8 Table ).Glyoxylate and dicarboxylate metabolism is specific to day 3 IR, sphingolipid signaling pathway and thermogenesis are specific to day 14 IR, and ABC transporters and biosynthesis of unsaturated fatty acids are specific to day 30 IR (S8 Table ).From the stage-specific WGCNA analysis, we cannot directly conclude whether these modules are up-or down-regulated, only that the upstream signaling that drives these correlations are actively regulating the metabolites in the module.
Intersection of amino acids within mitochondrial metabolic pathways
S9 Table presents a summary of the significant metabolite classes, GSEA pathways and their associated leading-edge metabolites, MetaMapp community pathways, and WGCNA module pathways observed at day 3, day 14, and day 30 IR, to allow comparison of the findings across the different analytical methods. We observe the highest number of significant metabolites and the highest number of significantly enriched pathways at day 3 IR, and this significance decreases with increasing time, with no significantly enriched pathways observed at day 30 IR in the GSEA analysis. Common enriched pathways across GSEA and MetaMapp network analysis at day 3 and day 14 IR are glutathione metabolism, aminoacyl-tRNA biosynthesis, central carbon metabolism in cancer, and several types of amino acid pathways. Due to the high prevalence of enrichment for amino acid metabolism and the significance of amino acids at all IR time points across the three analytical methods (Fig 4), we further investigated amino acid metabolism as an application of these data by interpreting the results in a biological context (see Discussion).
Discussion
This study utilized a metabolomics approach to identify metabolites and pathways altered in the salivary gland in response to IR at acute, intermediate, and chronic damage time points.
The results from the current study provide mechanistic insight into the different stages of salivary gland dysfunction following IR.While we identified the greatest number of significantly enriched pathways at the acute damage stage, the metabolites and pathways still altered at the chronic damage time point most likely reflect the pathways of interest to develop targeted interventions against persistent xerostomia.We observed conservation of enriched pathways using different analytical methods across the three time points: glutathione metabolism, aminoacyl-tRNA biosynthesis, central carbon metabolism in cancer, ferroptosis, and various amino acid metabolism pathways.These findings suggest targeting the enriched metabolic pathways conserved across the acute and chronic damage response stages may ameliorate chronic loss of salivary gland function following radiation treatment.Our group's previous study [24] combined a transcriptomics and metabolomics approach to identify metabolic changes in the salivary gland at day 5 IR, which is when increased compensatory proliferation and decreased apical/basal polarity are observed as the damage state transitions from the acute to the chronic responses (Fig 1) [25,49].Our previous study identified coordinated changes in glutathione metabolism, peroxisomal lipid metabolism, bile acid production, and energy metabolism pathways (TCA cycle and thermogenesis) at day 5 IR [24].In the current study, peroxisomal lipid metabolism, bile acid production, and the TCA cycle and thermogenesis were not identified as significant pathways, and this is most likely due to the MS detection settings not selecting for bile acids and certain TCA cycle intermediates in positive ion mode.
Another factor contributing to this discrepancy might be the absence of a combined transcriptomic analysis in the present study [24].
In our current study, glutathione metabolism was identified as a significantly enriched pathway at day 3 and day 14 IR using GSEA, and as a significantly enriched pathway at all three IR timepoints using MetaMapp network analysis (S8 Table ).Interestingly, reduced glutathione levels were lower in IR vs control at day 5 IR in our previous study [24] while reduced glutathione levels were significantly higher in IR vs control at day 3 in RP phase (log2 fold change (logFC) = 0.902) and at day 14 in HILIC (logFC = 0.976) and RP (logFC = 1.044) phases in the current study (S1 Table ).At day 30 IR, reduced glutathione levels were higher (logFC = 0.370) compared to control in RP phase although statistical significance was not reached (S1 Table ).A metabolomics analysis of liver tissue in mice exposed to 3 Gy and 11 Gy gamma IR to compare metabolic effects of low versus high dose IR exposure showed significantly higher levels of reduced glutathione levels at both day 4 and day 11 IR compared to untreated tissue, with the highest reduced glutathione levels observed at day 4 compared to day 11 in both the 3 Gy and 11 Gy treatment groups [50], thus supporting the results from our current study.It is well established that glutathione is one of the most prominent radioprotectors in cells as it scavenges free radicals produced from DNA damage [51], which would be a possible explanation for why we observed decreased reduced glutathione at day 5 IR previously as it was utilized to scavenge reactive oxygen species (ROS) that remain elevated in the salivary gland following IR (Fig 1) [31].The elevation of reduced glutathione observed in the current study and in the metabolomics analysis of IR liver tissue could occur due to the increased synthesis of reduced glutathione from glutamine to fuel protection of the salivary gland tissue from the increase in ROS, which has been previously demonstrated in cancer cells as a protective mechanism against increased oxidative stress [52,53].Thus, the mixed results for reduced glutathione observed in our current and previous study may be attributed to flux of the metabolite as it scavenges ROS in the damaged salivary gland tissue over time.
Amino acid metabolic pathways were prominently enriched in the present study across all analytical methods and the three IR timepoints in the GSEA, MetaMapp network modules, and WGCNA module results and include arginine biosynthesis, lysine degradation, histidine metabolism, cysteine and methionine metabolism, phenylalanine, tyrosine and tryptophan biosynthesis, and alanine, aspartate, and glutamate metabolism (S9 Table ).We investigated the levels of the detected amino acids that compose these pathways and discovered upon further analysis that all are significantly increased in at least one IR time point compared to untreated (Fig 4A).Multiple studies observed increases in amino acid metabolites in response to IR acutely and chronically.Twelve hours following exposure to total body 6 Gy gamma IR in a mouse model, glutamate and glutamine were increased in the ileum, glutamate was increased in the liver, and phenylalanine in the muscle [54].At 1, 2, and 3 days following exposure to 2 or 6 Gy x-ray IR in a rat model, phenylalanine was increased in the jejunum, phenylalanine, glutamine, serine, and lysine were elevated in the spleen, and glutamine and serine were increased in the liver [55].At two months following exposure to 1.6 or 2 Gy gamma IR in a mouse model, histidine, glutamine, and phenylalanine were elevated in intestinal tissue at both IR doses [56].Thus, it is established that amino acids increase in tissue exposed to IR, but the involvement of these amino acids in various metabolic pathways is not clearly understood.
Amino acid metabolism feeds into many different pathways, several of which were identified as significantly enriched pathways in the current study: central carbon metabolism, glutathione metabolism, and purine metabolism. Central carbon metabolism in cancer refers to various pathways that increase energy production and macromolecule synthesis to sustain tumor growth [57,58]. There are two main ways that cells accomplish this increase in energy and growth. One method is increasing glucose uptake and the rate of glycolytic flux to increase ATP production quickly, which also increases nucleotide production through the pentose phosphate pathway [57,58]. A second method is increasing glutamine uptake, which is converted to glutamate and subsequently to other amino acids to synthesize proteins, or it feeds into the TCA cycle to increase ATP production [57,58]. Our previous study identified increased levels of the glutamine transporter transcript SLC38A1 as well as increased levels of glutamine at day 5 IR in the salivary gland [24]. We observed an increase in glutamine levels at day 3, day 14, and day 30 IR in our current study, and a significant increase in glutamate (glutamic acid) levels at day 3 IR (S1 Table). The leading-edge metabolites in GSEA results for central carbon metabolism in cancer (across all three IR time points) include glutamate, asparagine, aspartate, methionine, phenylalanine, proline, serine, tryptophan, and tyrosine, which suggests increased glutamine levels fueling synthesis of these other amino acids (S4 Table). These results support the hypothesis that IR increases glutamine levels, and subsequently other amino acid levels, potentially to rebuild the damaged salivary gland tissue, similar to what is observed in cancer cells by increasing central carbon metabolism to support tumor growth (Fig 4B). However, the accumulation of amino acids following IR may have a detrimental effect on the function of the salivary gland. Autophagy maintains cellular homeostasis by recycling damaged proteins and is necessary for normal salivary gland function. Morgan-Bathke and colleagues previously demonstrated that administration of the rapalogue CCI-779, an autophagy activator, post-IR treatment significantly improved salivary flow rates at days 30, 60, and 90 post IR compared to mice that only received IR [35]. Therefore, the observed increases in amino acid levels in our current study may be reflective of impaired autophagy in the salivary gland following IR treatment.

Amino acid metabolism is closely linked to glutathione metabolism and purine metabolism through one-carbon metabolism. One-carbon metabolism refers to various anabolic reactions in both the cytoplasm and mitochondria that are responsible for nucleotide synthesis, methylation reactions that affect gene expression, amino acid homeostasis, and reductive/oxidative (redox) defense (Fig 4B) [59,60]. The methionine cycle is a component of one-carbon metabolism [59], and in our present study we observed significant increases in methionine at all 3 IR time points and significant increases in an intermediate in the methionine cycle, S-adenosylhomocysteine (SAH), at day 3 and day 14 IR (S1 Table). The methionine cycle synthesizes cysteine and reduced glutathione, and our study identified significant increases in reduced glutathione at day 3 and day 14 IR (S1 Table). A leading-edge metabolite annotated to glutathione metabolism, which was significantly enriched in our GSEA output at all IR time points, was spermidine (S4 Table), which is a by-product of the methionine cycle and was increased at all IR time points (Fig 4B), (S1 Table). We observed increases in glycine, serine, and threonine levels at all IR time points in our study, and these three amino acids feed into the folate cycle, which generates purines and pyrimidines (Fig 4A and 4B). Upon further investigation, the detected leading-edge metabolites (S4 Table) for purine metabolism were increased at all IR time points (hypoxanthine was the one exception and was slightly downregulated at day 14 IR, although not significantly), and the leading-edge metabolites for pyrimidine metabolism were also increased at all IR time points (S1 Table). Collectively, the increase of cysteine, methionine, glycine, serine and threonine observed in response to IR over time may be linked to the increase in reduced glutathione and purine/pyrimidine levels in the salivary gland through elevated one-carbon metabolism. Previous research has demonstrated that 2, 5, and 7 Gy gamma IR alters one-carbon metabolism in murine liver at days 1, 2, 3, 4, 5, and 8 by shifting priority to nucleotide synthesis at the expense of transmethylation reactions, which can further exacerbate DNA damage [61]. Further investigation of transmethylation products and of enzyme levels and activity in the folate and methionine cycles (specifically S-adenosylmethionine) would be needed to test whether the results from our study in the salivary gland support the one-carbon metabolism alteration observed in previous IR research in the liver. Prior research has manipulated the supply of methyl group donors to mitigate the IR response in different model systems, with animal studies displaying significant increases in animal survival, bone marrow health, and intestinal integrity following low- and high-dose IR after receiving glycine betaine as a pre-treatment [62,63], providing support for further pursuit of this mechanism in IR-induced salivary gland dysfunction as a possible therapeutic target.
A major strength in this study is the use of multiple analytical methods and multiple IR time points, thus increasing the confidence in our identified pathways. To achieve a deeper understanding of the metabolome, multiple analytical techniques are often employed for many reasons. Currently, no single method detects and annotates all features observed in a metabolomics run, but different techniques tend to be more sensitive to different classes of molecules, thereby increasing coverage of the metabolome and sensitivity of the assay. This paid dividends in the MetaMapp analysis and WGCNA, which were more sensitive to HILIC chromatography, while GSEA displayed sensitivity at mapping RP features into pathways. As many of the metabolite annotations were achieved at the MS level, the use of multiple analytical methods also served to validate the results and reduce false positives, increasing confidence in the overall analysis. Using metabolite annotation (GSEA), metabolite structure (MetaMapp), and metabolite levels (WGCNA), we observed conservation of various amino acid metabolism pathways across the different IR time points, which increases the reproducibility of our findings. The network-based analysis in our study aided in creating a biological context for the metabolomics output. The animal model is also a strength, as it has previously been demonstrated that FVB mice treated with 5 Gy radiation experience a 40-50% reduction in stimulated salivary flow rates beginning at day 3 and continuing through days 30, 60, and 90 following IR treatment [27]. The use of salivary gland tissue instead of serum, blood, or urine for the metabolomics analysis was a strength, as it reduced the influence of metabolites released from other tissues and organs on the metabolomics output. Future studies can use isotope tracing to confirm the directionality of metabolic reactions of interest. This data set is a useful resource for salivary gland biologists investigating mechanisms correlated with radiation-induced dysfunction, as they can mine the data for metabolic pathways related to their studied mechanism.
An inherent challenge in metabolomics is metabolite annotation. Metabolites vary in structure and lack a common building block, making the synthesis of reference standards for identification challenging despite the utilization of several databases including KEGG, HMDB, ChemSpider, or PubChem [64,65]. Thus, the majority of detected signals in untargeted mass spectrometry runs remain unidentified [66]. Despite the limitations regarding identification, a major strength of this study is the use of an Orbitrap mass spectrometer, which allows for sub-ppm confidence in metabolite identification, whereas other methods (e.g., linear trap, time-of-flight) only allow for 10-20 ppm confidence [67]. Another limitation of using solely metabolomics analysis is the biological interpretation of the alterations in individual metabolite levels, because the metabolites are not identified as reactants or products in a biochemical reaction. For example, increased levels of glutamine could indicate increased output of glutamine, decreased consumption of glutamine, or increased synthesis of glutamine via shunting from another pathway. Due to the complex interplay of metabolic reactions, metabolic flux cannot be simplified to a unidirectional linear model; thus several biological interpretations of the data are possible.
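The sub-ppm versus 10-20 ppm comparison above reduces to a simple mass-accuracy calculation. The sketch below (plain Python) shows how a ppm error window is typically computed when matching an observed m/z against a database entry; the example masses are purely illustrative and are not taken from this study's data.

```python
def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
    """Mass accuracy of an observed m/z relative to a theoretical value, in ppm."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6


def within_tolerance(observed_mz: float, theoretical_mz: float, tol_ppm: float) -> bool:
    """True if the observed peak falls inside the +/- tol_ppm annotation window."""
    return abs(ppm_error(observed_mz, theoretical_mz)) <= tol_ppm


# Illustrative example: protonated glutamine, [M+H]+ with theoretical m/z ~147.0764
observed = 147.07655
print(ppm_error(observed, 147.0764))            # ~1 ppm mass error
print(within_tolerance(observed, 147.0764, 1))  # passes an Orbitrap-like tolerance
print(within_tolerance(observed, 147.0764, 20)) # also passes a wider TOF/ion-trap-like tolerance
```

A narrow window shrinks the list of candidate formulas per feature, which is why the instrument choice affects annotation confidence.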
Conclusions
This study incorporated structural annotation and metabolite level data to identify metabolic pathways altered in the salivary gland at acute, intermediate, and chronic time points following radiation treatment, with the highest number of metabolic changes observed at the acute damage stage. Further investigation of metabolic changes in glutathione metabolism, amino acid metabolism, and central carbon metabolism in cancer may yield promising therapeutic targets to restore loss of tissue function after radiation treatment during the acute damage response to prevent chronic loss of function in the salivary gland.
Fig 3 .
Fig 3. MetaMapp network analysis identifies significant metabolite clusters at day 3 IR. A) HILIC phase. B) RP phase. Rectangles correspond to individual metabolites, edges denote chemical reactions between the metabolites, and numbers denote the communities that clusters of metabolites belong to. Color indicates Log2(fold difference): red denotes upregulated metabolite levels versus untreated and blue denotes downregulated metabolite levels versus untreated. Abbreviations: IR = radiation, HILIC = hydrophilic interaction liquid chromatography, RP = reverse phase chromatography.
Fig 4A displays individual amino acid level changes at each time point in response to radiation, which are further grouped together based on where they interact with mitochondrial metabolic pathways. Fig 4B shows where the grouped amino acids feed into the tricarboxylic acid (TCA) cycle, central carbon metabolism in cancer, and one-carbon metabolism.
Fig 4 .
Fig 4. Amino acids connected to mitochondrial metabolism. A) Individual amino acids analyzed by one-way ANOVA with post-hoc Tukey's test for multiple comparisons. Letters denote statistical significance between groups, P<0.05. Box plots display the upper quartile, median, and lower quartile with the maximum and minimum values denoted by the whiskers. N = 4 for untreated (UT), N = 8 for day 3 (D3) IR, N = 8 for day 14 (D14) IR, and N = 8 for day 30 (D30) IR. Alanine, cysteine and valine were not detected in our data set. B) Overview of pathways related to amino acid metabolism. Created in BioRender.com. Abbreviations: TCA = tricarboxylic acid. https://doi.org/10.1371/journal.pone.0294355.g004
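The per-metabolite statistical procedure described in the Fig 4A legend (one-way ANOVA followed by Tukey's HSD across the untreated, day 3, day 14, and day 30 groups) can be reproduced with standard Python statistics libraries. The sketch below uses made-up intensity values purely to illustrate the procedure; it does not use or reproduce the study's data.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative (fabricated) levels for one metabolite; N = 4 UT and N = 8 per IR group.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "group": ["UT"] * 4 + ["D3"] * 8 + ["D14"] * 8 + ["D30"] * 8,
    "level": np.concatenate([
        rng.normal(1.0, 0.1, 4),   # untreated
        rng.normal(1.6, 0.1, 8),   # day 3 IR
        rng.normal(1.4, 0.1, 8),   # day 14 IR
        rng.normal(1.3, 0.1, 8),   # day 30 IR
    ]),
})

# One-way ANOVA across the four groups.
groups = [g["level"].values for _, g in data.groupby("group")]
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

# Post-hoc Tukey HSD for all pairwise comparisons at alpha = 0.05.
print(pairwise_tukeyhsd(data["level"], data["group"], alpha=0.05))
```

The Tukey output lists each pairwise comparison with an adjusted p-value, which corresponds to the significance letters shown above the box plots.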
Table. Across the 3 IR time points and 2 detection phases, alpha amino acids, carboxylic acids, peptides, and alcohol ...
Table 2. Gene set enrichment analysis (GSEA) of the top 3 pathways enriched at days 3, 14, and 30 IR compared to untreated in HILIC and RP phases, using P_adj < 0.25 for statistical significance. Columns (reported separately for Day 3 IR-HILIC and Day 3 IR-RP): Pathway, Pval, Padj, log2err, ES, NES.
No significant enrichment was observed at day 30 IR. Abbreviations: IR = radiation, HILIC = hydrophilic interaction liquid chromatography, RP = reverse phase chromatography, Pval = an enrichment P-value, Padj = a Benjamini Hochberg adjusted P-value, log2err = the expected error for the standard deviation of the P-value logarithm, ES = enrichment score, NES = enrichment score normalized to mean enrichment of random samples of the same size. https://doi.org/10.1371/journal.pone.0294355.t002 | 2023-11-22T05:07:36.270Z | 2023-11-20T00:00:00.000 | {
"year": 2023,
"sha1": "8a616ef1eb4055b3e635b1429f805e980cd13321",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "8a616ef1eb4055b3e635b1429f805e980cd13321",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219583646 | pes2o/s2orc | v3-fos-license | Minimal change disease with concurrent thin basement membrane disease: A case report
Minimal change disease (MCD) is a main cause of the nephrotic syndrome. Thin basement membrane disease (TBMD) is another disease, characterized by microscopic hematuria. The present case is a young adult female who presented with classic nephrotic syndrome, but she had microscopic hematuria as well. Hematuria can be part of MCD in 21% of patients, but in this case, histopathological diagnosis confirmed MCD with concurrent TBMD. According to our literature review, this combination has been reported in only two previous cases. Using steroids resulted in improvement of the nephrotic syndrome, but microscopic hematuria has persisted, which is most likely related to TBMD rather than being a primary part of MCD. To our knowledge, this is the first case report of MCD with concurrent TBMD in Arab countries.
Introduction
Minimal change disease (MCD) is characterized by heavy proteinuria, leading to intravascular volume depletion and edema. It is a major cause of nephrotic syndrome, occurring in 15-40% of adults with nephrotic syndrome, although its incidence is higher in children. [1] Thin basement membrane disease (TBMD) is characterized by microscopic hematuria without additional symptoms or progression to renal impairment. [2,3] While hematuria can be part of MCD in 21% of patients, [4] in our literature search only two reported cases have linked hematuria in MCD to concurrent TBMD rather than considering it a part of MCD. [5,6] Here, we present the first such case in the Arab world.
Case Report
An 18-year-old female patient, previously healthy, presented with lower limb edema and a puffy face that had started 1 week before her presentation, associated with frothy urine. She reported two previous similar attacks in the past 3 months, but with shorter duration and spontaneous recovery. This time, her symptoms did not improve spontaneously, for which she sought medical advice. She denied any history of recent upper respiratory infections, shortness of breath, chest pain, abdominal distention, or change in her urine color. There was no history of fever, skin rash, joint pain, hearing impairment, nonsteroidal anti-inflammatory drug use, or any new medication use. Family history of renal or hearing diseases, particularly Alport syndrome, was negative. Her surgical history was negative. Her vaccinations were up to date. She had no known allergies. She is single, a non-smoker, and a 12th-grade student with good scholastic performance. On examination, her blood pressure and other vital signs were normal. Her face was puffy and she had bilateral pitting lower limb edema. There was no skin rash or active synovitis. Her cardiovascular, respiratory, and abdominal examinations were unremarkable. Laboratory tests showed normal complete blood count, blood urea nitrogen, creatinine, and electrolytes. Her albumin was 19 g/L (normal 40-50 g/L). Her urinalysis showed 3+ protein, 3+ blood, and red blood cells (RBCs) > 50/HPF, but it was negative for white blood cells. Her urinalysis results were confirmed twice. Her 24 h urine protein was 5.1 g/day (normal <150 mg). Her random urine protein/creatinine ratio was 341 mg/mmol (normal <15 mg/mmol). She had normal complement levels. Her low-density lipoprotein was 8.67 mmol/L. Her antinuclear antibody (ANA), anti-neutrophil cytoplasmic autoantibody (ANCA), cryoglobulins, hepatitis B virus, and hepatitis C virus tests were all negative. Renal ultrasound showed mildly echogenic, normal-sized kidneys. Considering the significant microscopic hematuria, which is not classical in most cases of MCD, a renal biopsy was done. It showed features of MCD with normal light microscopy and kidney background [Figure 1a and b] and negative immunofluorescence. Electron microscopy (EM) revealed diffuse foot process effacement and a glomerular basement membrane with areas of thinning, with an average thickness of 218 nm, and no immune deposits [Figure 2a and b]. Hence, her biopsy EM findings explained her microscopic hematuria. She was started on prednisone 1 mg/kg. At follow-up, 10 days after starting steroids, her symptoms had resolved completely, her urinary protein had become negative, and her random urine protein/creatinine ratio was 21.2 mg/mmol (baseline 341 mg/mmol). She achieved clinical and biochemical complete remission of her MCD, but she continued to have persistent microscopic hematuria.
Discussion
It is known that all MCD patients present with nephrotic-range proteinuria, while microscopic hematuria appears in only 21% of patients. [4] In addition, MCD is generally characterized by normal-appearing glomeruli on light microscopy and the absence of immune deposits, yet with diffuse foot process effacement on electron microscopy and negative staining on immunofluorescence. [7] On the other hand, TBMD is characterized by microscopic hematuria without additional symptoms or progression to renal impairment, [2,3] and TBMD is diagnosed by EM based on the presence of thinning of the basement membrane. [3] Our patient had a puffy face with lower limb edema. Laboratory tests showed nephrotic-range proteinuria and a negative secondary workup. Her renal biopsy showed normal light microscopy, negative immunofluorescence, and diffuse foot process effacement on EM, which are compatible with MCD. The present case had no family history of renal diseases, particularly Alport syndrome, but she had microscopic hematuria with urine RBCs of >50/HPF. In addition, EM revealed glomerular basement membrane thinning to 218 nm in some areas, instead of the normal range, which was reported to be 330-460 nm in adults. [8] These features reflect the presence of TBMD in addition to her MCD. Based on the histopathological diagnosis, she has MCD with concurrent TBMD. As mentioned above, hematuria can be part of MCD in 21% of patients, [4] but it can also be related to concurrent TBMD. Based on our literature review, only two case reports have described the coincidence of MCD with TBMD, [5,6] and our patient was the first such case in the Arab world. The first case report [5] described elderly patients with combined MCD and TBMD in whom proteinuria responded to steroids. The second case report was of a 15-year-old boy who had TBMD associated with MCD and hematuria on urinalysis. He was treated with corticosteroids and complete remission was achieved, but hematuria persisted during the follow-up period. [6] The primary drug used for MCD is steroid treatment with prednisone. [7] In our patient, we used prednisone. At 10 days of follow-up, there was improvement in both her proteinuria and her symptoms, which resolved completely, but microscopic hematuria persisted. This was in agreement with the findings of the previous case report. [6] This indicates that steroids can treat MCD-related proteinuria; however, hematuria was not affected by this management, as it was caused by TBMD, not MCD. In a patient with a picture of MCD but with microscopic hematuria, the differential diagnosis should therefore include hematuria as part of MCD itself in addition to concurrent TBMD.
Conclusion
In addition to the classic presentation with nephrotic syndrome, MCD can also present with microscopic hematuria as a part of the disease. Furthermore, hematuria can be related to concurrent TBMD. Treating the condition will result in improvement in proteinuria but not hematuria. The persistence of hematuria after treatment in our case is related to the concurrence of TBMD rather than being a primary part of MCD.
Patient Consent
Written informed consent was taken from the involved patient. | 2020-06-12T05:03:06.341Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "680f974635a5c441dbe39fa55083717768ec392b",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "680f974635a5c441dbe39fa55083717768ec392b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257168251 | pes2o/s2orc | v3-fos-license | Constrained Self-Adaptive Physics-Informed Neural Networks with ResNet Block-Enhanced Network Architecture
Physics-informed neural networks (PINNs) have been widely adopted to solve partial differential equations (PDEs), which can be used to simulate physical systems. However, the accuracy of PINNs does not yet meet the needs of industry and degrades severely, especially when the PDE solution has sharp transitions. In this paper, we propose a ResNet block-enhanced network architecture to better capture such transitions. Meanwhile, a constrained self-adaptive PINN (cSPINN) scheme is developed to move the PINN's objective toward the areas of the physical domain that are difficult to learn. To demonstrate the performance of our method, we present the results of numerical experiments on the Allen–Cahn equation, the Burgers equation, and the Helmholtz equation. We also present results for solving the Poisson equation using cSPINNs on different geometries to demonstrate the strong geometric adaptivity of cSPINNs. Finally, we provide the performance of cSPINNs on a high-dimensional Poisson equation to further demonstrate the ability of our method.
Introduction
Deep learning has achieved breakthroughs in many scientific fields and impacts the areas of data analysis, decision making, and pattern recognition. Recently, deep learning methods have been applied to solve partial differential equations (PDEs), with physics-informed neural networks (PINNs) [1,2] being used for this purpose. The main idea is to represent the solution of a PDE using a neural network and to optimize it under the constraint of a physics-informed loss computed with automatic differentiation (AD). In the last few years, PINNs have been employed to solve PDEs from different fields, including problems in mechanical engineering, geophysics [3], vascular fluid dynamics [4,5], and biomedicine [6].
To further enhance the accuracy and efficiency of PINNs, a series of extensions to the original formulation of Raissi et al. [1,[7][8][9][10][11] have been proposed. For example, from the aspect of data augmentation, re-sampling methods have been proposed to adaptively change the distribution of residual points during training [7,8], which can help to improve the accuracy of PINNs in solving stiff PDEs. The standard loss function in PINNs is the mean square error (MSE), which is not always suitable for training when solving PDEs [12,13]. In [9], an adjustment method among different loss terms was proposed to mitigate gradient pathologies. Causal-PINNs [10] were proposed to solve time-dependent PDEs by means of an adaptive adjustment scheme for loss weights in the temporal domain. Meanwhile, regularization on the differential forms of PDEs was also demonstrated to be an effective way to improve accuracy [14] compared to original PINNs. In addition, the architectures [9,11,15,16] of PINNs greatly influence the final prediction results, and some works [15,17] have focused on embedding methods, which are useful for feature enhancement of PINNs and can even be used for soft/hard boundary enforcement [18].
PINNs with fully connected neural networks are widely used to solve partial differential equations, and the derivatives in the PDEs can be directly computed by means of automatic differentiation (AD). There also exist various other types of architectures for solving PDEs, e.g., CNN architectures [19] and the UNet architecture [20]. However, CNNs and UNet require a finite difference approach when calculating the derivatives of PDEs. Other architectures, such as Bayesian neural networks (BNNs) [21] and generative adversarial networks (GANs) [22], are also used to address PDE problems. Despite the development of different architectures, fully connected feed-forward neural networks are still the most used architecture in PINNs, and their hyper-parameters, such as depth, width, or the way hidden layers are connected, greatly influence the final results. In [16], adjustments to the width and depth of fully connected neural networks (FCNNs) showed the different accuracy of PINNs. In [11], a ResNet block was used to enhance the connections between hidden layers and performed better than FCNNs in parameter identification of the Navier-Stokes equation. In [9], a modified neural network was proposed to project the input variables to a high-dimensional feature space and fuse them with a neural attention mechanism. All these cases show that the selected architecture is essential to the final predictive results of PINNs.
The main idea of such an adaptive scheme is to attach a bounded trainable weight for each single residual point in the residual loss function and adaptively update pointwise weights for each training point. As for the architecture, it influences the predicted results and is also essential for improving the accuracy of PINN methods. Our contributions in this paper are summarized below:
•
We develop a constrained self-adaptive physics-informed neural network (cSPINN), which achieves better accuracy in numerical results. Meanwhile, the dynamics of the residual weights change more steadily during the training process. • To better capture solutions with sharp transitions in the physical domain, we develop a ResNet block-enhanced modified MLP architecture, which also has the ability to tackle the vanishing gradient problem using identity mapping, even for deep architectures.
Related Works
In this section, we first introduce the model problem, and then provide a brief overview of physics-informed neural networks (PINNs) for solving forward partial differential equations (PDEs).
Model Problem
In this subsection, we first introduce the model problem. Given the spatial domain Ω and the temporal domain t ∈ [0, T], the general form of the partial differential equation (PDE) is

N_t[u](x, t) + N_x[u](x, t) = 0, x ∈ Ω, t ∈ (0, T],
u(x, 0) = u_0(x), x ∈ Ω,
B[u](x, t) = g(x, t), x ∈ ∂Ω, t ∈ (0, T],

where N_t[·] and N_x[·] are general differential operators, which include any combination of linear and non-linear terms of temporal and spatial derivatives. The corresponding initial condition at t = 0 is given by u_0(x). In the above, B[·] is a boundary operator, which could represent Neumann, Robin, or periodic boundary conditions, and enforces the condition g(x, t) on the boundary domain ∂Ω.
PINNs Formulation
To solve PDEs via PINNs [1], we first construct a neural network û(x, t; w), taking the spatial x ∈ Ω and temporal t ∈ [0, T] coordinates as inputs, with trainable parameters w, to approximate the solution u(x, t). We can then train a physics-informed model by minimizing the following loss function:

L(w) = λ_r L_r(w) + λ_ic L_ic(w) + λ_bc L_bc(w),

with

L_r(w) = (1/N_r) Σ_{i=1}^{N_r} | N_t[u_w](x_r^i, t_r^i) + N_x[u_w](x_r^i, t_r^i) |^2,
L_ic(w) = (1/N_ic) Σ_{i=1}^{N_ic} | u_w(x_ic^i, 0) − u_0(x_ic^i) |^2,
L_bc(w) = (1/N_bc) Σ_{i=1}^{N_bc} | B[u_w](x_bc^i, t_bc^i) − g(x_bc^i, t_bc^i) |^2.

Here, L_r, L_ic and L_bc are the loss terms for the residual in the physical domain, the initial condition, and the boundary condition, respectively. We use u_w to represent the output of the neural network, which is parameterized by w. λ_r, λ_ic and λ_bc are weights that influence the convergence rates of the different loss components and the final accuracy of PINNs [9,12]; hence, it is important to use an appropriate weighting strategy during training. {x_r^i, t_r^i}_{i=1}^{N_r}, {x_ic^i}_{i=1}^{N_ic}, and {x_bc^i, t_bc^i}_{i=1}^{N_bc} are the training points inside the domain, the points on the initial domain, and the points on the boundary, respectively.
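As a concrete illustration of the composite loss above, the following PyTorch sketch assembles the residual, initial-condition, and boundary-condition terms for a generic time-dependent PDE. The network, the `pde_residual` callback (which encodes the specific operator), and the Dirichlet-style boundary term are placeholders for illustration, not the authors' implementation.

```python
import torch


def pinn_loss(model, pde_residual, xr, tr, x0, u0, xb, tb, gb,
              lam_r=1.0, lam_ic=1.0, lam_bc=1.0):
    """Composite PINN loss: weighted residual + initial + boundary terms."""
    # Residual loss: evaluate the governing equation at the collocation points.
    xr = xr.clone().requires_grad_(True)
    tr = tr.clone().requires_grad_(True)
    u = model(torch.cat([xr, tr], dim=1))
    u_t = torch.autograd.grad(u, tr, torch.ones_like(u), create_graph=True)[0]
    res = pde_residual(u, u_t, xr)           # e.g. u_t + N_x[u] for the chosen PDE
    loss_r = (res ** 2).mean()

    # Initial-condition loss at t = 0.
    u0_pred = model(torch.cat([x0, torch.zeros_like(x0)], dim=1))
    loss_ic = ((u0_pred - u0) ** 2).mean()

    # Boundary-condition loss (Dirichlet data gb on the boundary, as one example).
    ub_pred = model(torch.cat([xb, tb], dim=1))
    loss_bc = ((ub_pred - gb) ** 2).mean()

    return lam_r * loss_r + lam_ic * loss_ic + lam_bc * loss_bc
```

The `pde_residual` callback receives the prediction, its time derivative, and the (grad-enabled) spatial coordinates, so spatial derivatives can be taken inside it with further `torch.autograd.grad` calls.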
In this paper, we use the relative L_2 error between the PINN prediction and the reference solution, defined as

relative L_2 error = sqrt( Σ_{k=1}^{N} | û(x_k, t_k) − u(x_k, t_k) |^2 ) / sqrt( Σ_{k=1}^{N} | u(x_k, t_k) |^2 ),

where u(x_k, t_k) is the reference solution and û(x_k, t_k) is the neural network prediction for a set of testing points {(x_k, t_k)}_{k=1}^{N} ((x_k, t_k) ∈ Ω × (0, T]).
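This error metric is a one-liner in practice; a minimal NumPy version:

```python
import numpy as np


def relative_l2_error(u_pred: np.ndarray, u_ref: np.ndarray) -> float:
    """||u_pred - u_ref||_2 / ||u_ref||_2 over the set of test points."""
    return float(np.linalg.norm(u_pred - u_ref) / np.linalg.norm(u_ref))
```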
Formulation of Constrained Self-Adaptive PINNs (cSPINNs)
In this section, we first present the method of the constrained self-adaptive weighting scheme for PINNs, which could adaptively adjust the weights for residual points during training. Next, we propose a modified network architecture enhanced by ResNet block to further improve the performance of cSPINNs.
Constrained Self-Adaptive Weighting Scheme
In PINNs, the residual loss L_r(w) enforces the network to satisfy the governing equation at the sample points inside the domain, i.e., {x_r^i, t_r^i}_{i=1}^{N_r}. However, equal weight is attached to all residual points in the above formulation of L_r(w), with the result that PINNs cannot focus on the areas that are difficult to learn during training (as shown in Figure 1). One effective way to overcome this is to attach an individual adjustable weight to each residual point, according to the distribution of the residual in the physical domain, and then automatically raise the weights of points with relatively higher loss values. Such self-adaptive adjustment can be formulated as a min-max optimization problem,

min_w max_{λ̃_r} L(w, λ_r, λ_ic, λ_bc, λ̃_r), subject to λ̃_ri ≥ 0 and Σ_{i=1}^{N_r} λ̃_ri = C,

where λ̃_r = (λ̃_r1, ..., λ̃_rN_r) are the point-wise residual weights and C is a constant used to constrain the range of the weights. Here, we set C as the expectation of the sum of weights in PINNs, i.e., C = E(Σ_{i=1}^{N_r} λ̃_ri) = N_r. The loss function is formulated as

L(w, λ_r, λ_ic, λ_bc, λ̃_r) = λ_r L_r(w, λ̃_r) + λ_ic L_ic(w) + λ_bc L_bc(w),

where L_r(w, λ̃_r) = (1/N_r) Σ_{i=1}^{N_r} λ̃_ri |r_i|^2 is a weighted residual loss and the other terms and the weights λ_r, λ_ic, λ_bc are the same as in the original formulation. The inner optimization of the min-max problem above is easy to solve by selecting the residual point with the largest residual loss and attaching a weight with value C to it, while setting the weights of the other points to zero. However, PINNs would then optimize only one single point in every training iteration, which works against the adjustment among different residual points. In [12], McClenny et al. proposed a self-adaptive method to solve the above min-max problem by taking a single step of the inner optimization using a gradient ascent procedure, which approximately satisfies the inner maximization requirement. In this way, different residual points are attached with appropriate weights during training. They updated λ̃_r during the training process as

λ̃_r^{k+1} = λ̃_r^k + η_k ∇_{λ̃_r} L(w, λ̃_r^k),

where η_k is the learning rate at iteration k. However, λ̃_r^k is an unbounded weight vector during training, which means that the training of PINNs can become unstable due to the rapidly changing weights. We modified the updating rule of λ̃_r^k as a two-step procedure, an ascent step followed by a normalization:

λ_r^{k+1} = λ̃_r^k + η_k ∇_{λ̃_r} L(w, λ̃_r^k), λ̃_r^{k+1} = N_r · λ_r^{k+1} / Σ_{i=1}^{N_r} λ_ri^{k+1},

where we denote r_i as the ith residual point in {x_r^i, t_r^i}_{i=1}^{N_r}, k and k+1 are the training iteration numbers, and λ_r^{k+1} is an intermediate variable before normalization. In other words, we first normalize λ_r^{k+1} so that the weights sum to C = N_r, which keeps the weight vector bounded during training.
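A minimal sketch of how such a constrained update could be implemented is given below. It assumes the per-point weights are first nudged upward in proportion to the current squared point-wise residuals (which is what a gradient-ascent step on the weighted residual loss amounts to) and then rescaled so that they sum to C = N_r; the step size and any clipping are illustrative choices, not the exact training configuration.

```python
import torch


def update_point_weights(lam, residuals, eta):
    """One constrained self-adaptive update of the per-point residual weights.

    lam       : (N_r,) current non-negative point-wise weights
    residuals : (N_r,) point-wise PDE residuals at the collocation points
    eta       : step size for the ascent-style update
    """
    with torch.no_grad():
        # Ascent-style step: points with larger squared residual get larger weights.
        lam_tilde = lam + eta * residuals.detach() ** 2
        lam_tilde = torch.clamp(lam_tilde, min=0.0)
        # Constraint step: rescale so the weights sum to C = N_r, keeping them bounded.
        n_r = lam.shape[0]
        return n_r * lam_tilde / lam_tilde.sum()


def weighted_residual_loss(lam, residuals):
    """Weighted residual term (1/N_r) * sum_i lam_i * r_i^2."""
    return (lam * residuals ** 2).mean()
```

Because the normalization fixes the total weight budget, raising the weight of hard points necessarily lowers the weight of easy ones, which is the mechanism that keeps the weight dynamics steady during training.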
ResNet Block-Enhanced Modified Network
Continuous improvement of network structure design is one of the drivers for the development of deep learning methods and their applications. For example, convolutional neural networks [23][24][25] were designed for, and have been widely used in, computer vision, such as image classification, image segmentation, and object recognition tasks. Similarly, recurrent neural networks and their variants [26][27][28] show great performance in natural language processing and sequential modeling because of their ability to capture long-term dependencies in sequences. In [9], a modified MLP framework was proposed to correctly capture the solution of complex PDEs, and it has been widely used in many cases. PINNs with ResNet blocks are also used to improve the representational capacity of the neural network when solving PDEs. Inspired by these ideas, we propose a ResNet block-enhanced modified MLP framework to better represent the solution of PDEs. In the ResNet block-enhanced modified network, φ is an activation function, ⊙ denotes an element-wise multiplication operation, f is the final output of the network, and the updating rule of Z^(k) is similar to residual learning, which was first proposed in [23] and achieved great success, i.e., Z^(k) = H^(k) + F(H^(k)). More specifically, H^(k) and Z^(k) are the input and output of the ResNet block, respectively. F is an operation consisting of fully connected layers and activation functions, which can be defined as F(X) := φ(φ(XW_{1,k} + b_{1,k})W_{2,k} + b_{2,k}), with input X, hidden layer parameters {W_{1,k}, b_{1,k}, W_{2,k}, b_{2,k}} and an activation function φ. The element-wise addition H^(k) + F(H^(k)) is realized by a shortcut connection. The effectiveness of such a block in PINNs was demonstrated in [11], and here we use it as a feature-enhancing sub-structure of our network. The features are fused by the shortcut connection and fed into the update of the hidden layers by an element-wise multiplication operation with the encoders U and V, as shown in Figure 3. Compared to simple fully connected neural networks, our architecture enhances the representative ability of the hidden layers through ResNet blocks, which makes it easier for the network to learn the desired solution. Meanwhile, the embedding of inputs from the low-dimensional input space to a higher-dimensional feature space is also considered here and is fused using the attention mechanism during the forward process.
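A PyTorch sketch of one way to realize this architecture is given below. The two-layer block F and the shortcut connection follow the description above; the specific fusion rule with the encoders U and V, written here as H ← (1 − Z)⊙U + Z⊙V, is borrowed from the standard modified-MLP formulation and is an assumption for illustration, since the exact update is not spelled out in the extracted text.

```python
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """F(X) = phi(phi(X W1 + b1) W2 + b2), combined with the input by a shortcut."""
    def __init__(self, width, act=nn.Tanh()):
        super().__init__()
        self.fc1 = nn.Linear(width, width)
        self.fc2 = nn.Linear(width, width)
        self.act = act

    def forward(self, h):
        f = self.act(self.fc2(self.act(self.fc1(h))))
        return h + f                      # identity mapping eases training of deep stacks


class ResMNet(nn.Module):
    """ResNet block-enhanced modified MLP (illustrative sketch)."""
    def __init__(self, in_dim=2, width=64, blocks=4, out_dim=1, act=nn.Tanh()):
        super().__init__()
        self.act = act
        self.embed = nn.Linear(in_dim, width)    # lift inputs to the feature space
        self.u_proj = nn.Linear(in_dim, width)   # encoder U
        self.v_proj = nn.Linear(in_dim, width)   # encoder V
        self.blocks = nn.ModuleList([ResBlock(width, act) for _ in range(blocks)])
        self.out = nn.Linear(width, out_dim)

    def forward(self, x):
        u = self.act(self.u_proj(x))
        v = self.act(self.v_proj(x))
        h = self.act(self.embed(x))
        for blk in self.blocks:
            z = self.act(blk(h))                  # ResNet block output
            h = (1.0 - z) * u + z * v             # assumed attention-style fusion with U, V
        return self.out(h)
```

With tanh activations, 4 blocks and a width of 64 this matches the layer counts quoted in the experiments section, but it should be read as a sketch of the idea rather than the authors' released code.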
Numerical Experiments
We demonstrated the performance of our proposed cSPINN in solving several PDE problems. In all of the examples, we used the ResNet block-enhanced modified network with tanh function as our activation function φ. The proposed architecture had 2 input neurons and consisted of 4 ResNet blocks, each having a width of 64. The output layer contained only one neuron for the output/solution of the PDE.
1D Allen-Cahn Equation
The Allen-Cahn equation is a stiff PDE whose solution has a sharp interface and sharp transitions in time. We used the same physics parameters of the 1D Allen-Cahn equation as in [7] to better compare the results. For the given problem, we used the ResNet block-enhanced modified network architecture mentioned above to better fit the sharp transition. In order to implement the cSPINNs for the Allen-Cahn equation, the following loss terms were used: • the constrained self-adaptive loss for the residual of the governing equation, • the mean squared loss on the initial condition, and • the mean squared loss on the boundary condition, where û is the prediction of the neural network. We sampled N_r = 25,600 residual points, N_b = 400 boundary points and N_ic = 512 points on the initial condition.
Here, we used the Adam optimizer with 10,000 epochs and L-BFGS optimizer with 1000 epochs to optimize the network architecture. During the training process, we set the boundary weight and the initial weight w b = w i = 100, which could help expedite the convergence. Figure 4 shows the numerical results of constrained self-adaptive PINNs(cSPINNs) compared with the reference solution obtained through the Chebfun method [29] and the training loss history. The relative L 2 error was 1.472 × 10 −2 , which was better than the time-adaptive approach in [7] and the original PINNs [1].
1D Viscous Burgers' Equation
The viscous Burgers' equation is widely used in various areas of applied mathematics, such as fluid mechanics, traffic flow, and gas dynamics. To better compare the results, we used the same physics parameters of the 1D viscous Burgers equation as in [1]. In order to implement the cSPINN scheme for the Burgers equation, the following loss terms were used, as mentioned in the formulation of cSPINNs: • the constrained self-adaptive loss for the residual of the governing equation, • the mean squared loss on the initial condition, and • the mean squared loss on the boundary conditions. Here, we trained the network with the constrained self-adaptive scheme, and û is the prediction of the neural network. In this case, we sampled N_r = 25,600 residual points, N_b = 256 boundary points and N_ic = 512 points on the initial condition. We set the weights of the initial condition and boundary condition as w_ic = w_bc = 100. Training was performed using 10,000 Adam iterations and 1000 L-BFGS epochs. The predicted solution of cSPINNs and the loss history are shown in Figure 5. Despite a sharp transition in the center of the domain, the solution of cSPINNs was still accurate in the whole domain, yielding a relative L_2 error of 4.796 × 10^−4.
2D Helmholtz Equation
The Helmholtz equation is widely used to describe the behavior of wave propagation, and can be mathematically formulated as follows:

∆u(x, y) + k^2 u(x, y) − q(x, y) = 0,

where

q(x, y) = −(a_1 π)^2 sin(a_1 πx) sin(a_2 πy) − (a_2 π)^2 sin(a_1 πx) sin(a_2 πy) + k^2 sin(a_1 πx) sin(a_2 πy)

is a forcing term that results in the closed-form analytical solution

u(x, y) = sin(a_1 πx) sin(a_2 πy).

The exact solution above is the same as in [9], which helped us better compare the results. In order to implement the cSPINN scheme for this problem, the following loss terms were used:
•
The constrained self-adaptive loss for the residual of the governing equation, and • the mean squared loss on the boundary conditions. Here, we solved the problem with a_1 = 1 and a_2 = 4 to allow a direct comparison with the results reported in [9]. In this case, the ResNet block-enhanced modified network was trained with 10,000 Adam and 1000 L-BFGS epochs. As for the training points, we sampled N_r = 10,000 residual points and N_b = 400 boundary points (100 points per boundary). We show the prediction results of the cSPINNs in Figure 6. Finally, we achieved a relative L_2 error of 1.626 × 10^−3, which exhibited better performance than the learning-rate annealing weighted scheme proposed in [9] and the self-adaptive PINNs proposed in [12]. Meanwhile, our method required less computational cost, due to the stability of the design of the self-adaptive weights.
2D Poisson Equation on Different Geometries
Poisson's equation is an elliptic partial differential equation widely used in the description of potential fields. The 2D Poisson's problem could be denoted as follows: To further demonstrate the performance of cSPINNs, we used the exact solution with periodicity u(x, y) = 1 2(4π) 2 sin(4πx) sin(4πy) and obtained f (x, y) and g(x, y), according to the exact solution, directly as: Then, we had the following loss terms • The constrained self adaptive loss for the residual of the governing equation: R := −∆û(x i r , y i r ) + sin 4πx i r ) sin(4πy i r , x i r , y i r ∈ Ω (25a) • Mean squared loss on the boundary conditions (Taking the rectangular area as example): In this case, we first tested the performance of cSPINNs on a rectangular domain Ω 1 = [0, 0.25] × [0, 0.25]. We sampled N r = 10,000 residual points in the inner domain and N b = 1000 boundary points distributed on the boundary area. The ResNet blockenhanced modified network was trained with 10,000 epochs Adam and 1000 epochs L-BFGS. Meanwhile, different geometries, including circular, triangular, and pentagonal domains, were also tested to demonstrate the advantages of cSPINNs. The L 2 error between the predicted solution and the reference solution on different geometries is shown in Table 1. It is worth noting that we magnified the loss value by a constant number of c = 10,000, due to the relatively small true value (the maximum value of exact solution was about 0.003) in the solution, to ensure normal gradient backward during training. As for an irregular domain, we sampled the same number of points as for the rectangular domain. We found that the cSPINNs achieved good performance in this problem, as shown in Figure 7. The original PINNs failed, as shown in Figure 1. Moreover, we provided the comparison results between cSPINNs and reference solutions in the domain Ω 2 = [0, 1] × [0, 1], which was hard for PINNs to solve, due to the high frequency of the solution, as shown in Figure 8. We show the relative L 2 errors between the predicted and the exact solution u(x, y) using cSPINNs on the different geometries in Table 2. To further test the performance of cSPINNs, we provided the numerical results of cSPINNs on the L-shaped domain, a classic concave geometry. In this case, we set f (x, y) = 1 and g(x, y) = 0 to have a direct comparison with PINN, as in [30]. We had the loss terms as follow: • The constrained self adaptive loss for the residual of the governing equation • Mean squared loss on the boundary condition We tested the performance of cSPINN on the L-shaped domain and show the results in Figure 9. The maximum point-wise error was yielded at about 6 × 10 −3 and the relative L 2 error 4.257 × 10 −3 . In [30], PINNs achieved accurate results with about 0.02 maximum point-wise error in the same case on the L-shaped domain. In [31], hp-VPINNs also tested the performance and, in this case, achieved about 0.02 maximum point-wise error on the domain. Therefore, cSPINNs also performed well, even on such concave geometry.
The solution of this problem was u(x) = Σ_{k=1}^{5} x_{2k−1} x_{2k}, and we computed the error of cSPINN using this exact solution.
• Mean squared loss on the boundary condition: We computed the relative L 2 error between the solution of cSPINN and the exact solution. The relative L 2 error of cSPINN was 1.028 × 10 −3 , which was smaller than the Deep Ritz method [32] (about 0.4%). In this case, we sampled N r = 1000 residual points in the inner domain and N b = 100 boundary points distributed on the boundary area. The ResNet block-enhanced modified network was trained with 20,000 epochs Adam and 1000 epochs L-BFGS. The training loss history of cSPINN is shown in Figure 10. Finally, we also provide the relative computational cost between cSPINNs and PINNs in different cases in Table 3. When computing the cost, we set the cSPINNs and PINNs with the same network depth, width, and number of training epochs for fair comparison. At the end of this section, we also tested the impact of the following three architectures: the Multilayer Perceptron(MLP), the modified Multilayer Perceptron (MMLP), and the ResNet block-enhanced modified network(ResMNet). During the test, the depth and width of all networks were fixed at 6 and 128, respectively. Here, we also provided the results to demonstrate the effectiveness of our proposed constrained self-adaptive weighting scheme(cSA) compared to the L 2 loss function. Meanwhile, as can be seen in Table 4, we observed that, compared to the MLP and the MMLP, the ResMNet yielded the highest accuracy. Therefore, the constrained self-adaptive weighting scheme (cSA) and the ResNet block-enhanced modified network (ResMNet) were desired in the cSPINNs.
Conclusions and Future Work
In this paper, we proposed constrained self-adaptive PINNs (cSPINNs) to adaptively adjust the weights of individual residual points, which became more robust during training, due to the bounded weights. Meanwhile, a ResNet block-enhanced modified neural network was also proposed to enhance the predictive ability of PINNs.
We demonstrated the effectiveness of our method in solving various PDEs, including the Allen-Cahn equation, the Burgers equation, the Poisson equation and the Helmholtz equation. Our method showed good performance in all the cases mentioned and outperformed PINNs, especially in the Poisson equation with periodic solution, regardless of the geometries of the computational domain. Even with sharp transition in the physical domain, cSPINNs were also robust when solving the Allen-Cahn equation, which was difficult for the original PINNs to solve. Compared with the PINNs, cSPINNs could improve the accuracy and could be implemented with just a few lines of code, which made it possible to combine our method with other extensions of PINNs to further improve the performance. The usage of a constrained self-adaptive weighting scheme could attach higher weight values to difficult to learn regions during training, which made it possible to solve complicated problems. In this paper, we provided the numerical results of cSPINNs in solving the 10D Poisson equation and achieved better performance than the Deep-Ritz method. In the future, we will further generalize cSPINNs to solve higher dimensional PDEs and multi-physics problems. | 2023-02-25T16:11:25.885Z | 2023-02-22T00:00:00.000 | {
"year": 2023,
"sha1": "496aed0b1209f16f48b2860c53263b44476cb313",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7390/11/5/1109/pdf?version=1677589901",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "54621c87bbf1f46983780bdbc5cf308432f4075b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
209277721 | pes2o/s2orc | v3-fos-license | Preoperative echocardiographic evaluation of cardiac systolic and diastolic function in liver transplant recipients with diabetes mellitus: a propensity-score matched analysis
Background Diabetes mellitus (DM) increases the risk of heart failure. It has been shown that diabetes leads to DM-cardiomyopathy, characterized by systolic and diastolic dysfunction. Pre-transplant diastolic dysfunction has been associated with poor graft outcome and mortality. We assessed the hypothesis that end-stage liver disease (ESLD) patients with diabetes (DM-ESLD) have more advanced cardiac systolic and diastolic dysfunction compared to ESLD patients without diabetes (Non DM-ESLD). Methods We retrospectively evaluated the preoperative echocardiography of 1,319 consecutive liver transplant recipients (1,007 Non DM-ESLD vs. 312 DM-ESLD [23.7%]) from January 2012 to May 2016. Systolic and diastolic indices, such as left ventricular ejection fraction, transmitral E/A ratio, tissue Doppler s' and e' velocities, and E/e' ratio (an index of left ventricular end-diastolic pressure), were compared using 1:2 propensity-score matching. Results DM-ESLD patients showed no differences in the systolic indices of left ventricular ejection fraction and s' velocity, whereas the diastolic indices of E/A ratio ≤ 1 (49.0% vs. 40.2%, P = 0.014), e' velocity (median 7.0 vs. 7.4 cm/s, P < 0.001) and E/e' ratio (10.9 ± 3.2 vs. 10.1 ± 3.0, P < 0.001) showed worse diastolic function compared with Non DM-ESLD patients, respectively. Conclusions DM-ESLD patients suffer a higher degree of diastolic dysfunction compared with Non DM-ESLD patients. Based on this, careful preoperative screening for diastolic dysfunction in DM-ESLD patients is encouraged, because poor transplant outcomes have been noted in patients with preoperative diastolic dysfunction.
The chronic cardiac dysfunction typically seen in ESLD is called cirrhotic cardiomyopathy. Increased baseline stroke volume (SV), decreased systemic vascular resistance, and increased heart rate are characteristics of this condition. The sympathetic nervous system becomes more active as baseline hepatic dysfunction worsens, thus increasing the baseline systolic function. However, collagen deposition within the myocardium causes left ventricular hypertrophy and increased myocardial stiffness, thereby decreasing the diastolic function [6]. Recently, the association between changes in cardiac function, especially diastolic dysfunction, and the prognosis of ESLD patients undergoing liver transplant has been actively investigated [6][7][8].
DM shows a high correlation with cardiac failure. DM-related cardiomyopathy (DM-CMP) is a known chronic cardiac disorder that characteristically occurs in patients with DM [9]. DM-CMP shows characteristics such as preceding diastolic dysfunction, systolic dysfunction, and left ventricular hypertrophy. DM-CMP is known to be caused by DM-related metabolic disorders, myocardial fibrosis, small vessel disease, cardiac autonomic neuropathy, and insulin resistance [10].
However, there is limited research on the difference in cardiac systolic or diastolic dysfunction between ESLD patients with DM (DM-ESLD) and ESLD patients without DM (Non DM-ESLD). Therefore, in this study, we aimed to evaluate the preoperative cardiac echocardiography data of liver transplant recipients to compare the systolic function and diastolic function in those with and without DM.

MATERIALS AND METHODS

For the evaluation of more complex left ventricular end-systolic function, end-systolic elastance was calculated as ESV / end-systolic pressure. End-systolic pressure, a reflection of aortic pressure, was calculated as noninvasive systolic pressure × 0.9. To evaluate vascular resistance, arterial elastance was calculated as end-systolic pressure / SV [11].
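The derived indices above are simple ratios of the measured quantities. As a hedged illustration (plain Python, with made-up example values rather than patient data), end-systolic pressure, arterial elastance, and the E/e' ratio could be computed as follows.

```python
def end_systolic_pressure(systolic_bp_mmhg: float) -> float:
    """Noninvasive estimate of end-systolic pressure: 0.9 x brachial systolic pressure."""
    return 0.9 * systolic_bp_mmhg


def arterial_elastance(systolic_bp_mmhg: float, stroke_volume_ml: float) -> float:
    """Ea = end-systolic pressure / stroke volume (mmHg/mL), an index of vascular load."""
    return end_systolic_pressure(systolic_bp_mmhg) / stroke_volume_ml


def e_over_e_prime(mitral_e_cm_s: float, tissue_e_prime_cm_s: float) -> float:
    """E/e' ratio, used as a surrogate of left ventricular filling pressure."""
    return mitral_e_cm_s / tissue_e_prime_cm_s


# Illustrative numbers only (not patient data):
print(arterial_elastance(systolic_bp_mmhg=120, stroke_volume_ml=75))  # ~1.44 mmHg/mL
print(e_over_e_prime(mitral_e_cm_s=76, tissue_e_prime_cm_s=7.0))      # ~10.9
```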
All values were expressed as mean ± standard deviation, median (1st quartile, 3rd quartile), or number of patients (percent). Continuous variables were analyzed using the Shapiro-Wilk normality test and subsequent Student's t-test or Mann-Whitney test, as appropriate. Categorical variables were analyzed using the chi-square test or Fisher's exact test.
To minimize the difference in baseline characteristics of the 2 groups, a 1:2 matched propensity score analysis [12] was performed. The propensity score, which is the probability of each subject being assigned to the treatment group given the covariates, was calculated using a propensity score model via logistic regression analysis with the patient's age, sex, body mass index, MELD score, hypertension, history of cardiovascular disease, history of beta-blocker use, and B-type natriuretic peptide, among the characteristics specified in
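A hedged sketch of the 1:2 propensity-score matching step is shown below: a logistic model on the listed covariates produces a score for each patient, and each DM-ESLD patient is matched to the two Non DM-ESLD patients with the nearest scores. The column names, the greedy without-replacement strategy, and the absence of a caliper are assumptions made for illustration, not the study's exact procedure.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression


def match_1_to_2(df: pd.DataFrame, covariates, treat_col="dm"):
    """Greedy 1:2 nearest-neighbour matching on the logistic propensity score."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treat_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treat_col] == 1]
    pool = df[df[treat_col] == 0].copy()
    matched_controls = []
    for _, row in treated.iterrows():
        # Pick the two not-yet-matched controls closest in propensity score.
        nearest = (pool["pscore"] - row["pscore"]).abs().nsmallest(2).index
        matched_controls.append(pool.loc[nearest])
        pool = pool.drop(nearest)            # matching without replacement
    return treated, pd.concat(matched_controls)


# Usage with hypothetical column names:
# covs = ["age", "sex", "bmi", "meld", "htn", "cvd", "beta_blocker", "bnp"]
# treated, controls = match_1_to_2(patients, covs)
```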
RESULTS
Of 1,572 patients planned for liver transplant, 1,319 met the inclusion criteria. Of these patients, 312 (23.7%) had diabetes. Variables that showed a difference between the 2 groups before matching showed P > 0.05 after matching (Table 3).
On echocardiograms, the systolic function indices that differed before matching (LVEF, s' velocity) did not show a significant difference after matching (both P > 0.05) (Fig. 2); the diastolic indices after matching are shown in Fig. 3.
DISCUSSION
The [18], similar to those of heart failure with preserved ejection fraction (HFpEF), which account for the majority of heart failure cases [19].
In summary, the recently emerging restrictive phenotype of DM-CMP tends to manifest similarly to HFpEF [9]. We pre- [20,21]. In the future, studies that compare surgical outcomes using these guidelines would be necessary. Fifth, this study showed that DM-ESLD patients have a higher incidence of diastolic dysfunction, but the mechanism and pathophysiology behind this observation have not been studied. More research is needed on this topic.
CONFLICTS OF INTEREST
No potential conflict of interest relevant to this article was reported. | 2019-11-07T14:28:56.300Z | 2019-10-31T00:00:00.000 | {
"year": 2019,
"sha1": "0a8752716c41def590c9ecd8e6362f5edb572ce0",
"oa_license": "CCBYNC",
"oa_url": "https://www.anesth-pain-med.org/upload/pdf/APM-14-465.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "599ca999230ae571cb7aa6d80b9bd719afa24d0c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
131902417 | pes2o/s2orc | v3-fos-license | Design and realization of RS application system for earthquake emergency based on digital earth
Current RS-based earthquake emergency systems are mainly based on stand-alone software, which cannot meet the requirements of massive remote sensing data and parallel seismic damage information extraction after a devastating earthquake. Taking Shaanxi Province as an example, this paper first explores the network-based working mode of seismic damage information extraction and the data management strategy for multi-user cooperative operation, based on an analysis of the workflow of RS applications to earthquake emergency. Then, using the WorldWind Java SDK, an RS application system for earthquake emergency based on a digital earth platform is presented in a CS architecture. Finally, spatial data tables for the classification and grading of seismic damage were designed and the system was developed. This system realizes functions including 3D display, management of seismic RS images and GIS data obtained before and after an earthquake for different user levels, and cooperative extraction and publishing, in real time, of seismic information such as building damage, traffic damage and seismo-geological disasters caused by the earthquake. Applications to earthquake cases such as the 2014 Ms6.5 Ludian earthquake show that this system can improve the efficiency of seismic damage information interpretation and data sharing, and provide important disaster information for decision making in earthquake emergency rescue and disaster relief.
Introduction
With the vigorous development of domestic high-resolution satellite and UAV remote sensing technology, the remote sensing data sources available for earthquake emergency have become increasingly rich after destructive earthquakes occur. Therefore, how to efficiently use multi-source remote sensing data for earthquake emergency services has become one of the problems that need to be solved. Some scholars have begun to research and develop earthquake disaster management and recognition systems for rapid extraction of earthquake information from remote sensing data. Turker M and Sumer E [1] developed a building-based seismic damage evaluation system in Matlab and proposed a watershed-based segmentation algorithm to detect intact and damaged buildings. Xiaoqing Wang et al.
[2] developed an earthquake disaster emergency and damage assessment system using ENVI/IDL, which realized rapid identification of building seismic damage and model management based on aerial and satellite images, and in 2008 completed the development of an earthquake emergency remote sensing (EERS) analysis and processing system on the basis of research into key techniques such as earthquake emergency disaster information extraction and damage assessment. After that, considering the characteristics of urban applications, Aixia Dou [3] developed the Tianjin Remote Sensing Earthquake Damage Analysis and Processing System (TJ-RSEDAPS), which realized many practical functions including image management, damage degree identification of buildings, key objects and major facilities, assessment of life and economic losses, damage distribution mapping, etc. With the development of 3D GIS technology, research on earthquake disaster management and recognition techniques based on the 3D digital earth is becoming a hot spot. Chongjun Yang [4] developed a 3D geographic information system for the Wenchuan earthquake based on the GeoBeans platform, which realized functions including 3D display of multi-source remote sensing damage images, seismic road interpretation, etc. Based on a discussion of the technical methods for applying multi-sensor remote sensing technology to earthquake disaster management, Jixian Zhang et al.
[5] built a remote sensing monitoring and information service system for the Wenchuan earthquake disaster situation, which provides functions including disaster information management, visualization and statistical analysis. Jieping Zhou et al.
[6] implemented a 3D visualization management system for UAV remote sensing images based on the VGE-3DGlobeEarth platform, which provided a quad-tree pyramid and LOD model for managing large volumes of remote sensing images from the earthquake zone. 3D digital earth systems have many advantages, such as intuitive display and convenient data sharing. However, the currently existing systems focus on 3D data management and display, and their earthquake disaster information extraction and evaluation functions are weak. Based on the earthquake emergency prototype system of the remote sensing digital earth [7], this paper analyses the network-based workflow of remote sensing earthquake emergency, discusses the multi-user collaborative damage extraction mode on the basis of the network and a data management strategy, and proposes a CS architecture for the RS application system for earthquake emergency on a digital earth platform based on WorldWind.
The overall design of the system
Usually, EERS work is based on stand-alone software; the specific process includes pre-earthquake preparation of background remote sensing data, extraction of seismic damage, estimation of earthquake damage and loss, and production and display of earthquake damage products [3]. In this situation, collaborative work and how to share the background data and the real-time extracted earthquake information are questions that need to be resolved.

Figure 1. Network-based workflow of remote sensing application to earthquake emergency.

In this paper, we build on the WorldWind digital earth platform to make full use of the network environment (the specific work process is shown in figure 1), including network-based data preparation for earthquake emergency response and post-earthquake emergency work. Network-based data preparation before an earthquake mainly involves slicing and storing fundamental geographic information, elevation data, background remote sensing images, and population and economic data through distributed servers, providing data support for earthquake emergency. After an earthquake, on the one hand, the server slices and imports multi-source data from different stages. On the other hand, different users on different computers can use the earthquake emergency client to call up post-earthquake and pre-earthquake disaster images for display in 3D mode; they can then analyse and extract seismic disaster information such as building damage, road damage and geological disasters, while the system shares the extracted information with different users in real time (figure 1 shows the overall workflow). According to the business workflow above, the system is divided into a presentation tier, a business logic tier and a data service tier. The first tier is the presentation tier, whose main function is to provide system access through the client for three kinds of earthquake emergency users: administrators, professional users, and emergency decision makers. The function of the second tier is logic control on the client side, including invoking geographic information services through the network environment. The third tier is composed of the spatial database and the remote sensing database, which store the data produced in the process of earthquake emergency and provide basic data for server and client calls, as shown in figure 2.
The design of remote sensing earthquake emergency database
In order to make the management of EERS data standard, efficient and easy to use, the database design (figure 3) is divided into a spatial database, a remote sensing image database, an earthquake professional database and a system management database. According to its specific function, each database stores the corresponding earthquake-related information. Details of the design are as follows.

Figure 3. Overview of database structure of RS application system.

(1) Spatial database. It stores the emergency basic spatial data of residential areas, key objects and administrative divisions, together with the standard geographic coding data and attributes of the related provinces, cities, counties and townships in the demonstration zone.
(2) Remote sensing image database. It stores the pre-earthquake background RS data from Map World and medium- and high-resolution images of the demonstration area, pre-earthquake high-resolution data of key areas, and multi-source RS images acquired after an earthquake, such as UAV images and airborne high-resolution images.
(3) Earthquake professional database. It stores historical earthquakes, basic information on the current earthquake, rapid assessment results, seismic intensity maps, building classifications and seismic geological hazard information. Table 1 shows the design of earthquake damage levels for different objects interpreted from RS images, which serves as a classification standard for different users when sharing and updating visual interpretation results to the server (a simplified, hypothetical sketch of such a table is given after this list).
The main functions and applications of the system
The system uses a CS (client-server) architecture. The client is developed with the WorldWind SDK, and its integrated environment menus are shown in Figure 4. The server integrates GeoServer and ArcGIS Server, as shown in Figure 5. After a destructive earthquake occurs, the client (shown in Figure 6) automatically loads seismic RS images of the affected region from distributed servers, providing a 3D display of disasters for professional users. The extracted results can be stored automatically on the servers for other users' reference. The main functions are introduced below.
(1) Data management function. On the server side, it mainly covers user management (adding, deleting and modifying users of different roles), earthquake-related metadata management, and slicing and updating of background RS images and DEMs. On the client side, the user can upload and update earthquake-specific seismic disaster images, vector data, and GDP and population data for key areas.
(2) 3D thematic information browsing function. According to different requirements, the system performs the corresponding 3D rendering of the earthquake background information, damage images and thematic information, and elevation exaggeration rendering for special interpretation needs. Key objects in the database can be displayed by serial number or name.
(3) Real-time damage interpretation and publishing function. For professional users, the client loads the required multi-source remote sensing data, such as domestic satellite and UAV imagery, for real-time extraction of building damage, road damage and geological disasters caused by the earthquake. After extraction, the damage vector information is automatically stored in the database through the server, so that other users can automatically access the available earthquake damage information on demand as a basis for collaborative work. For the case of the Ms 6.5 Ludian earthquake that occurred on 5 August 2014, Figures 7-9 show the different disaster extraction interfaces.
Discussion and Conclusion
3D GIS is intuitive and realistic, providing a good environment for damage extraction users.
With the rapid development of remote sensing and network technology, multi-user collaborative disaster information extraction based on the digital earth is an important direction for the development of remote sensing. Analysing the network-based workflow of EERS, this paper realised a CS-architecture RS application system based on the WorldWind SDK. The system provides 3D display and management of remote sensing images and geographic information obtained before and after an earthquake for different user levels. It also implements functions for the cooperative extraction and real-time publishing of seismic information such as building damage, road damage and geological disasters caused by the earthquake. Applications to earthquake cases such as the 2014 Ms 6.5 Ludian earthquake show that this system can improve the efficiency of seismic damage information interpretation and data sharing, and provide important disaster information for decision making in earthquake emergency rescue and disaster relief. | 2019-04-26T14:24:22.235Z | 2016-11-01T00:00:00.000 | {
"year": 2016,
"sha1": "9f319c5fb61b71d0bb32675a92a204c1a670fc6d",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/46/1/012037/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "964e73ebdb10b340ac7784568976ac0845fa4d6c",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Geography",
"Physics"
]
} |
249938311 | pes2o/s2orc | v3-fos-license | Using Background Knowledge from Preceding Studies for Building a Random Forest Prediction Model: A Plasmode Simulation Study
There is an increasing interest in machine learning (ML) algorithms for predicting patient outcomes, as these methods are designed to automatically discover complex data patterns. For example, the random forest (RF) algorithm is designed to identify relevant predictor variables out of a large set of candidates. In addition, researchers may also use external information for variable selection to improve model interpretability and variable selection accuracy, and thereby prediction quality. However, it is unclear to what extent, if at all, RF and other ML methods may benefit from external information. In this paper, we examine the usefulness of external information from prior variable selection studies that used traditional statistical modeling approaches such as the Lasso, or suboptimal methods such as univariate selection. We conducted a plasmode simulation study based on subsampling a data set from a pharmacoepidemiologic study with nearly 200,000 individuals, two binary outcomes and 1152 candidate predictor (mainly sparse binary) variables. When the scope of candidate predictors was reduced based on external knowledge, RF models achieved better calibration, that is, better agreement of predictions and observed outcome rates. However, prediction quality measured by cross-entropy, AUROC or the Brier score did not improve. We recommend appraising the methodological quality of studies that serve as an external information source for future prediction model development.
Introduction
A central aim for medical researchers is developing fit-for-purpose prediction models of prognosis outcomes [1][2][3]. With recent advances in prediction modeling, researchers now have available a wide range of approaches to choose from that can be grouped into two categories. For example, "data modeling" approaches set up a model structure describing the assumed data generating mechanism, using the observed data to estimate the parameters of that model-most of the traditional statistical modeling approaches belong to this category. In contrast, "algorithmic modeling" does not assume a stochastic data model, and comprises a variety of mostly nonparametric techniques, which use the observed data to find a function of the input to predict the output-many of these methods are referred to as machine learning (ML). The resulting function may be difficult to describe in terms of classical model structures and parameters but is designed to improve prediction accuracy compared to traditional models. This is often achieved using methods such as cross-validation (CV) to balance accuracy of predictions and overfitting [4]. While there are two distinct modeling cultures, there is no sharp distinction between them, and they share the common aim to minimize prediction error (comparing [2] and p. 223, [5]).
These methodology advancements have enabled the estimation of such prediction models in cases where the number of candidate predictor variables exceeds the number of observations. Efficient software is available to perform prediction modeling on large data sets within seconds using model fitting techniques based on data modeling or algorithmic modeling. Models are tailored to allow for direct variable selection (e.g., the Least Absolute Selection and Shrinkage Operator, Lasso, or statistical boosting [6,7]). These approaches select smaller variable subsets, improving the interpretability of results.
The traditional data modeling approach defines a suitable model structure that consists of: (1) a set of (candidate) predictor variables, (2) an often linear additive formula that weighs and combines the values of those predictors into a score and (3) an assumed distribution of the outcome given that score. Traditional inference techniques such as p-values or information criteria may then be used to iteratively reduce the set of predictor variables or extend the model with interactions or nonlinear terms [8][9][10]. Naturally, the assumed model structure represents a plausible mechanism that captures the outcome variability as observed in the data. Hence, model development is usually guided by the knowledge of domain experts (i.e., domain expertise). Also, the results from previous related studies (i.e., background knowledge) guide the selection of candidate predictors and their functional form in the model [11].
Algorithmic modeling approaches such as ML do not require similar assumptions as these approaches are designed to discover patterns from the observed data automatically. As a consequence, it has been argued that ML approaches require more training data to obtain prediction models that are as reliable as traditional modeling approaches [12]. Moreover, even if an ML algorithm is able to identify relevant predictor variables from a larger candidate set automatically, incorporating external information in the selection process may improve interpretability, variable selection accuracy, and consequently, prediction performance. This has been demonstrated using traditional modeling approaches, for instance, Bergerson et al. [13] proposed a weighted Lasso approach combining a penalty with weights based on external information. However, it is unclear to which extent, if at all, nonparametric algorithmic modeling approaches may benefit from using external information derived from prior studies, especially if those studies used traditional statistical modeling approaches, or even applied suboptimal methodology. Therefore, we investigated whether background knowledge generated by traditional modeling approaches, in particular a preselection of the predictor variables performed in previous studies by Lasso regression or by univariate selection, may help inform the design of ML models to obtain more accurate predictions. We focused on methods for binary outcomes in the setting where the number of candidate predictors is large but generally smaller than the number of observations. To investigate the benefit of background information applied with ML methods, we study the widely applied random forest (RF) [4,14].
The remainder of the manuscript is organized as follows. In Section 2, we briefly introduce the prediction methods considered in this paper and outline a typical prognostic research question arising in pharmacoepidemiologic research which motivated our investigation. We describe a large data set connected to the research question which has many common properties of Big Data. The data set is characterized by consisting of mainly binary predictor variables with extremely unbalanced distributions and two roughly well-balanced binary outcome variables. This data set will be used as the basis of a complex plasmode simulation, which will be explained subsequently. After reporting the results of our simulation in Section 3, the final Section 4 concludes with a discussion.

Logistic Regression

The logistic regression model predicts the expected value of a binary outcome variable Y ∈ {0, 1} by a linear combination of p predictor variables X 1 , . . . , X p as follows:

P(Y = 1 | X 1 , . . . , X p ) = π = exp(β 0 + β 1 X 1 + . . . + β p X p ) / [1 + exp(β 0 + β 1 X 1 + . . . + β p X p )]. (1)

The popularity of the logistic regression model may stem from the fact that the regression coefficients β 1 , . . . , β p can be interpreted as the log odds ratios with respect to the occurrence of an event (Y = 1) associated with a unit difference in X 1 , . . . , X p . The parameter β 0 corresponding to a constant X 0 = 1 serves to calibrate the model such that the sum of model predictions equals the sum of outcome events. The model parameters β 0 , . . . , β p are usually estimated by maximizing the logarithm of the likelihood, that is, the logarithm of the joint probability of the observed outcomes given the predictor variables and the model:

ℓ(β; x, y) = ∑ i=1,...,N [ y i log(π̂ i ) + (1 − y i ) log(1 − π̂ i ) ],

where π̂ i denotes the estimated probability of Y = 1 and y i the observed outcome for observation i. Predictions can be obtained by plugging in the estimates of β 0 , . . . , β p and the observed values of the predictors X 1 , . . . , X p for an individual into (1). In order to accommodate the nonadditive effects of predictors in the model, one can define further predictor variables as product terms of other predictors. Moreover, the nonlinear effects of continuous predictors can be considered by defining a set of nonlinear transformations of a continuous predictor as further predictor variables. Polynomial transformations, various types of spline bases and fractional polynomials are common choices for this task [11].

To include variable selection into the logistic model, we will contrast the following two approaches.

Univariate selection: In real studies, it is not clear up front which variables should be used as predictors. Although not recommended, and despite there being no theoretical justification for it, univariate preselection is still a popular method to reduce the number of variables to include [15,16]. Here, each candidate predictor variable is evaluated in a univariable logistic regression model as the only predictor variable. Those predictors that exhibit a "significant" association with the outcome variable Y are considered for the multivariable model. The significance level α determines how many variables will be included and is often set to 0.05 or 0.20.
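For illustration, a minimal R sketch of such a univariate preselection could look as follows; the data frame dat, the outcome name y and the chosen level alpha are hypothetical placeholders rather than the exact implementation used in this study.

# Univariate preselection sketch: screen each candidate predictor in a
# univariable logistic model and keep those with a p-value below alpha.
alpha <- 0.05
candidates <- setdiff(names(dat), "y")

pvals <- sapply(candidates, function(v) {
  fit <- glm(reformulate(v, response = "y"), data = dat, family = binomial())
  summary(fit)$coefficients[2, "Pr(>|z|)"]   # p-value of the single predictor
})

selected <- candidates[pvals < alpha]

# Multivariable logistic model with the preselected predictors (assumes 'selected' is non-empty)
final_fit <- glm(reformulate(selected, response = "y"), data = dat, family = binomial())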
Lasso logistic regression: In order to avoid overfitting or overparametrization of a model by estimating too many parameters from a data set with a limited number of observations, regularization approaches were developed. One of the most popular approaches is the Lasso, which is able to handle regression problems in which the number of predictors approaches or even exceeds the number of observations. Here a penalty term is subtracted from the log-likelihood, equal to a multiple of the sum of absolute regression coefficients, such that the penalized log-likelihood becomes

ℓ*(β; x, y) = ℓ(β; x, y) − λ ∑ j=1,...,p |β j |.

The multiplier λ is a hyperparameter in model fitting which is often optimized by minimizing cross-entropy by CV.
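A minimal R sketch of this approach with the glmnet package (used later in the paper); object names such as x, y and x_new are hypothetical, and cv.glmnet by default selects λ by 10-fold cross-validation of the deviance for binomial models.

library(glmnet)

# x: numeric matrix of candidate predictors; y: binary outcome coded 0/1 (placeholders).
cv_fit <- cv.glmnet(x, y, family = "binomial")   # default: 10-fold CV, deviance loss

# Predictors with non-zero coefficients at the deviance-optimal lambda
coefs <- as.matrix(coef(cv_fit, s = "lambda.min"))
selected <- setdiff(rownames(coefs)[coefs[, 1] != 0], "(Intercept)")

# Predicted probabilities for new data 'x_new' (hypothetical)
p_hat <- predict(cv_fit, newx = x_new, s = "lambda.min", type = "response")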
Random Forest
An RF is an ensemble of classification or regression trees in which each tree is grown on a bootstrap resample of the data set [17]. The number of trees in the ensemble is a model parameter. Each tree is constructed recursively: at a given node observations are split into two distinct subsets resulting in two child nodes. This procedure is repeated as long as each node contains a minimum number of observations (in which case it is called a terminal node), constituting a user-chosen parameter of an RF. At each node, the split procedure considers a random subset of the variables in the data set, the size of which is another parameter of the model and is often set to √ p (rounded to the next largest integer value), where p is the number of input variables. For each candidate variable, an optimal split point is selected based on a loss function chosen by the user. Among all candidate variables, the one is chosen that optimizes the loss function at its optimal split point. To obtain predictions for an observation the predictions from each tree for that observation are averaged across the ensemble. While the procedure above is very generic, probability RFs according to Malley et al. [18] were designed to not only provide accurate discrimination but also consistently estimate probabilities of Y = 1. In the case of a binary outcome, they use the Gini index measuring node impurity as the splitting criterion, so that the distribution within the child nodes is more homogenous than in the parent node. Furthermore, the minimum node size is set to 10%, and no pruning of the individual trees is performed. A binary prediction in each tree is obtained as a majority vote in the terminal node, and these predictions are then averaged over the whole ensemble to obtain the final probability estimate.
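To make this setup concrete, the following R sketch fits a probability forest with the ranger package referred to elsewhere in the paper; the data frame, outcome name and shown parameter values are illustrative assumptions, not the settings used in the study.

library(ranger)

# dat: hypothetical data frame with binary outcome 'y' and candidate predictors.
dat$y <- factor(dat$y)   # probability forests expect a factor outcome

rf_fit <- ranger(
  y ~ ., data = dat,
  num.trees   = 500,              # number of trees in the ensemble
  probability = TRUE,             # probability forest in the sense of Malley et al.
  importance  = "permutation"     # permutation-based variable importance
)
# mtry is left at its default, approximately the square root of the number of predictors.

# Out-of-bag probability estimates of class "1" and the most important variables
head(rf_fit$predictions[, "1"])
sort(rf_fit$variable.importance, decreasing = TRUE)[1:10]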
An attractive feature of RFs is the estimation of variable importance using out-of-bag data. Since the variable importance based on the Gini index is known to attribute higher importance to predictors with more splitting points (e.g., continuous variables or categorical variables with more than two categories), alternative approaches, for example, based on permutations of the individual predictors, are preferred [19].
Methods to Evaluate Performance of Model Predictions
In order to evaluate how well a model predicts the outcome, several performance measures are available [20]. Given the model predictions π̂ i , their logit transformations η̂ i = log[π̂ i /(1 − π̂ i )] and the true outcome values y i in a validation set, i = 1, . . . , N, we considered the following performance measures: the area under the receiver operating characteristic curve (AUROC), the Brier score, the calibration slope and the cross-entropy.
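As a rough sketch of how these measures can be computed from validation-set predictions in R: the calibration slope is taken here as the slope of a logistic recalibration model, and the cross-entropy is computed as the mean negative log-likelihood; the paper's exact scaling (sum versus mean) and all object names are assumptions.

# p_hat: predicted probabilities on the validation set; y: observed 0/1 outcomes (placeholders).
eps   <- 1e-15
p_hat <- pmin(pmax(p_hat, eps), 1 - eps)   # guard against log(0)
eta   <- log(p_hat / (1 - p_hat))          # logit-transformed predictions

cross_entropy <- -mean(y * log(p_hat) + (1 - y) * log(1 - p_hat))
brier         <- mean((y - p_hat)^2)

# Calibration slope: slope of a logistic regression of y on the logit of the predictions
cal_slope <- coef(glm(y ~ eta, family = binomial()))[2]

# AUROC via the rank (Mann-Whitney) formulation
r     <- rank(p_hat)
n1    <- sum(y == 1); n0 <- sum(y == 0)
auroc <- (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)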
Motivating Study
As a motivation for our research, we considered a pharmacoepidemiologic research question in which data on prescriptions of medicines and on hospitalizations were used to prognosticate the adherence to the prescription of a particular blood-pressure-lowering medicine. The research question arose during the conduct of the study of Tian et al. [21] but was not further considered by the authors. In particular, out of a registry of prescriptions and hospitalizations run by the Austrian social insurance institutions, a data set with all new prescriptions of lisinopril, an angiotensin converting enzyme inhibitor, that were filled between 2009 and 2012 in Austria was assembled. Patients were included if they filled a prescription of lisinopril following a wash-out period of at least 180 days. After that index prescription, patients were followed until they filled another prescription of the same substance or until six weeks after the index prescription, whichever happened earlier.
Patients in whom the wash-out period of 180 days preceding the index prescription or the outcome assessment window of six weeks following the index prescription were not fully represented in the database were excluded, and also patients who died within 6 weeks from index prescription were excluded. The outcome of interest was treatment discontinuation: patients who did not fill a prescription within six weeks were considered as "discontinuing treatment" (Y = 1), and patients who filled a prescription within 6 weeks as "continuing" (Y = 0). The purpose of the prediction model was to identify patients who are at risk for discontinuation of the lisinopril therapy.
The full data set consisted of 198,895 index prescriptions from 198,895 different patients and 1151 candidate predictors. Demographic descriptors, "recent" prescriptions and hospitalizations (within 14 days before the index prescription) and "previous" prescriptions and hospitalizations (from 180 days to 14 days before index prescription) were available as predictors. Prescriptions were recorded on the basis of the anatomical-therapeutical-chemical classification of level 2 [22] and for hospitalizations, the discharge diagnoses were coded as ICD-10 codes. Moreover, the occurrence of any hospitalization in the "recent" and "previous" time windows and the occurrence of any hospitalization longer than 14 days in those windows constituted four further binary variables. A data dictionary for the data set is available in Section S1 of the supplemental material. Apart from age in years and year of prescription, all variables were binary and their levels were coded as 0 (absent) and 1 (present). The distribution of the proportions of level 1 for all binary variables is depicted in Figure 1. The majority of binary predictor variables were sparse; the median of their averages was 1/1530, and the first and third quartiles were 1/3371 and 1/112, respectively. The outcome status "discontinuation" occurred with a relative frequency of 0.558.
Fitting an RF with the default settings of the R function ranger::ranger on three-fourths of the data and evaluating predictions in the remaining fourth, we obtained an AUROC of 0.646 and a calibration slope of 1.11. Figure 2 and Table S1 show the distribution of Gini variable importance values across the variables.
Setup of the Plasmode Simulation Study
A plasmode simulation study based on the observed properties of the motivating study was performed. The description of the simulation study follows the structured reporting scheme ADEMP as recommended by Morris et al. [23].
Aims
The aim of our plasmode simulation study was to evaluate different strategies to incorporate background knowledge generated from "preceding studies" in a newly developed ML-based prediction model in a "current study" with respect to the performance of the resulting model in a validation set. In particular, we considered several strategies to preselect predictor variables for the prediction model.
Data Generating Mechanisms
Our motivating study was used as the basis of this simulation study. The study data set is considered as the "population" and serves to define training sets for two preceding studies and a current study, and a validation set is used for evaluating the performance of the models developed in the current study. The data set contains the outcome variable treatment discontinuation (Y). To also cover situations with a stronger association of the outcome variable with the predictors, we simulated a second outcome variable as follows. First, we divided the complete data set randomly into four approximately equally sized disjoint subsets. Second, we fit an RF using three subsets as the training data set with all available predictor variables. This step was repeated for all four combinations of subsets. Third, we predicted the outcome, that is, we computed the predicted probabilities of discontinuation based on the RF models in the respective remaining fourth of the data set, and transformed the predicted probabilities into log odds. Fourth, we multiplied the log odds with a constant of 3.5 and back-transformed these "reinforced" log odds into "reinforced" probabilities. Fifth, for each observation, we sampled a new binary outcome variable based on Bernoulli distributions with parameters equal to the reinforced probability of that observation. This outcome variable, Y strong , was further considered as another target of prediction models. The relative frequency of the status "event" for Y strong was 0.634, and the RF achieved a cross-validated AUROC of 0.801 and a calibration slope of 1.17.
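A compact R sketch of this "reinforced" outcome construction, assuming the cross-fitted predicted probabilities are available in a vector p_hat (a hypothetical name):

# p_hat: cross-fitted predicted probabilities of discontinuation for each observation
log_odds            <- log(p_hat / (1 - p_hat))    # step 3: transform to log odds
reinforced_log_odds <- 3.5 * log_odds              # step 4: multiply by the constant 3.5
reinforced_p        <- plogis(reinforced_log_odds) # back-transform to probabilities

# step 5: draw the new binary outcome from Bernoulli(reinforced_p)
set.seed(1)
y_strong <- rbinom(length(reinforced_p), size = 1, prob = reinforced_p)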
We defined 10 scenarios by varying the sample size (4000, 2000, 1000, 500, 250) and the predictability of the outcome variable (weak or strong). In each scenario, the same sample size was used for the two preceding studies and the current study. The validation set had a fixed size of 10,000 observations and was also redrawn at each iteration of the simulation. Due to the sparsity of most binary predictor variables, they were likely to comprise only a single value in the simulated samples. The likelihood for such degenerate distributions increased with decreasing sample size. Hence the number of useable candidate predictors grew with sample size, which is typical for Big Data problems.
We considered 1000 independent replications in each scenario.
Estimands and Other Targets
The estimands in this study were the predictions from the model trained with the data from the "current study" evaluated in the validation set. Moreover, for descriptive purposes, we also recorded the number of predictor variables selected in the preceding studies and the number of non-degenerate predictor variables.
Methods
The first preceding study was analyzed using Lasso logistic regression, selecting the penalty parameter by optimizing the 10-fold cross-validated deviance (the default in cv.glmnet of the R package glmnet [24]). The second preceding study was analyzed by fitting a logistic regression model with all predictors that were significant at a level of 0.05 in univariate logistic regression models.
The current study was analyzed using an RF with five different models (M1-M5) to consider background knowledge, depending on which set of variables was used as input for the RF:
• All variables (M1, naïve RF);
• Those variables that were selected by the Lasso in preceding study 1 (M2);
• Those variables that were selected by the Lasso in preceding study 1 and univariate selection in preceding study 2 (M3);
• Those variables that were selected by the Lasso in preceding study 1 or univariate selection in preceding study 2 (M4);
• Those variables that were selected by the "better performing" model. This model was determined by applying the model from preceding study 1 (Lasso) and the model from preceding study 2 (univariate selection) unchanged on the data of the current study and comparing the resulting area under the ROC. The model with the higher AUROC was considered the "better performing" model (M5).

If the set of input variables contained variables that were degenerate in the current study, then these degenerate variables were not used by the RF. The five RF models were then applied to the validation set to compute the performance measures.
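A small R sketch of how the candidate sets for M1-M4 could be assembled from the selections of the two preceding studies; sel_lasso, sel_uni, all_vars and dat are hypothetical names for the selected variable vectors and the current training data.

# sel_lasso: variables selected by the Lasso in preceding study 1
# sel_uni:   variables selected by univariate selection in preceding study 2
# all_vars:  all candidate predictor names in the current study
vars_M1 <- all_vars                       # naive RF: all variables
vars_M2 <- sel_lasso                      # Lasso selection only
vars_M3 <- intersect(sel_lasso, sel_uni)  # selected by both preceding studies
vars_M4 <- union(sel_lasso, sel_uni)      # selected by at least one preceding study

# Drop variables that are degenerate (constant) in the current training data 'dat'
keep_nondegenerate <- function(vars, dat) {
  vars[sapply(dat[, vars, drop = FALSE], function(v) length(unique(v)) > 1)]
}
vars_M2 <- keep_nondegenerate(vars_M2, dat)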
In case the Lasso or univariate selection selected no variables, we assumed that background knowledge was not available, as if the corresponding preceding studies had never been published. Consequently, we proceeded as follows:
• If for M5 one of the models was empty, we used the predictors from the other model as input; if none of the preceding studies selected any predictors, we used all predictors in the current study.
Performance Measures
As performance measures, we considered the AUROC, Brier score, calibration slope and cross-entropy introduced in Section 2.2. These measures were computed in each replication and were then summarized by their respective means and standard deviations or, in the case of the calibration slope, with the mean squared values of their logarithms across the replications of each scenario [25]. The latter quantity directly evaluates the deviation of the log calibration slope from its ideal value of zero. For the cross-entropy, we also evaluated the maximum contribution of a single observation in the validation set, given by max i=1,...,N −[y i log(π̂ i ) + (1 − y i ) log(1 − π̂ i )].
Pilot Study
The simulation study was first conducted with 100 replications for feasibility, then repeated with 1000 replications.
Software
We used the statistical software R (version 4.0.4) and the following software packages and functions to fit the models: cv.glmnet from the package glmnet for the Lasso logistic regressions and ranger from the package ranger for the random forests.
All remaining parameters were left at their default values.
Selected Predictors in the Preceding Studies
The number of selected variables by Lasso and by univariate selection depended on the number of candidate predictors but also on sample size. Table 1 provides descriptive statistics.
Comparison of Performance
Under weak predictability, the naive RF, that is, the RF not using any background knowledge (M1), resulted in the best performing models at validation for any sample size (see Tables 2-4 for mean cross-entropy, mean AUROC and mean Brier score and Tables S2-S10 for the numerical values of mean calibration slope, MSE of log calibration slope, maximum contribution to cross-entropy and standard deviations of all measures). Under strong predictability, the naive RF not using background knowledge was among the optimal methods when evaluating AUROC, Brier score and cross-entropy (Tables 2-4). Considering the calibration slope, however, the naive RF produced the worst calibrated models (Figure 3). This was particularly evident when considering the MSE of log calibration slopes, which combines average performance with the variation of performance (Tables S4 and S5). For this measure, the RF fueled with predictor sets derived from the union of Lasso and univariate selection performed best at sample size 250. At higher sample sizes, the RF using the intersection of Lasso and univariate selection outperformed all others with one exception at sample size 1000, where method M5, the RF using the preselection based on the "better performing" model (among the Lasso and the univariately selected model trained in the preceding studies) performed equally well.

The non-optimality of the naive RF mainly originated in calibration slopes that were too high, that is, in predictions that were too narrowly scattered around the observed event rate. This behavior attenuated with increasing sample size, but the mean calibration slope was still close to 1.3 at N = 4000. With weak predictability, the opposite was observed: at N = 250, the mean calibration slope was 0.8, and it reached a value close to 1 at N = 4000. In Figure 3, we compared the mean calibration slopes of the naive RF under weak and strong predictability to those of the Lasso models, and to those of the RF using background knowledge in various ways. Unlike the naive RF, mean calibration slopes of the Lasso were fairly constant and independent of sample sizes, yielding values between 1.02 and 1.22. The calibration slopes of the RF using background knowledge moved towards those of the naive RF with increasing sample sizes. This behavior led to an improvement of the calibration slopes at weak predictability but to a paradoxical shift away from 1 with increasing sample sizes at strong predictability. The mean calibration slopes of the RF based on the intersection of selected sets (M3) at weak predictability and smaller sample sizes were impeded by highly influential points, as also suggested by the high standard deviations of that measure (Table S3).
Discussion
This plasmode simulation study evaluated if RFs benefit from the preselection of predictors from previous studies that employed a different modeling method. In the simulation, we varied two main drivers of predictive accuracy: the actual predictability of the outcome and the sample size. A special feature of our study was that the number of candidate predictors naturally increased with sample size, as the set of candidate predictors mainly consisted of sparse binary variables. While commonly used measures for predictive accuracy like the AUROC or the Brier score did not improve when restricting the set of candidate predictors, RF models achieved better calibration, that is, better agreement of predictions and observed outcome rates, when the scope of candidate predictors was reduced based on the knowledge created in previous studies. This was observed when the outcome was strongly predictable, but not under weak predictability, where the naïve RF was already the optimal method among the comparators and across all performance measures.
It is remarkable that under strong predictability, the calibration of the naïve RF could be improved by preselection given that the preselection was not based on measures compatible with the nonparametric nature of RFs, but rather based on a linear predictor model fitted with the Lasso, with or without combination with univariate selection. At smaller sample sizes, the RF benefitted from more generously sized candidate sets, while with larger samples, it performed best when based on smaller sets defined as the intersection of Lasso and univariate selection. Interestingly, enriching the candidate set or even blending the Lasso selection with results from univariate selections did not severely worsen the RFs. This is in contrast with results from traditional multivariable modeling, where it is known that because of correlations between predictors, univariate association with the outcome is neither a necessary nor a sufficient condition for a predictor to be important in a multivariable context [15]. When training RFs, for each split in each tree, only a random subset of the predictors is considered and the split is performed in the predictor with the strongest univariate association with the outcome. Hence, by its construction, a RF may benefit from choosing from candidate predictors with strong univariate associations.
Under weak predictability, there was no benefit in restricting the candidate set to predictors selected in preceding studies. Probably, the reason for this was the poor performance and high instability of the models from the preceding studies in these scenarios. At weak predictability, a model can hardly safely distinguish between real and irrelevant predictors as the true predictor-outcome associations are heavily overlaid with noise. Hence, when using predictors that were included in models derived in previous studies, one should also pay attention to the predictive performance of those models. Poorly performing models are probably not so relevant to consider.
The AUROC has several shortcomings because of its nonparametric construction and hence should not be used exclusively to evaluate the performance of prediction models. A high AUROC is achieved by a prediction model as soon as the predictions are correctly ordered, but for that ordering the absolute predicted probabilities are irrelevant. Hence, even a prediction model that provides predictions that are uniformly too high or too low, or are too far off or too close to the marginal outcome rate, may yield a high AUROC. Consequently, the AUROC may obscure problems with the calibration-in-the-large as well as calibration problems such as over- or underfit (see Figure 4 for exemplary calibration plots). However, in the application of a model, the actual accuracy of the predicted probability for an individual is important, and not so much whether it is larger or smaller than that of another individual. Also, the Brier score or the cross-entropy cannot reveal local biases of the prediction model as they average or sum up the prediction error (measured by different loss functions) over the observations of a validation set. For example, it is still possible that there are differences in bias and prediction quality between high-risk and low-risk individuals, which are averaged in these scores. Calibration as a measure of predictive performance has not yet received the attention it deserves in the evaluation of prediction models [26]. In particular, in this study, we focused on the calibration slope rather than on the calibration in the large. The latter measure evaluates if the mean prediction matches the observed frequency of the predicted outcome status, and is of importance particularly if prediction models are transferred to different target populations. This was not the case in our study and miscalibration in this sense was not expected. We concentrated our investigation on the calibration slope instead. Applying an ideally calibrated model for outcome prediction, one can expect that the predicted probability of the outcome of interest for a subject is similar to the actual relative frequency of the outcome status in similar individuals. Adequate calibration is thus related to the local unbiasedness of residuals, which can be easily checked by smoothed scatterplots of residuals against model predictions. Such plots are central diagnostics in traditional regression modeling but seem to have fallen into oblivion in many contemporary validations of prediction models, which are dominated by nonparametric discrimination measures like the AUROC [27,28].
Already, in his seminal paper on RFs [17], Breiman proposed the iterated estimation of a RF in order to reduce the number of candidate predictors and improve the performance of the forest. In particular, he suggested fitting a RF and evaluating the variable importance for each predictor. In a second run, only the most important predictors from the first run are included. However, Breiman suggested iterating the application of a RF within a single data set. In our study, we simulated the preselection of variables in preceding studies, a situation that prognostic modelers are often confronted with in practice if they screen the literature on similar preceding studies. Moreover, we mimicked another typical practical situation that modelers often face: the preselection of variables in preceding studies is often not compatible with the considered modeling algorithm in the current study, and may even lack methodological rigor. While the use of background knowledge for preselecting predictors has been recommended [29], previous studies on the same topic are often a questionable source if conducted with inferior methodological quality or if poorly reported [30,31].
Previous comparative studies that have evaluated performance at different sample sizes, in particular simulation studies, have often focused on sets of predictors that did not differ between sample sizes. Our study mimicked the typical situation when working with real data sets that the larger the sample size, the more potential predictors can be considered because sparse predictors are more likely to have degenerate distributions in smaller samples. As a consequence, there is a natural preselection towards more robust predictors in smaller samples, and hence some quantities such as the Brier score or crossentropy do not decrease proportionally to sample size as one might expect with a constant predictor set.
We assumed the same sample sizes for the preceding studies and the current study. This limitation kept the numerical results concise, and an extension is probably not necessary. In a real prediction modeling situation, a researcher might not rely on background knowledge from "smaller" studies than the current study. In contrast, if predictor sets have been generated by larger preceding studies, the observed effects on the performance of the RF will almost certainly amplify.
We used observational data as our "population", and hence we were unable to say which predictors were truly important and which were irrelevant for prediction. We tried to address this limitation by providing variable importance measures for the variables based on the RF fitted on the full data set. Still, since we do not know the data generating mechanism, we are unsure if ideally predictors should be combined additively or by means of more complex models such as those revealed by RFs. Because many predictors were binary and sparse, additivity is perhaps a plausible assumption. Nevertheless, there could be predictors that are optimally combined using logical OR links. In our study, the predictor variables had differing importance values, but one could also consider variations of this data-generating mechanism. First, there could be a group of only a few variables with high importance, while all other variables have low importance. Second, all variables could have very similar importance. In the first scenario, it is expected that preceding studies could also identify these important variables even if they used Lasso or univariate selection. In the main study, this would also be revealed by the RF, even if it was not fueled with the knowledge from the preceding studies. In the second scenario, when all predictors are similarly (un)important, the selection in preceding studies may be quite random and have no relevance for the main study apart from reducing the variable space. However, these conjectures cannot be proved with our study. By creating a second, artificial outcome variable in which the association of the predictors with the outcome was reinforced, we could study how the strength of predictability of the outcome impacts the benefits of using knowledge from preceding studies.
In our simulation, we assumed that preceding and main studies were sampled from the same population. This assumption is not always justified in practice, where the cohorts used in different studies are sampled from different populations, with different distributions and different predictors being important for predictions in those populations. In such cases, results from preceding studies will not be fully transferable to the main study. Similarly, preceding studies' results will be less informative if based on small samples, as then the distributions of predictors and their associations with the outcome will vary randomly between studies.
In our motivating example, many predictors indicated the prescription of particular substances, and it is plausible that patients with the same etiology may have received different substances depending on the preferences of the prescribers, leading to a negative correlation between the substance indicators. In such situations, the elastic net [32] might be preferred over the Lasso as it does not select only one predictor out of a group of correlated predictors but tends to include several predictors from such a group. This may provide more robustness against model instability. Although we did not include the elastic net as a method performed in preceding studies, our M4 (RF based on the union of Lasso and univariate selection) may be seen as an approximation to it. By ignoring correlations among predictors, univariate selection may select more than a single representative from a group of correlated predictors with similar associations with the outcome. Therefore, its selection path somehow approximates, for example, the elastic net or the group lasso. Under strong predictability, M4 indeed often outperformed M2 (RF based on Lasso selection). Under weak predictability, however, we could not observe any benefits from applying M4, and then we would also not expect any benefits from using the elastic net instead of the Lasso.
Previously, Heinze et al. [10] pointed out the importance of including predictors with proven strong associations with the outcome in regression models irrespective of their observed association with the outcome in the current study. RFs allow the prioritization of predictors in constructing trees, and a useful and practically relevant strategy may be to classify predictors by their previous relevance in prediction models. Predictors that were selected by preceding studies could be prioritized when constructing trees, while predictors that were not previously used in models may be assigned a lower probability to be considered for node splits. For example, in the ranger package the parameters split.select.weights and always.split.variables allow such a distinction between "strong" and "unclear" predictors, by assigning higher weights to predictors assumed to be strong, or requesting to always split these variables. It was out of the scope of this investigation to optimize and derive general recommendations on the use of these parameters to incorporate background knowledge into RF fitting.
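A minimal sketch of how such prioritization could be expressed with ranger; the weight values and variable names are purely illustrative assumptions and not recommendations derived from this study.

library(ranger)

vars        <- setdiff(names(dat), "y")                    # hypothetical candidate predictors
strong_vars <- c("x_prev_selected_1", "x_prev_selected_2") # assumed "strong" predictors

# Higher selection probability for predictors deemed strong in preceding studies;
# the weights are assumed to follow the column order of the predictors.
w <- ifelse(vars %in% strong_vars, 1.0, 0.2)

rf_prior <- ranger(
  dependent.variable.name = "y", data = dat[, c("y", vars)],
  probability             = TRUE,
  split.select.weights    = w
)
# Alternatively, the always.split.variables argument can be used to force chosen
# variables to be split candidates at every split instead of weighting them.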
Conclusions
Overall, we recommend the use of background knowledge generated by preceding studies when RFs are considered for developing prediction models. Such knowledge may greatly help to improve the calibration of RF predictions and particularly to avoid underfitting. Even preselection that is not fully compatible with the nonparametric RF construction may be beneficial. However, researchers are advised to critically evaluate the methodology in such preceding studies, to restrict their consideration to studies in which adequate modeling algorithms have been applied, and to focus on studies that could demonstrate a good performance of their proposed model. In particular, in line with Sun, Shook and Kaye [15] and Hafermann et al. [31] we advise against considering only predictors that proved "significant" in univariate selections in a preceding study that could not demonstrate an appropriate predictive performance of the multivariable model.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/e24060847/s1, Section S1: Data dictionary of the motivating study. Table S1: Motivating study: permutation based predictor importance for the 20 most important variables; Table S2: Simulation study, mean calibration slope; Table S3: Simulation study, standard deviation of calibration slope; Table S4: Simulation study, mean MSE of log calibration slope; Table S5: Simulation study, standard deviation of MSE of log calibration slope; Table S6: Simulation study: mean of maximum contribution to cross-entropy; Table S7: Simulation study: standard deviation of maximum contribution to cross-entropy; Table S8: Simulation study: standard deviation of cross-entropy; Table S9: Simulation study: standard deviation of AUROC; Table S10: Simulation study: standard deviation of Brier score. The relevant references are [21,[33][34][35]. | 2022-06-23T15:21:43.597Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "cbb9070b4de955ade242fbf25a018c896fd0c180",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1099-4300/24/6/847/pdf?version=1655716423",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65a7b8216357b8c4f070c41f74446968c2c2d8b5",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
232314688 | pes2o/s2orc | v3-fos-license | Use of Time-Series NDWI to Monitor Emerging Archaeological Sites: Case Studies from Iraqi Artificial Reservoirs
Over the last 50 years, countries across North Africa and the Middle East have seen a significant increase in dam construction which, notwithstanding their benefits, have endangered archaeological heritage. Archaeological surveys and salvage excavations have been carried out in threatened areas in the past, but the formation of reservoirs often resulted in the permanent loss of archaeological data. However, in 2018, a sharp fall in the water level of the Mosul Dam reservoir led to the emersion of the archaeological site of Kemune and allowed for its brief and targeted investigation. Reservoir water level change is not unique to the Mosul Dam, but it is a phenomenon affecting most of the artificial lakes of present-day Iraq. However, knowing in advance which sites will be exposed by a decrease in water level can be challenging, especially without previous knowledge, field investigations, or high-resolution satellite images. Nonetheless, by using time-series medium-resolution satellite images, combined to obtain spectral indexes for different years, it is possible to monitor "patterns" of emerging archaeological sites from three major Iraqi reservoirs: Mosul, Haditha and Hamrin lake. The Normalised Difference Water Index (NDWI), generated from annual composites of Landsat and Sentinel-2 images, allows us to distinguish between water bodies and other land surfaces. When coupled with a pixel analysis of each image, the index can provide a means of highlighting whether an archaeological site is submerged or not. Moreover, using a zonal histogram algorithm in QGIS over polygon shapefiles that represent a site surface, it is possible to assess the area of a site that has been exposed over time. The same analyses were carried out on monthly composites for the year 2018, to assess the impact of monthly variation of the water level on the archaeological sites. The results from both analyses have been visually evaluated using medium-resolution true colour images for specific years and locations and with 3 m resolution Planetscope images for 2018. Understanding emersion "patterns" of known archaeological sites provides a useful tool for targeted rescue excavation, while also expanding the knowledge of the post-flooding impact on cultural heritage in the regions under study.
Introduction
Among the many threats to which archaeological sites are subjected in the Middle East and North Africa (MENA) region, dam construction and artificial lake formation have been some of the most common over the last 50 years (Figure 1). During this time, hundreds of dams have been built in the Middle East, providing undeniable benefits but also having an impact at the socio-economic, environmental and geopolitical level [1][2][3], [4] (pp. 13-14), [5] (pp. 4-6). Archaeological sites were also impacted by the construction of dams and the subsequent formation of artificial lakes, a threat soon recognised by national and international institutions. Several salvage projects, consisting of archaeological surveys and rescue excavations, have been carried out in different regions where cultural heritage was endangered by the construction of dams [6,7,8,9]. Some of the best known and best-published examples for Near Eastern archaeology are the salvage surveys and excavations carried out from the late 1960s until 2010-2011 along the Euphrates and Tigris rivers (for the Euphrates, see, e.g., refs [6,7,[10][11][12][13]; for the Tigris river see, e.g., refs [14][15][16][17]) (Figure 2). However, two studies have pointed out that the general awareness of the potential loss of cultural heritage linked to dam construction has been less than optimal, even if archaeological survey projects were carried out in the past [18][19][20]. Using Turkey as a case study, Marchetti et al. [19] describe how heritage protection still lacks a systematic approach to the safeguarding of archaeological sites and features, both pre- and post-flooding [19] (pp. 20-21), [21] (p. 13), [20] (pp. 30-32). Regarding the Euphrates, they point out that only 12 out of the 23 areas that would be affected by dams along the river have been surveyed.

After this brief overview, it is clear that there is evidence of a significant loss of archaeological data due to water submersion. However, we have evidence that some sites might still be accessible under specific conditions of receding waters. As explained below, these conditions can be prompted by natural or anthropic events, such as climate change or water management policies. Identifying and monitoring water bodies has been a common practice in disciplines such as geography or geology, and one of these studies can be used as a starting point for this research. A study group from the University of North Carolina explored the impact of climate- and politics-related events on some artificial lakes of present-day Iraq, with a specific focus on the Haditha and Mosul reservoirs [27]. Using remote sensing data, they explored lake-surface changes from their formation in 1985 and 1986 to 2016. The result was a better understanding of how a combination of anthropic and natural events brought drastic changes in the lake-surface areas over the last 30 years. Sudden changes, registered for the years 1991, 2001, 2009 and 2015, were mostly linked to water management policies tied to political events (e.g., the series of conflicts from the Iraq-Iran war to the ISIS invasion, [27], pp. 271-273). On the other hand, prolonged years with low water levels were linked to a reduction in precipitation along the Euphrates and Tigris rivers, coupled with the number of dams upstream of each reservoir. A more widespread drought has been registered in the area since 1999, with a peak in 2009, which in turn affected the water level of the reservoirs [4] (pp. 58-59), [27] (pp. 273-276), [28] (pp. 270-276).
Usually, remote sensing research focused on studying submerged archaeological features relies on methods which are difficult to apply in countries such as Syria and Iraq, e.g., airborne laser remote sensing [29] or sonar technologies mounted on boats [30,31]. In the Near East, research has focused mainly on mapping relict water features [32][33][34][35][36] or used coarser-resolution imagery that does not allow for immediate validation of the results [37,38]. Instead, cultural heritage monitoring using remote sensing techniques has focused mainly on looting and destruction caused by conflicts [39][40][41][42][43][44][45][46] (the EAMENA project also released an online database with the location and types of damage affecting archaeological heritage, available at http://eamena.arch.ox.ac.uk/ (last accessed on 14 December 2020)). On the other hand, other studies that focused on water body detection or temporal change mapping [47][48][49][50][51][52] were never linked to archaeological heritage research, even if we now know that sites can be affected by these phenomena as well (on this topic, using the example of the Delmej reservoir in southern Iraq, see Marchetti et al., 2018 [21], pp. 12-15; on temporal change detection for mapping damage to archaeological sites caused by human activities but not focused on dam construction, see also Rayne et al., 2020 [53]). For these reasons, the aims of this paper were:
• To evaluate whether more archaeological sites than the two mentioned before are affected by the cycles of water retraction;
• To establish a systematic and easy-to-reproduce methodology for the mapping and monitoring of these events over a long time span, on an annual basis (a minimal sketch of the underlying NDWI computation follows this list). The selected time span will cover from the completion of the dams to the present day, but the method could be adapted to different situations;
• To evaluate the impact of the interannual variability of the water level and its impact on the archaeological sites, by monitoring it monthly over a single year.
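As referenced in the list above, a minimal R sketch of the NDWI computation and water masking underlying this methodology; the band matrices, the zero threshold and the site mask are illustrative assumptions rather than the exact processing chain used in this study, which relies on annual composites and a zonal histogram in QGIS.

# NDWI sketch, assuming the McFeeters formulation (Green - NIR) / (Green + NIR).
# 'green' and 'nir' are hypothetical numeric matrices holding the green and
# near-infrared bands of an annual composite resampled to a common grid.
ndwi <- (green - nir) / (green + nir)

# Simple water mask: NDWI above 0 is treated as water (the threshold is an assumption
# and in practice may be tuned per reservoir and per sensor).
water_mask <- ndwi > 0

# Share of a site polygon's pixels that are exposed (non-water) in a given year,
# with 'site_pixels' as a hypothetical logical matrix marking pixels inside the site.
exposed_fraction <- sum(!water_mask & site_pixels, na.rm = TRUE) / sum(site_pixels, na.rm = TRUE)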
The monthly analysis was carried out over the year 2018, which was chosen because a sharp lowering of the water level was registered in all three reservoirs. The examination of the water level fluctuations and their impact on archaeological heritage is not only interesting from the research point of view, but can also be useful in guiding future targeted excavations or preservation activities.
Study Areas
Given the evidence quoted above, the choice of the Haditha and Mosul Dam lakes as study areas seemed obvious. Moreover, both areas were surveyed before the construction of the dams, and archaeological reports are available for each of them (see details in Section 2.2). However, there is a third reservoir that can be included as well: the Hamrin Dam area was surveyed before its flooding in 1981, and many archaeological sites were recovered beforehand [54,55]. As described below (Section 2.1.3), the dynamics of the resulting artificial lake make the Hamrin reservoir another ideal candidate for this study.

Haditha Dam

The Haditha lake is prone to strong surface reductions: for example, during the droughts recorded in 2007-2009 and 2013-2014, the lake surface area dropped by 72% and 62%, respectively [27] (pp. 271, 272).
Mosul Dam
The Mosul Dam area, located along the Tigris river, extends roughly from the north-west of the reservoir (37°0′27.9792″ N, 42°21′5.1624″ E) to the south-east of the dam (36°33′32.49″ N, 43°1′57.3096″ E) (Figure 5). The dam itself, the largest in Iraq, was completed in 1986, creating the Mosul reservoir (previously known as Lake Saddam), which covers an area of approximately 372 km² with a capacity of 11.2 km³. The Mosul Dam lake, compared to the Haditha Dam, is less prone to substantial reductions in water surface, possibly because of the presence of fewer dams upstream. Periods of substantial water loss are more or less the same as for the former dam, but with a significant loss in 2011, when the lake surface area dropped by 50% as a result of a water management policy aimed at avoiding a dam failure [27] (p. 276).
Hamrin Dam
The Hamrin Dam area, contrary to the other two areas, does not extend along the two main Mesopotamian rivers, but instead along the Diyala river, one of the tributaries of the Tigris. The area extends from the north-west of the reservoir (34°22′57.126″ N, 44°49′16.4928″ E) to its south-east (34°2′1.9392″ N, 45°12′30.6576″ E) (Figure 6). The dam was completed in 1981, forming the Hamrin reservoir, which extends for approximately 340 km² with a capacity of 4.2 km³. The Hamrin lake, unlike the former two reservoirs, is subject to frequent surface area changes, sometimes affecting the very shape of the lake. Periods of low water level are similar to those of the other two dams, especially regarding the effects of widespread droughts (1999-2001; 2009; 2013-2014). However, the most substantial reduction in the lake area was registered in 2008, when it dropped by 80%, due to the damming of the Lund river in Iran [4] (pp. 139-140), [52] (pp. 58, 59).
Archaeological Sites
This research also relies on the knowledge of site locations; however, accurate coordinates for archaeological sites are not always available. This is especially true for settlements investigated by older salvage projects which, due to time constraints, rarely revisited a site after its first discovery, especially if it was not a target for rescue excavation. Moreover, technological limitations impacted the quality of the information recorded about site locations [56] (pp. 47-50). In order to work properly in a GIS environment, data on archaeological sites from the study regions were gathered from two primary sources:
• An open and freely available dataset of Google Earth placemarks, distributed as a .kmz file named "ANE.kmz", which stores the name and location of more than 2500 archaeological sites across Egypt and the Near East;
• Site distribution maps from the published reports of the surveys carried out in each area before the construction of the dams.

The addition of the second source of data was necessary because the ANE.kmz, although massive, does not store a complete inventory of all the archaeological sites in the study areas.
In order to digitise sites from the distribution maps with enough accuracy, it was necessary to operate in a series of steps, which are part of a common procedure for repositioning sites in GIS, especially for Near Eastern archaeologists, who often deal with old survey data. These steps, apart from georeferencing the maps, combine the analysis of site location descriptions and topographic maps provided by scientific reports and the use of satellite imagery, in order to improve the accuracy of repositioning a site in GIS (see ref [56] pp. 50-53 for an overview of the process). However, it was not always possible to relocate all known sites in each area with enough confidence, and the resulting dataset used in this research is necessarily incomplete.
The final dataset consisted of 54 sites for the Haditha Dam, 48 for the Mosul Dam, and 42 for the Hamrin Dam, for a total of 144 archaeological sites (see also Section 2.4.2). The dataset used here for the Mosul Dam is different from the one published in Sconzo; Simi, 2020 [60]. While the location precision of the sites analysed in this paper was checked against that dataset, the overall accuracy and the related results may differ. The Mosul Dam dataset published in Sconzo; Simi, 2020 [60] will be the target of a future, more accurate study conducted by P. Sconzo, F. Simi, and A. Titolo using the same methodology applied here.
This research also relies on the possibility of measuring the resurfaced extent of an archaeological site. For this reason, polygons representing the extent of a site were drawn in GIS, based on their visibility on satellite images coupled with topographic maps, when available. It was not possible to reconstruct the actual extent of all the sites available; therefore, polygons were drawn only for those sites that satisfied the following conditions:
• On satellite images, the site shows evident features, such as walls, that delimit its extension;
• The site shows visible topographic features (e.g., a Tell) that can be taken as its minimum extent.
Such conditions were necessary in order to account for an ongoing discussion about the reliability of reconstructing overall site dimensions and of defining clear limits for archaeological settlements. This discussion is also tied to the matter of how Near Eastern archaeological surveys have historically been conducted (see, e.g., Banning, 2002 [61], pp. 81-84; Lawrence, 2012 [56], pp. 54-56). All these elements call for careful consideration when dealing with site extents. For this reason, simple dimensions such as "50 × 100 m" provided by survey reports were not considered here, and only sites that met the conditions highlighted above were included in the analysis. It was possible to reconstruct the surface extent of 51 sites: 11 for the Haditha Dam, 9 for the Mosul Dam, and 31 for the Hamrin Dam.
Satellite Images
The goal of this research was to monitor events over 40 years, but also to be as easy to reproduce as possible; therefore, satellite images were chosen accordingly. To cover a time span from 1984 to the present day, the obvious choice was to rely on the Landsat satellites, specifically Landsat 5, 7 and 8. Some Landsat 5 images were also acquired as replacements for Landsat 7 images that were too problematic to use due to the Scan Line Corrector (SLC) failure. For more recent years (from 2015 to 2019), Sentinel-2 images were acquired. The result was an average of 36 images for each area; however, because it was necessary to produce Normalised Difference Water Index (NDWI) values for the year before the completion date of the Hamrin Dam (1981), one Landsat 2 image was acquired as well, only for that area (Table 1). All images were acquired as Top of Atmosphere (TOA) reflectance products (for details, see Section 2.4).
The monthly interannual analysis for 2018 was carried out on 12 Sentinel-2 images for each area, for a total of 36 images.
To obtain a visual confirmation of the results from the pixel analysis of the NDWI images, it seemed more convenient to use higher-resolution imagery, at least for this last stage. A good compromise between cost-effective and good-resolution imagery is provided by Planetscope images, distributed by Planet at no cost after a European Space Agency (ESA) Third Party Mission (TPM) application. The Planetscope satellite constellation is formed of more than 120 individual satellites (named DOVE) that cover the globe daily at a 3-3.7 m Ground Sampling Distance (GSD). More information is available at: https://www.planet.com/products/ (last accessed on 14 December 2020). Access to Planetscope images was granted for the entire year of 2018 and limited to the three study areas indicated above. Between specific close-up scenes and monthly snapshots, 46 Planetscope Ortho Scene (Level 3B) images were gathered through the Planet Explorer portal (Figure S1; the portal is accessible at https://www.planet.com/explorer (last accessed on 14 December 2020)). For previous years, Sentinel-2 or pansharpened Landsat 8 images were used instead.

Figure 7 illustrates the methodological approach adopted in this study. Firstly, an assessment of the processing methods useful to discriminate between submerged and emerged areas was made, and the NDWI was chosen as the most appropriate. Secondly, NDWI images were generated in Google Earth Engine and then imported into QGIS for the pixel analysis. This latter stage was carried out using Saga's Add Raster Values to Points algorithm on the archaeological sites point shapefile. The analysis of the extent of the emerging sites was carried out in QGIS as well; it required a prior reclassification of the NDWI images, paired with the use of the Zonal Histogram algorithm on the polygon shapefiles. The results obtained were then checked with Planet satellite images for the year 2018, and with Sentinel-2 or Landsat images for any previous year not covered by the assigned Planet quota. To ensure reproducibility, two QGIS models and one R script (.rsx) to use in QGIS were created; moreover, the analysis was also reproduced in R, and both the R code and the QGIS models and code are linked in the Supplementary Materials.
NDWI
Medium-resolution images alone can rarely be used to visualise archaeological sites; hence, to achieve the aims of this research, there was a need for a method able to clearly discriminate between land and water surfaces.
There are different techniques available to delineate and map water features (for an overview, see refs. [48,63]), but one of the most widely used is the Normalised Difference Water Index (NDWI). As argued by Du et al. [48] (pp. 672, 673), the NDWI is a suitable tool for mapping water features, because it is easier to obtain and generally more reliable than single-band classification methods, such as density slicing (on this topic, see also Qiao [64]; the same approach was also used by Marchetti et al., 2019 [19]). Both qualities fit the scope of this research well, and this index was therefore chosen to delineate water masses.
Two formulae are generally used to obtain the NDWI, by combining the green band with either the near-infrared (NIR) band or the short-wave infrared (SWIR) band of satellite images. These formulae are usually called after their creators, or named NDWI [65] and modified NDWI (MNDWI) [66], respectively:

NDWI = (Green - NIR)/(Green + NIR) (1)

MNDWI = (Green - SWIR)/(Green + SWIR) (2)

In the NDWI images, values close to or below an arbitrary threshold (usually 0, but for other examples see Ji et al., 2009 [67]) can be identified as land formations, while values above the threshold can be classified as water. There is no unanimous agreement about which of the two formulae works best in every situation, and the literature on their use mostly shows that the benefit of each band can be very situational and tied to the immediate surroundings of the water bodies [50,68-72]. The formula used here is Equation (2), because it is generally agreed that, unlike Equation (1), it can better distinguish between water and built-up areas without returning false positives [66] (pp. 3026, 3027). One weakness of this formula is, however, that it tends to return positive values when snow is present [51]. The selected study areas do not have dense vegetation or snow around them, but can present sparse built-up features close to the lakes; therefore, Equation (2) seemed more appropriate. The NDWI is generally aimed at identifying water bodies, but if the location of an archaeological site is known, the index can be used to check whether that location is under water or not. In fact, on satellite images, the basic difference between an emerged and a submerged archaeological site will be whether its pixels can be identified as water or not.
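As a minimal, purely illustrative sketch of Equations (1) and (2) and of the 0 threshold used throughout this paper, the snippet below computes both indices from band arrays with NumPy; the band values shown are hypothetical reflectances, not data from the imagery used in this study.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI, Equation (1): (Green - NIR) / (Green + NIR)."""
    green, nir = np.asarray(green, dtype=float), np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir)

def mndwi(green, swir):
    """Modified NDWI (Xu), Equation (2): (Green - SWIR) / (Green + SWIR)."""
    green, swir = np.asarray(green, dtype=float), np.asarray(swir, dtype=float)
    return (green - swir) / (green + swir)

def classify_water(index, threshold=0.0):
    """Binary mask: True where the index is above the threshold (water)."""
    return index > threshold

# Hypothetical 2 x 2 reflectance values, for illustration only.
green = np.array([[0.10, 0.08], [0.12, 0.05]])
swir = np.array([[0.02, 0.15], [0.03, 0.20]])
index = mndwi(green, swir)
print(index)
print(classify_water(index))  # True = water, False = land/emerged surface
```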
Google Earth Engine
The processing to obtain the NDWI images was carried out in Google Earth Engine (hereafter GEE), because it is capable of processing a large quantity of data in a short time [73]. The number of raw images necessary would have also required significant amounts of space on any physical drive, a problem that can be avoided by using the cloud-based GEE platform. A script was written to generate the NDWI images for every year and area using Equation (2) from Section 2.4.1. Each image was acquired as TOA reflectance, and a cloud masking function was applied to the data as well. During the process, a filter for a low cloud pixel percentage was applied; however, this percentage was tweaked in cases when no images satisfied those criteria. Nonetheless, there were still some problematic years in which no images without dense cloud cover were available, e.g., 1985 for the Haditha Dam, and December 2018 for the Mosul Dam. After the image selection and the generation of the NDWI, the annual composite was created by taking the median of the NDWI values at each pixel through the collection of NDWI images in each year interval [71] (p. 158). The same procedure was applied for the generation of the 2018 monthly composites. One NDWI composite was also generated for the year before the construction of each dam. The Hamrin Dam was completed before the first Landsat 5 acquisition (1984); therefore, its pre-flooding NDWI image was generated using a Landsat 2 image. Given that the SWIR band was not present on the Multispectral Scanner System (MSS) sensor of Landsat 2, the NIR band (Band 7) was used instead, and the image underwent a transformation from digital numbers (DN) to reflectance values (the process for the Landsat 2 image was compiled in a separate script). The low cloud filter and the cloud masking function were applied to minimize potential impacts from the atmosphere [74] (see the Discussion, Section 4), while the median usually helps in removing noise and minimizing the effects of clouds and shadows [53]. In R, the process used in GEE was carried out using the RGEE package [75]. After obtaining all the processed images needed, the subsequent analyses were carried out in QGIS.
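The scripts actually used are those linked in the Supplementary Materials and in the GEE repository; as a rough illustration of the processing described above, the sketch below uses the GEE Python API to build an annual median MNDWI composite. The collection ID, band names, cloud-cover threshold, and bounding box are assumptions made only for this example (the Landsat 8 Collection 1 TOA dataset shown here has since been superseded by Collection 2), and the cloud-masking function of the original script is omitted.

```python
import ee

ee.Initialize()

# Rough bounding box around the Mosul Dam lake (illustrative only).
aoi = ee.Geometry.Rectangle([42.3, 36.5, 43.1, 37.1])

def mndwi(image):
    # Landsat 8 TOA band names: B3 = green, B6 = SWIR1 (Equation (2)).
    return image.normalizedDifference(['B3', 'B6']).rename('MNDWI')

def annual_composite(year, max_cloud=20):
    """Median MNDWI composite for one year, filtered by scene cloud cover."""
    collection = (ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA')
                  .filterBounds(aoi)
                  .filterDate(f'{year}-01-01', f'{year}-12-31')
                  .filter(ee.Filter.lt('CLOUD_COVER', max_cloud))
                  .map(mndwi))
    return collection.median().clip(aoi)

composite_2018 = annual_composite(2018)
task = ee.batch.Export.image.toDrive(image=composite_2018,
                                     description='mndwi_2018',
                                     region=aoi,
                                     scale=30)
task.start()
```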
Pixel Analysis and Zonal Histogram
In QGIS (versions 3.10/3.16), the pixel analysis was carried out using Saga's Add Raster Values to Points algorithm. This algorithm allows for the retrieval of pixel values at each point location (i.e., the archaeological sites) across multiple selected raster images. This process was applied to the archaeological site shapefiles using the annual NDWI composites, and then repeated with the 2018 monthly composites.
The algorithm generates a new shapefile with the pixel values at each point location in all the underlying images. In the resulting shapefile, using 0 as a threshold between water and land, if a point has a pixel value below 0 in all the images, it can be assumed that the area was likely never submerged and therefore not affected by the dynamics of water retraction, while if the values are always above 0, the area was always submerged. Furthermore, if the point shows alternating values above and below 0, it can be assumed that part of the site re-emerged at least once due to the lowering of the water level. Although it can be argued that a single point is not very representative of the entire area of an archaeological site, it is suitable for giving a first idea of the dynamics affecting a site, because it returns an unambiguous and quantifiable value. The point was usually placed at the highest point of a site, which is the part most likely to emerge after a change in the water level. Moreover, once a value below 0 is registered, a visual inspection of the area around the site using satellite images can help validate the results and assess the extent of the emersion.
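The same point-sampling step can also be reproduced outside QGIS. The sketch below is a hedged Python equivalent of the procedure described above, using rasterio and GeoPandas; the file names are hypothetical, and the point layer is assumed to share the coordinate reference system of the NDWI composites.

```python
import glob
import rasterio
import geopandas as gpd

# Hypothetical inputs: site points and one NDWI composite per year.
sites = gpd.read_file('sites_haditha.gpkg')
coords = [(pt.x, pt.y) for pt in sites.geometry]

for path in sorted(glob.glob('ndwi_annual_*.tif')):
    year = path.split('_')[-1].split('.')[0]
    with rasterio.open(path) as src:
        values = [v[0] for v in src.sample(coords)]   # NDWI value at each site point
    sites[f'ndwi_{year}'] = values
    # Negative value -> land (site likely emerged); positive -> submerged.
    sites[f'emer_{year}'] = [v < 0 for v in values]

emersion_cols = [c for c in sites.columns if c.startswith('emer_')]
sites['n_emersions'] = sites[emersion_cols].sum(axis=1)
sites.to_file('sites_haditha_ndwi.gpkg', driver='GPKG')
```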
The next step was to assess to what extent the sites were impacted by the water retraction, both over the long term and during the sample year of 2018. The NDWI images were reclassified so that all the pixel values between −1 and 0 were counted as 0, and all the values between 0 and 1 were counted as 1. The reclassification allowed for the easy application of the Zonal Histogram algorithm to count the number of 1 and 0 pixels within each polygon representing site extents. In R, the pixel analysis was carried out using the raster package [76], while the qgisprocess package was used for the zonal histogram analysis. This last package is still in an experimental phase, but it is available at https://github.com/paleolimbot/qgisprocess (last accessed on 25 January 2021).
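The reclassification and the per-polygon pixel count can likewise be sketched in Python: the example below reproduces the 0/1 reclassification described above and uses zonal_stats from the rasterstats package as a stand-in for QGIS's Zonal Histogram. File names are hypothetical, and NoData pixels are simply treated as land here.

```python
import numpy as np
import rasterio
import geopandas as gpd
from rasterstats import zonal_stats

in_path, out_path = 'ndwi_annual_2018.tif', 'ndwi_annual_2018_binary.tif'

# Reclassify: NDWI <= 0 -> 0 (land), NDWI > 0 -> 1 (water).
with rasterio.open(in_path) as src:
    ndwi = src.read(1)
    profile = src.profile
binary = (ndwi > 0).astype(np.uint8)
profile.update(dtype=rasterio.uint8, count=1, nodata=None)
with rasterio.open(out_path, 'w', **profile) as dst:
    dst.write(binary, 1)

# Count land (0) and water (1) pixels inside each site-extent polygon.
polygons = gpd.read_file('site_extents_haditha.gpkg')
stats = zonal_stats(polygons, out_path, categorical=True)
pct_emerged = []
for s in stats:
    land, water = s.get(0, 0), s.get(1, 0)
    total = land + water
    pct_emerged.append(100 * land / total if total else None)
polygons['pct_emerged'] = pct_emerged
print(polygons[['pct_emerged']].head())
```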
Results
Results show that a larger number of sites were affected by the cycles of water retraction than just the two presented at the beginning of this paper (Figure 8). The overall number of sites for which a negative pixel value was registered at least once during the time span analysed was 73 ( Figure 9). Most of these sites were located at the edges of the three reservoirs; however, some differences can be highlighted for each area, especially regarding the number of sites that emerged per year.
Haditha Dam
Of the 54 sites analysed for this region, 27 sites registered a negative pixel value at least once from their submersion in 1985, which means that it is likely that they have been affected by the dynamics of water retraction. Most of these sites are located in the western part of the reservoir, but others were also evident further to the east ( Figure 10 and Figure S2). Following what was already known about the years with lower water level (Section 2.1.1), 2009, 2010 and 2015 had the highest number of resurfaced sites, with 25, 13 and 24 sites, respectively (Figure 11a-c). Another significant year was 2018, when 17 sites resurfaced due to another water lowering event (Figure 11d). On the contrary, other years in which the lake was known to have suffered a retraction (see Section 2.1.1) showed only a limited impact on archaeological sites: in fact, in 1991, four resurfaced sites were registered, while five resurfaced in 2001.
In general, while years with exceptionally low water levels have a high number of resurfaced sites, some archaeological sites are more prone to emerge even if the water level is not low (Figure 12). Among these, there is Kifrin, which emerged from the water 11 different times. Analysis of the resurfaced extent of the site showed that, even when the water level was not extremely low (e.g., 1999, 2001), more than 80% of the site was out of the water (Figure 13). Other interesting results come from the site of Sur Telbis, an important Iron Age and probably Parthian site [10] (pp. 356-358), which was out of the water four times, in 2009-2010, 2015 and 2018. A high-resolution image in Bing Maps, likely from 2011, shows that it suffered significant erosion on the river side, and that only the highest part of it survived, probably because it was never directly reached by the water (Figure 14). This is confirmed by the analysis on the polygon shapefiles, which shows how the emerged extent of the site changed every year, suggesting once again substantial erosion damage. Another interesting result is visible for the Iron Age site of Glai'a [10] (pp. 166-180). It is located in the middle of the reservoir and was recorded as being out of the water three times, in 2009, 2015 and 2018 (Figure 15). The images from 2018 show that the south-eastern portion of the site was visible, together with its double wall. Given its position in the middle of the reservoir, the site rarely experienced significant emersions; however, in 2009, 72% of the site was out of the water (see also Section 3.4).
Mosul Dam
Compared to the Haditha Dam, fewer sites in the Mosul Dam region were affected by the water-level reduction cycles. In fact, in the Mosul Dam area, 16 sites emerged during the chosen time span, while 23 do not show any sign of resurfacing (Figure 9). Due to the shape of the lake, the sites that emerged are more evenly distributed than in the Haditha Dam reservoir; they were found on both the eastern and western parts of the artificial lake (Figure 16 and Figure S3). A significant number of settlements resurfaced in years similar to those of the Haditha Dam, for example 2009, 2011 and 2015 (with 9, 16 and 12 sites, respectively), but, as before, it is possible to add the year 2018 as well, when 16 sites emerged from the ebbing waters (Figure 17a-d). One significant difference with respect to the Haditha reservoir is that archaeological sites at the edges of the Mosul Dam registered a negative pixel value more often than the sites in the former dam. The result is that some of them were probably above the water level more often than they were below it (Figure S3). Among these sites are Ger Matbak, Khirbet Hatara, and Tell Baqaq 1 (26, 20, and 31 times, respectively).
For Ger Matbak, the polygon analysis showed that it was rarely completely submerged, and that even years with moderate water levels (e.g., 1998 and 2002) saw 60% of the site emerge (Figures 18 and 19). Moreover, this latter analysis showed that the entirety of the site was out of the water for four consecutive years (2014-2018). The area of the Kemune site, mentioned above (Section 1), was recorded as being above the water level four times during the 35 years of analysis. It is possible that the exceptional preservation state of the site is due to the small number of times it was subject to erosion by the ebbing waters (see Section 4). A note on this site is necessary, however: no coordinates were available at the time of the analysis; therefore, its position was assessed from the published photographs, the available satellite images, and the dataset published in Sconzo; Simi, 2020 [60]. It is likely that the pixel values and the respective emersion counts will differ with more precise coordinates. The same can be said about other archaeological sites and the general results for the region, which might change with a more accurate and extended dataset. Other examples of emerging sites come from settlements such as Khirbet Kharasan (9 times) and Tell Dhuwaij (12 times). In contrast to Ger Matbak, Khirbet Kharasan emerged substantially only during significant reductions in the water levels; its resurfaced extent was around 80% on only two occasions, in 2011 and 2018 (Figures 18 and 20).
Hamrin Dam
This area shows quite different results with respect to the other two reservoirs; while the Haditha and Mosul reservoirs maintained their overall shape even in years with very low water levels, the Hamrin lake was subject to substantial shape changes. For this reason, the number of sites resurfacing from the lake is very high when compared to the overall number of settlements in the region used in this analysis. Of the 42 sites analysed, 12 had never been affected by the artificial lake formation, while 30 had resurfaced at least once, meaning that every site in this area registered a negative pixel value at least once (Figure 9). During the drastic change in water level in 2008 (see Section 2.1.3), even the sites in the middle of the reservoir were above the water level. Other periods with a significant number of sites emerging from the reservoir were registered in 2000 (23 sites) and in 2015 (17 sites); in 2018, the water level still impacted the area, but it was not as low as in the other years, which in turn resulted in fewer resurfaced sites (12) (Figures 21 and 22). Many sites emerged frequently (Figure S4), such as Tell Yelkhi (32 times), or the group of settlements around Tell Razuk, which seem to have been out of the water more often than they were submerged (more than 28 times). Other settlements resurfaced less often, e.g., Tell Zubeidi (13 times) or the group of sites around Tell Baradan: here, Tell Hadad and Tulul es-Sib emerged 13 and 15 times, respectively, while Tell Baradan itself resurfaced 31 times. The polygon analysis showed that both Tell Yelkhi and Tell Baradan were affected by most of the fluctuations of the water level, and that they could have most of their surface exposed in one year while being almost completely submerged the next, or vice versa. This happened, for example, from 1998 to 1999, or again in 2006 and 2007; moreover, it also means that they probably suffered significant erosion damage (Figures 23-25). Regarding the other sites mentioned above, Tell Razuk rarely showed a sharp drop in its emerged extent, and Tell Hadad showed a pattern similar to that of Sur Telbis in the Haditha Dam, with a resurfaced area that changed almost every year, suggesting once again a significant erosion effect (Figure 23).
Interannual Variability-2018 Monthly Analysis
The analysis of the 2018 monthly NDWI images showed that interannual variability can have an impact on the extent of the resurfaced sites. In the Haditha and Mosul Dam reservoirs, 2018 was a year of particularly low water levels, and both reservoirs showed a comparable behaviour in the timing of their retractions (Figure 26a,b). During this year, high water levels were registered at the beginning of summer, in May and June, while the minimum levels were registered in October and November. The Hamrin Dam lake followed a similar trend, even if the period of lower water level began slightly earlier, in September (Figure 26c). Individual pixel values showed a large number of sites likely out of the water (i.e., with negative pixel values) for all three reservoirs. During October and November, the Haditha Dam registered 26 and 25 emerged sites, respectively, including 13 that showed a negative pixel value for the entire year. In the Mosul Dam lake, 19 of the 48 sites analysed were out of the water in both of those months, including four sites with a negative pixel value for the entire year. Unfortunately, December was a particularly cloudy month in the Mosul Dam region, which prevented the extraction of any pixel values for that month. In the Hamrin Dam region, the earlier water reduction resulted in September and October being the months with the largest numbers of resurfaced sites, 17 and 20, respectively, including eight sites that could be considered as out of the water for the entire year.
In terms of spatial distribution, the visible pattern is similar to that of the long-term analysis, with sites at the edges of the reservoirs being more prone to emerge. However, given the particularly low water level in the Haditha and Mosul Dams during this year, even sites closer to the central part of the reservoirs emerged at least once (Figures 27 and 28). A similar pattern is evident for the Hamrin Dam, which showed a spatial distribution of emerged sites close to that of some previous years of low water level, such as 2015 and 2010 (Figure 29). The results from the point pixel analysis can be integrated with the analysis of the resurfaced extent: a pattern of decreasing and increasing emerged area was indeed registered for many sites in all three dams from May to December 2018 (Tables 2-4). In the Haditha Dam reservoir, while many sites were almost completely out of the water during the entire year, it is possible to appreciate the progressive emerging and submerging of a site such as Glai'a. This site was mostly submerged during summer, but approximately 80% of it resurfaced during October-November (Figure 30j,k). A comparable behaviour was registered for Khirbet Kharasan in the Mosul Dam lake (>80% of the site emerged) (Figure 31). In the Hamrin Dam, sites such as Tell Yelkhi, Tell Khesaran and Tell Kharbud all followed a similar trend, with more than 85% of their extent emerged during September-November (Figure 32). Planetscope images, acquired for specific sites for each month in 2018, confirmed the results of the analysis; the progressive emergence of the sites was clearly visible on the high-resolution images.
Accuracy Assessment
Accuracy assessment for time-series analysis is notoriously difficult [77] (see also Congalton; Green, 2019 [78], p. 233), and in this case also complicated by the lack of field recording, the less-than-optimal coverage, and the still-prohibitive costs and political restrictions for obtaining high-resolution images from commercial satellites. For these reasons, accuracy assessment was carried out on pansharpened true-colour composites from the same satellites (a similar approach due to the same limitations was adopted by Rayne et al., 2020 [53]). While this is not always optimal, because it mostly assesses the accuracy of the reclassification for that single area [79], it was the best option especially for less recent years [78] (p.235) (see also Olofsson et al., 2014 [77], p. 14) and it also provides consistency between reclassified and reference data. Pansharpening is not possible on Landsat 5; therefore, validation has been carried out on 30 m true-colour composites (thus, validation results should be taken with care). For more recent years, when possible, high-resolution satellite images from Bing Maps or Google Earth were used. For the monthly composites, pansharpened composites and Planetscope images were used as reference data [53].
Given the number of images and the different sensors used in this research, validation was carried out on two images per sensor, one with a lower water level and one with a higher water level. Sampling was carried out at the per-pixel level using stratified random sampling with two classes: water and non-water pixels [77,78]. This method allows enough samples to be allocated to each class depending on its area proportion, thus accounting for the lake area variation between images. For the annual images, validation was carried out over the entire research area. When mixed pixels were detected, the reference class was assigned to either water or non-water depending on the most prevalent class inside the sample. For the monthly composites, validation was limited to the areas covered by the Planetscope data, at most around 20 km around some archaeological sites (Glai'a in the Haditha Dam, Khirbet Kharasan in the Mosul Dam, and Tell Yelkhi in the Hamrin Dam). All the validations were carried out in QGIS, using the Semi-Automatic Classification plugin [80]. Overall, the NDWI showed promising results for all the areas and satellites used (Table 5), as is usually expected for a simple water/non-water classification in areas with a clear separation between a lake and its surroundings [81]. Errors of omission were generally rare and were mostly tied to the medium resolution of the images used for the NDWI or to the surroundings of each lake. In fact, agricultural fields around the Hamrin Dam lake have increased in number over the last decade, and the NDWI from more recent Landsat or Sentinel-2 images did not identify the smaller canals. However, in the context of highlighting emerged areas of the lakes, these errors had a minimal impact on the results. This is especially evident when assessing the accuracy at a smaller scale for the monthly images, for which the omission errors of the Hamrin Dam were significantly lower. Errors of commission were slightly more frequent, but again mostly tied to the medium resolution of the images, which was sometimes unable to resolve mixed pixels in areas of finer sediments at the edges of the lakes. It should be mentioned that no built-up pixels were incorrectly registered as water, thus confirming that Xu's NDWI (Equation (2)) was the correct choice for these regions (see Section 2.4.1). Nonetheless, as shown in Fisher; Danaher, 2013 [81], commission errors can potentially be lowered by experimenting with different thresholds. However, this should be evaluated for each area in order to find the respective optimum threshold, which is beyond the scope of this paper.
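As a rough illustration of the validation logic described above (a stratified random sample allocated proportionally to class area, followed by an error matrix with overall, user's, and producer's accuracy), the snippet below works on synthetic water/non-water grids. The actual validation was carried out with the Semi-Automatic Classification plugin in QGIS; the arrays, sample size, and disagreement rate used here are hypothetical.

```python
import numpy as np

def stratified_sample(classified, n_samples=200, rng=None):
    """Pick pixel locations per class, proportionally to each class's area."""
    rng = rng if rng is not None else np.random.default_rng(42)
    picks = []
    classes, counts = np.unique(classified, return_counts=True)
    for cls, count in zip(classes, counts):
        n_cls = max(1, round(n_samples * count / classified.size))
        rows, cols = np.nonzero(classified == cls)
        idx = rng.choice(len(rows), size=min(n_cls, len(rows)), replace=False)
        picks += list(zip(rows[idx], cols[idx]))
    return picks

def error_matrix(classified, reference, samples):
    """2 x 2 confusion matrix for non-water (0) / water (1) at the sampled pixels."""
    m = np.zeros((2, 2), dtype=int)
    for r, c in samples:
        m[int(classified[r, c]), int(reference[r, c])] += 1
    overall = np.trace(m) / m.sum()
    users = np.diag(m) / m.sum(axis=1)       # 1 - user's accuracy = commission error
    producers = np.diag(m) / m.sum(axis=0)   # 1 - producer's accuracy = omission error
    return m, overall, users, producers

# Synthetic stand-ins for the reclassified NDWI map and the reference data.
rng = np.random.default_rng(0)
reference = (rng.random((100, 100)) > 0.7).astype(np.uint8)   # roughly 30% water
classified = reference.copy()
flip = rng.random((100, 100)) < 0.03                          # roughly 3% disagreement
classified[flip] = 1 - classified[flip]

samples = stratified_sample(classified, rng=rng)
matrix, overall, users, producers = error_matrix(classified, reference, samples)
print(matrix, overall, users, producers, sep='\n')
```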
Discussion
By including more than one case study, it was possible to see how the different dynamics of each lake affect archaeological sites. The Mosul and Haditha lakes are mostly similar in these terms, while the Hamrin lake is a peculiar case, much more unstable, and with sites that are likely to emerge more often than in the other two areas.
The results showed that the cyclical water retraction is indeed affecting a significant number of archaeological sites. This number might even be higher if we consider that older archaeological surveys suffered from visibility issues and methodological limits, and that this research had access only to a limited dataset (see Section 2.2).
The analysis shows how the sites in each area can be divided into:
• Sites that emerged cyclically from the waters;
• Sites that were never affected by the reservoir;
• Sites that were always submerged during the observed time span.
Among the first category of sites, the one of interest for this research, the pixel analysis was able to identify those that emerged more often. Usually, these sites are found at the edges of the reservoirs and are therefore impacted by any water level change; this is the case for sites such as Kifrin in the Haditha Dam, Khirbet Kharasan in the Mosul Dam, and Tell Zubeidi in the Hamrin Dam. On the other hand, drastic and substantial water level changes can directly affect inner sites, which can emerge, partially or entirely, from the waters even for a prolonged time. This is the case for sites such as Glai'a in the Haditha Dam, Jamrash and Kemune in the Mosul Dam, and Tell az Zawyeh in the Hamrin Dam.
There are, however, some considerations to make about the results presented above, tied to the limits of a remote sensing approach.
Firstly, even if an archaeological site is shown to be above the water level, access to it might still be problematic: the surrounding environment might still present marshes or unstable sediment loads, or the surrounding area might still be partly submerged. This is the case for sites such as Tell Yelkhi (Figure 32).
The current state of an archaeological site after its emersion will also be tied to its characteristics before the formation of the lakes. For example, a site characterised by a surface scatter of archaeological material will have fewer chances to survive than a site with standing architectural elements (even if artefacts might be lost in this case as well).
The dynamics of the receding waters also have an impact on the preservation of a site. In fact, frequent emersions and submersions might damage a site more than its long submersion, through processes of erosion, particularly dangerous for sites with subtle archaeological features [22] (p. 72). This effectively means that, even if a site emerges from the water, its archaeological features might not have survived.
A visual inspection of the results is always advisable, because the pixel analysis might return negative values (i.e., land masses) for areas with high sediment loads. This is not a classification error, because the NDWI correctly identifies sediments as non-water pixels, but it is something to consider when planning a field inspection. This is evident for the Island of 'Ana, a multiperiod archaeological site in the Haditha Dam reservoir [82]. After a visual inspection of the year 2018 (Figure 33), the site showed a large meander bar (covering more than the extent of the original island) formed by the accumulation of sediments along the river, which was channelled to the north and east. Another limit of the single-pixel analysis is that it might return different values from the approach using polygons. In fact, the analysis of the resurfaced extent of a site is, of course, more accurate in determining whether a site is out of the water, because a single pixel might indicate a submerged site when only a small part of it has emerged. However, the zonal histogram analysis is limited by the difficulties of determining the extent of archaeological sites both from remote sensing and on the ground (Section 2.2), and this can reduce the sample significantly. Both analyses are also influenced by the fact that the images used are monthly or annual composites, and many variations might be observed with shorter temporal intervals, as shown by the 2018 monthly analysis.
Of course, any accuracy in the measurements is tied to the precision with which it is possible to georeference archaeological sites in GIS. While extreme care was taken during this process (see Section 2.2), by refining the dataset and including polygons only under specific criteria, the impact of any inaccuracy should be acknowledged. This will manifest mainly in the pixel analysis, because if the actual position of a site is different, the result will change as well. Accurate field data, coupled with a thorough analysis of remote sensing or cartographic data, would mitigate any inherent inaccuracy in the data.
Another element to consider is the possible atmospheric impact on the results. All the images used to generate the NDWI were acquired as TOA reflectance, meaning that they have not been atmospherically corrected. It must be said that the use of ratios (e.g., the NDWI) can already lower the atmospheric impact [83-86] (see also Lillesand et al., 2015 [79], pp. 518-522). Applying a low cloud filter to generate cloudless images and a cloud masking function also minimises these effects [74,87]. Nonetheless, the atmospheric impact should not be underestimated, because the two products will show a slight difference in pixel values (atmospheric correction usually lowers reflectance [88-90]). It should be mentioned that, if applied to the identification of archaeological remains, Bottom of Atmosphere (BOA) reflectance data are, however, highly recommended [91-93]. Future studies focused on single areas might use atmospherically corrected data and compare them with TOA data results to see which works best, especially when reference ground data are not available.
Lastly, although it could be argued that the medium resolution of the images used could have a negative impact on the classification, the results highlighted above show that Sentinel-2- and Landsat-generated NDWI can still be used with a high degree of reliability. Of course, accuracy will always be tied to the resolution of the reclassified and reference images, and higher-resolution data could always improve the results. However, as has already been shown, when these are not available, testing different thresholds can strengthen the results from medium-resolution images and eliminate noise from other ground sources caused by the surrounding context of each area [51,67].
Conclusions
Spectral indexes applied to medium-resolution satellite images over a wide time frame can help in monitoring the effects of receding waters on archaeological sites. The NDWI can usefully distinguish between water and land surfaces, allowing us to understand whether a specific area is out of the water or not. By applying the index to different images over time and quantifying the results of the pixel analysis, it is possible to highlight years with very low or high water levels, with results partially comparable with the Global Surface Water (GSW) dataset [94] (Figure 3e), but also to identify the sites that resurfaced more often than others during the considered time frame. Knowing which sites are more likely to emerge can prove useful, because it is likely that these sites will resurface again if there is a decrease in the water level of the reservoirs. This, in turn, can help in planning future targeted investigations. In fact, by knowing in advance which sites are more likely to emerge from the ebbing waters, salvage surveys, targeted excavations, or assessments of a site's condition can have a more precise lead. On the other hand, knowing the number of emersions and coupling this information with the site status before its flooding (e.g., the type of archaeological evidence recorded) will help in a preliminary assessment of the site status before any future investigation.
The present methods and results, together with the differences between the single-pixel and the zonal histogram analyses, also highlight the importance of accurate field recording of the extent of archaeological materials, and the need for more accurate and more readily available geospatial data for archaeological sites. Moreover, field campaigns will help not only cultural heritage management, but also the validation of the results.
In the end, the methodology presented here has the advantage of being easy to reproduce and apply to different areas, and of being easily accessible, since it is based on free software and easy-to-access data, if we exclude the archaeological site data. In fact, the NDWI reclassification and analysis is faster and easier to reproduce than more in-depth methodologies such as the GSW, while still yielding promising results. Moreover, this approach can be a starting point to address a hitherto missing element of many salvage projects, i.e., the evaluation of the post-flooding impact of reservoirs on cultural heritage. In fact, apart from a few examples that inspired the development of this methodology [22,23], submerged sites have mostly been considered lost. However, it is clear that, at least for the lakes presented in this study, this may not always be the case.
Lastly, it has already been highlighted how the costs, availability, and ease of access of commercial satellite images for the Near East are still a major hindrance for any research aimed at monitoring archaeological cultural heritage, and how archaeologists have had to cope with these restrictions in different ways (see, e.g., Rayne et al., 2020 [53] and the related bibliography). In this sense, the frequent availability of Sentinel-2 images and the possibility of monitoring archaeological sites at short intervals prove that these satellite images are, and will remain, among the most useful free tools in any future remote sensing application.
Supplementary Materials:
The following are available online at https://www.mdpi.com/2072-4292/13/4/786/s1, Figure S1: Temporal coverage of satellite images used in the analysis; Figure S2: Emersion rate of archaeological sites in the Haditha Dam; Figure S3: Emersion rate of archaeological sites in the Mosul Dam; Figure S4: Emersion rate of archaeological sites in the Hamrin Dam. Code S1: Google Earth Engine code for the generation of the NDWI composites, QGIS models and scripts, and R code for image generation, pixel and zonal histogram analysis are available on GitHub: https://github.com/andreatitolo/IraqEmerginSites, and archived on Zenodo (doi:10.5281/zenodo.4446664). NDWI composites, due to their size, are available on figshare (links for the annual and monthly NDWI composites are provided in the GitHub repository). Code S2: Link to the Google Earth Engine repository: https://code.earthengine.google.com/?accept_repo=users/sapienza_at/IraqEmergingSites.
Data Availability Statement:
The data presented in this study are openly available on GitHub: https://github.com/andreatitolo/IraqEmerginSites, and archived on Zenodo (doi:10.5281/zenodo.4446664). NDWI composites, due to their size, are available on figshare (links for the annual and monthly NDWI composites are provided in the GitHub repository).
| 2021-03-23T13:13:45.032Z | 2021-02-21T00:00:00.000 | {
"year": 2021,
"sha1": "7eb25b0839248099e806f2c612c233f542d51be8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/13/4/786/pdf?version=1614320852",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7372c1d567252f33af0d5d32fb9625ce5088104f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
8351892 | pes2o/s2orc | v3-fos-license | High Sensitivity Detection of CdSe/ZnS Quantum Dot-Labeled DNA Based on N-type Porous Silicon Microcavities
N-type macroporous silicon microcavity structures were prepared by electrochemical etching in an HF solution in the absence of light and oxidants. CdSe/ZnS water-soluble quantum dot-labeled DNA target molecules were detected by monitoring the microcavity reflectance spectrum, in which the position of the defect-state dip shifts as the effective refractive index of the structure changes. Quantum dots have a high refractive index, so coupling them to DNA improves the detection sensitivity by amplifying the optical response signal of the target DNA. The experimental results show that DNA combined with quantum dots can improve the sensitivity of DNA detection by more than five times.
Introduction
It is advantageous to use porous silicon because of its simple preparation process; the pore size, pore density, and porous silicon layer thickness can be controlled by changing the electrochemical corrosion conditions (e.g., the silicon wafer type, the doping, the electrolyte ratio, and the corrosion current density) [1][2][3]. A large specific surface area can adsorb large amounts of chemical and biological molecules. These properties make porous silicon a good candidate for highly sensitive, label-free optical biochemical sensors [4,5]. The combination of biomolecules and porous silicon results in a change of the effective refractive index. By observing the effect of this change on the reflectance spectrum (such as a shift of the monolayer porous silicon interference peak [6,7], of the Bragg reflector center position [8], or of the microcavity defect state [9,10]), it is possible to detect biochemical molecules.
Quantum dots (QDs) are mostly used as high-sensitivity fluorescent molecular labels because of their high luminescence quality, good monochromaticity, surface functionalization, and tunable photoluminescence range [11,12]. Nanoparticles such as Au or QDs were attached to the DNA before the DNA was hybridized with the complementary DNA for optical sensing [13,14]. Many optical methods are used to observe the changes caused by the addition of nanoparticles to the DNA, such as light scattering, SPR, fluorescence, and SERS [15]. Moreover, the high refractive index of QDs can increase the effective refractive index of QD-labeled molecules while making them luminescent. This causes a greater reflectance spectrum shift, improving the sensitivity of the sensor. Girija Gaur et al. utilized reflectance spectrum shifts to detect QD-labeled small molecules of biotin in a porous silicon monolayer [16]. The result was a six-fold increase in detection sensitivity compared to that of unlabeled targets. We know that a porous silicon microcavity (PSM) is more sensitive to an effective refractive index change due to the sharp dip in the reflectance spectrum at the defect state. In addition, the pore size is an important factor that must be considered in sensing. The pore size of P-type porous silicon ranges from 10 nm to 30 nm [17], the size of a QD is about 4 nm, and a 20-base DNA molecule is about 6 nm in length. Note that the size of the DNA molecule will increase after quantum dot labeling. A small ratio between pore size and molecule size will cause the QD-labeled DNA to plug the pore and prevent other biological molecules from entering the microcavity [18]. In comparison, N-type silicon pores are generally much larger (50 nm-120 µm) [19,20].
In this paper, N-type porous silicon microcavity structures are used. Carboxyl-modified CdSe/ZnS water-soluble QDs are coupled with DNA molecules; the QDs are used to amplify the effective refractive index change.
The schematic of the sensing principle is shown in Figure 1. After hybridization between the complementary DNA and the probe DNA in the PSM structure, the effective refractive index of the structure changes, and a red shift can be observed in the reflectance spectrum. When QDs are attached to the target DNA through a series of chemical coupling steps, the effective refractive index of the QD-DNA conjugate increases significantly because of the high refractive index of the quantum dots. The red shift in the reflectance spectrum therefore increases compared with that of the unmodified DNA; the QDs act to amplify the spectral response signal. This method increases the sensitivity of detecting DNA hybridization.
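To make the link between effective refractive index and defect-state position concrete, the sketch below uses a simple transfer-matrix model of a quarter-wave DBR / half-wave defect / DBR stack at normal incidence and shows how a small, uniform increase in the layer indices moves the defect dip toward longer wavelengths. The layer and substrate indices, the number of periods, and the index increments are illustrative assumptions only, not the measured parameters of the fabricated PSM.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(layers, lam, n_in=1.0, n_sub=3.5):
    """Reflectance of a stack [(n, d), ...] between air and the silicon substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    num = n_in * M[0, 0] + n_in * n_sub * M[0, 1] - M[1, 0] - n_sub * M[1, 1]
    den = n_in * M[0, 0] + n_in * n_sub * M[0, 1] + M[1, 0] + n_sub * M[1, 1]
    return abs(num / den) ** 2

def microcavity(n_hi=2.1, n_lo=1.6, lam0=650.0, periods=4, dn=0.0):
    """Quarter-wave DBR / half-wave defect / DBR; dn mimics molecules raising n_eff."""
    d_hi, d_lo = lam0 / (4 * n_hi), lam0 / (4 * n_lo)   # physical thicknesses fixed by etching
    mirror = [(n_hi + dn, d_hi), (n_lo + dn, d_lo)] * periods
    defect = [(n_lo + dn, lam0 / (2 * n_lo))]
    return mirror + defect + mirror[::-1]

wavelengths = np.linspace(620, 680, 2401)        # 0.025 nm step around the defect dip
for dn in (0.000, 0.005, 0.010):                 # hypothetical effective-index increases
    stack = microcavity(dn=dn)
    R = np.array([reflectance(stack, lam) for lam in wavelengths])
    dip = wavelengths[int(np.argmin(R))]
    print(f"delta_n = {dn:.3f} -> defect dip near {dip:.2f} nm")
```

In a real device the attached molecules change the effective index of each porous layer through its internal surface rather than uniformly, but the qualitative behaviour, a red shift that grows with the effective-index increase, is the same.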
Field emission scanning electron microscopy (FESEM, ZEISS SUPRA 55VP) was used to characterize the porous silicon surface and cross section. Transmission electron microscopy (TEM) images were acquired using a 200 kV field emission transmission electron microscope (JEM-2100F). The absorption spectra were measured using a NanoDrop 2000 UV-Vis spectrophotometer (Thermo Scientific, Waltham, MA, USA). The fluorescence emission spectra were measured using a Hitachi UV-Vis spectrofluorometer (Hitachi F-4600, Tokyo, Japan). A Hitachi UV-Vis spectrophotometer (Hitachi U-4100, Tokyo, Japan) was used to measure the sample reflectance spectra.
Porous Silicon Microcavity Preparation
N-type heavily doped silicon wafers with 0.01 ohm·cm to 0.02 ohm·cm resistivity and (100) crystal orientation were diced into 1 cm² pieces. These pieces were subsequently cleaned using acetone, ethanol, and deionized water for 10 min each, and then dried in air. The samples were etched in a single cell under constant current, using a mixture of hydrofluoric acid and water; the HF acid concentration was roughly 5% (25 mL 40% HF: 200 mL water). The corrosion current density cycle was 14 mA for 4.5 s and 28 mA for 4 s, with an interval of 5 s. The defect layer was etched using a current density of 14 mA for 9 s. Note that the N-type silicon pore size is relatively large; the pores etched using corrosion currents of 14 mA and 28 mA are shown in Figure 2a,b, respectively. The corresponding pore sizes are roughly 30-40 nm and 40-50 nm. The pore diameters are large enough for the biomolecules and the quantum dots to enter the porous silicon. The pores are round, with relatively thick walls, as shown in Figure 2a. In Figure 2b, the pore shapes are irregular with thin walls. Figure 2c shows a cross-sectional view of the PSM. The defect layer is in the middle, with eight upper and eight lower layers. Figure 2d shows an enlarged view of the cross section.
Quantum Dot Coupled with DNA
A 50 µL CdSe/ZnS carboxyl-modified water-soluble quantum dot solution (8 µM concentration) was diluted with PBS to 1 µM. Next, 40 µL of EDC (0.01 M concentration) and 40 µL of Sul-NHS (0.01 M concentration) were added to the reaction mixture. After reacting for 10 min, 50 µL of amino-modified DNA (40 µM concentration) was added, and the reaction vessel was shaken gently for 10 h in the dark at room temperature. Centrifugation can remove the excess DNA because of the solubility difference between free DNA and the carboxyl-modified QDs: the QD-DNA conjugates separate because the many unlinked carboxyl moieties on the QD surface make them less soluble than DNA, and the QD-DNA conjugate is also heavier than free DNA, which remains in the solution. After centrifugation at 10,000 rpm for 10 min, the supernatant was removed and the precipitate was diluted with PBS into DNA solutions of different concentrations and stored at 4 °C. The fluorescence emission spectra of the quantum dot and quantum dot-DNA solutions were measured from 100 µL samples using an excitation wavelength of 370 nm, an excitation voltage of 400 V, and a slit width of 10 nm. The fluorescence and absorption spectra of the QDs before and after DNA coupling are shown in Figure 3. The emission peak of the carboxyl-modified CdSe/ZnS water-soluble quantum dots is located at 528 nm. Note that the peak shifted to 530 nm after the quantum dots were coupled with DNA. The absorption spectrum shows an additional absorption peak at 260 nm after the QDs were combined with DNA. Thus, both the fluorescence and absorption spectra indicate that the quantum dot-DNA conjugation was successful. Quantum dot labeling of the DNA has little effect on the overall photoluminescence intensity.
PSM Functionalization
To immobilize the target biological molecules to the porous silicon surface, the PSM requires a series of functional processes to stabilize the porous silicon surface and modify the pore surface functional groups. First, the freshly-prepared PSM was oxidized; the samples were heated to 500 °C for 30 min in an oxidizing furnace. Next, they were soaked in 5% APTES for 1 h, rinsed with deionized water, dried in air, and incubated at 100 °C for 10 min. After silanization, the amino groups were immobilized to the pore surface. The samples were immersed in a 2.5% glutaraldehyde solution for 1 h, then rinsed with PBS (pH 7.4) and dried in air to modify the surface of the porous silicon with an aldehyde group. The aldehyde group can be coupled with amino-modified DNA.
DNA Probe Connected to Porous Silicon Microcavity
A 50 µL DNA probe solution (10 µM concentration) was dropped onto the functionalized PSM sample using a pipette. After 2 h at 37 °C, the samples were rinsed with PBS to remove excess DNA. The excess/unreacted aldehyde functional groups were then blocked with EA (3 M in HEPES buffer, pH 9.0) incubated at 37 °C for 1 h. Finally, the samples were rinsed with PBS and dried in air.
Target DNA Detection
Fifty microliters of target DNA labeled with QDs at different concentrations were attached to the PSM samples. The samples were incubated at 37 °C for 2 h to allow DNA to hybridize. Afterwards, the samples were rinsed with PBS to wash away the unconnected target DNA and QDs, and then dried in air.
The sample reflectance spectra were measured after each step of sample functionalization (oxidation, silanization, GA), after probe attachment, and after hybridization of the DNA probe with QD-DNA.
Results and Discussion
The defect state in the high reflection region of the PSM reflectance spectrum is near 650 nm. When the sample was oxidized, silanized, and functionalized with glutaraldehyde, and the probe DNA was hybridized with complementary target QD-DNA, the reflectance spectrum of the PSM showed a regular red shift. Figure 4 shows the reflectance spectrum shifts. The reflectance spectrum shift reflects the PSM refractive index changes that result from the functionalization and DNA attachment. When a small molecule is grafted to the porous silicon surface, the effective refractive index of the porous silicon increases, and the reflectance spectrum shifts to the red. The amount of red shift and the refractive index change have a linear relationship over a certain range. The relationship between the target DNA molecules at different concentrations and the reflectance spectrum red shift is shown in Figure 5.
The corresponding red shifts were 14 nm, 18 nm, and 23 nm, respectively. It can be seen from the figure that the QD-DNA red shift is larger than that of the control DNA at the same concentration. As shown in Figure 4, the red shift of the 0.1 µM QD-DNA at point a is equivalent to the red shift of the control DNA at a concentration of 0.5 µM (point a′). The red shift of the 0.5 µM QD-DNA at point b is similar to the red shift of the 2.5 µM control DNA (point b′). Finally, the red shift of the 1 µM QD-DNA at point c is slightly larger than that of the 5 µM control DNA at point c′. That is to say, the detection sensitivity of QD-DNA is roughly five times that of conventional DNA detection. As can be seen from Figure 4, the slope of the QD-DNA data points at concentrations below 0.5 µM (14.34 nm/µM) is greater than the slope of the control DNA data points (2.67 nm/µM) over the concentration range from 0.5 µM to 5 µM. Therefore, the QD-DNA detection sensitivity is at least five times higher than that of conventional DNA detection.
When QD-DNA is chosen as the target molecule for the reflectance spectrum shift, the sensitivity is improved due to the high refractive index of QDs. The QDs and DNA molecules are bonded together to increase the effective refractive index of the DNA molecules, which are then hybridized with the DNA probe in the PSM. This increases the shift of the PSM effective refractive index. To further improve the DNA detection sensitivity using QDs as a marker, one should optimize the ratio of QDs to DNA. Table 1 shows the red shifts for two different ratios of QDs to DNA, 1:20 and 1:5, at different concentrations. The 1:20 ratio exhibits a 7 nm and a 12 nm red shift at 0.1 µM and 0.5 µM DNA concentrations, respectively. The red shift for the 1:5 ratio is clearly larger than that for 1:20. However, the red shift for 0.5 µM control DNA is 10 nm, so the detection sensitivity for 1:20 is not significantly improved. The detection sensitivity increased appreciably for the 1:5 ratio, indicating that the ratio of quantum dots to DNA has an impact on the detection sensitivity. The TEM image (Figure 6a) shows that the QDs are about 4 nm, which is appropriate for our porous silicon pores. The refractive index of a QD is related to its size: generally, the larger the QD, the larger the refractive index. A QD with a large refractive index has a stronger spectral response, so the sensitivity scales with the QD size. However, the particle size also affects whether the particles can enter the porous silicon pores. The PSM detection sensitivity will be greatly reduced when the QDs cannot enter the pores, so an appropriate size is an important factor for increasing the sensitivity. The absorption spectrum (Figure 6b) shows a peak located at 525 nm and the emission spectrum (Figure 6b) shows a peak located at 500 nm. Our reflectance spectrum (Figure 4) shows that the dip is located at 650 nm, far away from the emission and absorption peaks. Furthermore, the reflected light intensity is much stronger than the QD emission and absorption. Therefore, the QD absorption and emission have almost no impact on the sensing sensitivity. At present, the detection limit of the latest porous silicon optical biosensors is about 40 nM [21,22]. In this paper, the sensor detection limit is also at this level when using the PSM to detect the control DNA without quantum dots. However, the QD-labeled DNA detection sensitivity is five times higher than that of the conventional method. Using the QD-labeled DNA strand as an analyte has its own drawbacks; the method is more complicated and interferes with the target DNA more than the label-free detection method. Nevertheless, the detection limit of this method is lower (0.1 nm / 14.34 nm/µM ≈ 6.97 nM), and the sensitivity of this method is much higher.
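The sensitivity comparison above is simple arithmetic on the fitted slopes; the short sketch below (using the slopes quoted in the text and assuming a 0.1 nm resolvable red shift, as above) reproduces the improvement factor and the detection-limit estimate.

# Sensitivity and detection-limit arithmetic for the PSM DNA sensor,
# using the slopes quoted in the text (illustrative only).
slope_qd_dna  = 14.34   # nm per uM, QD-labeled DNA (below 0.5 uM)
slope_control = 2.67    # nm per uM, unlabeled control DNA (0.5-5 uM)
resolution_nm = 0.1     # assumed smallest resolvable red shift

improvement = slope_qd_dna / slope_control
detection_limit_nM = resolution_nm / slope_qd_dna * 1e3   # uM -> nM

print(f"sensitivity improvement ~ {improvement:.1f}x")     # ~5.4x
print(f"detection limit ~ {detection_limit_nM:.2f} nM")    # ~6.97 nM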
Conclusions
In this work, we fabricated N-type macro-porous silicon microcavities and prepared QD-labeled DNA with carboxyl-modified water-soluble CdSe/ZnS QDs. By increasing the pore size, more quantum dot-labeled DNA molecules can be bonded within the microcavity pores. The effective refractive index is increased by combining the target DNA with the high-refractive-index QDs; thus, the PSM spectral response to the analytes is increased. The relationship between the QD-DNA concentration and the red shift of the PSM reflectance spectrum was measured using a detection method based on refractive index changes.
The results showed that the demonstrated method of QD-labeled DNA increased the reflectance spectrum red shift, improving the sensitivity by a factor of five compared with conventional porous-silicon-based DNA hybridization detection methods. As a result, DNA labeled with QDs can be used for high-sensitivity DNA sensing. | 2017-05-06T12:14:51.184Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "b5ed8d875972a8ce88a2f915aada7c5b1b16cd79",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/s17010080",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b5ed8d875972a8ce88a2f915aada7c5b1b16cd79",
"s2fieldsofstudy": [
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Computer Science",
"Medicine"
]
} |
229489587 | pes2o/s2orc | v3-fos-license | A Zero Point Offset Monitoring Method of Rotation Axis Based on Siemens 840D System
In order to monitor the abnormal condition in which the zero point of the encoder does not coincide with the actual mechanical zero position in the full-closed control loop of a rotating axis, an abnormal-condition monitoring method based on the Siemens 840D numerical control system is proposed. Using the data gathered by the measurement system of the rotating-axis servo motor, a logical comparison program is developed in the PLC (Programmable Logic Controller) to monitor the zero point of the rotating axis, based on the Siemens 840D numerical control system and the STEP7 software platform. The verification experiment results show that the proposed method can quickly identify an abnormal displacement of the zero point of a rotating axis and disable the servo motor in time, which can effectively avoid the quality problems caused by this abnormal condition of the rotating axis.
INTRODUCTION
The five-axis CNC (Computer Numerical Control) machine tool has the characteristics of fast feed speed and high precision, which make it possible to machine parts with complex curved surfaces. The measurement system of each axis of a five-axis CNC machine tool adopts the form of a full-closed control loop, as shown in Fig 1. As the last link, the position-loop feedback device realizes the full closed-loop control of the machine axis to ensure the machining accuracy. Common position feedback devices are the grating ruler, photoelectric encoder, magnetic grating, etc., which must have high accuracy and sensitivity [1]. The position feedback system relies on two components moving relative to each other (such as a reading head and a grating) to provide position feedback. During installation, the two components are fixed on the two relatively moving parts of the measured axis. For a rotating axis, the device can be roughly divided into integral and split types based on its structure. The encoder is the common integral position feedback device, as shown in Fig 2. The distance between the reading head and the dial has been adjusted by the manufacturer. The rotary parts of the encoder are fixed on the rotating axis by mechanical locking and driven by static friction [2]. Common split position feedback devices include the circular grating and the magnetic grating, as shown in Fig 3. The working principle of the integral and split types is the same, but for the split type it is necessary to adjust the distance between the reading head and the dial during installation [3]. During long-term use of the machine, vibration or other factors may loosen the connections of the position feedback device, such as causing relative displacement between the inner ring of the encoder and the rotating axis, or loosening of the reading head. This may produce a slight relative displacement between the measuring part and the fixed part, resulting in a deviation of the zero mark of the rotating axis. Since the position feedback system itself cannot detect the loosening of its fixed connections, this situation often causes quality problems in parts. Different encoder combinations can be used for the position feedback loop and the speed feedback loop. Among them, the "incremental-incremental" combination is the most common machine tool configuration due to its reasonable price and high reliability. Because the above problems may occur in a rotating axis with this configuration, this paper proposes a design for zero point offset monitoring of the rotating axis based on the Siemens 840D system.
Analysis of zero mark's offset of incremental position feedback device of rotating axis
The working principle of the incremental position feedback device is to convert the relative displacement of a line or an angle into a periodic electrical signal, and then convert the electrical signal into counting pulses. The number of pulses expresses the relative displacement, from which the mechanical displacement of the machine tool axis is indirectly calculated [4]. However, it cannot output the absolute rotation angle of the rotating axis, so it is necessary to add a zero pulse to the encoder and take the zero pulse as a reference to correct the memorized position of the counting device. A machine axis with an incremental position feedback system needs to return to the reference point every time it is switched on. There is usually only one zero pulse in the rotating-axis position feedback system. In the Siemens 840D system, through the action of returning to the reference point, the system finds the unique mechanical position corresponding to the zero pulse of the rotating axis, and sets the offset value of this position with the parameters MD34080 & MD34090 to ensure the correspondence between the zero mark of the rotating axis and the mechanical zero position. The process is shown in the corresponding figure. When the connection of the position feedback device is loose, it may cause a deviation between the zero pulse of the incremental position feedback device and its unique corresponding mechanical position, and the corresponding mechanical position of the zero mark will also change; that is, the zero point is offset. Because the offset is very small, the system will not give any alarm, and the rotating axis will still find the now-incorrect zero mark. The offset cannot be detected by eye, but it poses great hidden dangers in the machining of parts with high precision requirements.
Take a five-axis CNC machine tool with C and A rotating axes and a Siemens 840D system as an example. The turning center distance of the machine is 300 mm and the length of the machining tool is 100 mm. Suppose the zero mark of axis A is offset by -0.05°. When the RTCP function is off and the program A-30 C0 is executed, the actual mechanical position of axis A is -30.05°. The error between the actual and theoretical tool-tip position is ΔY = -0.302 mm in the Y direction and ΔZ = 0.175 mm in the Z direction, as shown in Figure 5; such errors are unacceptable for aviation structures. Due to the uncertainty of the looseness of the connection, the machine tool state when the zero mark of the rotating axis is offset can be divided into two categories: ① there is a zero mark on the rotating axis; ② there is no zero mark on the rotating axis. In the case of type ①, there are many ways to detect or monitor the offset, such as the comparison function between the full closed loop and the half closed loop in the safety integration function of the Siemens 840D. In the case of type ②, there is no zero mark on the rotating axis and no reference for the position. At present, such an offset cannot be monitored. Through analysis, it is found that in this case the normal operations that cause the zero mark offset, or the operations performed after the offset, are NCK (NC Real-time Kernel) reset, shutdown, JOG move, return to reference point, or combinations of these operations, as shown in Fig 6.
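The tool-tip errors quoted for this example can be reproduced with elementary geometry. The Python sketch below assumes, for illustration only, that the tool tip rotates about the A axis on a lever arm equal to the turning center distance plus the tool length (300 mm + 100 mm = 400 mm); the exact signs depend on the machine's coordinate conventions.

import math

# Tool-tip error in the Y-Z plane caused by an A-axis zero mark offset.
# Assumption (illustration only): the tip lies on a lever arm
# R = turning-center distance + tool length, rotating about the A axis.
R = 300.0 + 100.0                 # mm, assumed lever arm
a_cmd = math.radians(-30.0)       # commanded A-axis angle
offset = math.radians(-0.05)      # zero mark offset of the A axis

def tip_yz(angle):
    # planar position of the tip for a given A-axis angle
    return R * math.sin(angle), R * math.cos(angle)

y_nom, z_nom = tip_yz(a_cmd)
y_act, z_act = tip_yz(a_cmd + offset)

print(f"dY = {y_act - y_nom:+.3f} mm")   # ~ -0.302 mm
print(f"dZ = {z_act - z_nom:+.3f} mm")   # ~ 0.175 mm in magnitude; sign depends on convention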
Principle of zero mark monitoring
Provided the mechanical transmission of the machine rotating axis is in good condition, the motor measurement system reflects the actual mechanical position of the axis, so the motor measurement system can be used in reverse to monitor the position.
Assume that the zero mark on the rotating axis of the machine has not deviated and that the axis returns to zero normally (such a state must exist). The angle of the actual mechanical position at the last stop is then equal in magnitude to the rotation displacement generated the next time the zero mark of the rotating axis is sought; theoretically, the absolute values of these two angles are the same and their directions are opposite, as shown in Fig 7. If the deviation between these two angles exceeds the monitoring value, the newly found zero mark is considered to be offset.
Zero mark monitoring design
In the SIEMENS 840D system, if the two measuring systems of the rotary axis are incremental + incremental, we first need to set the parameter MD30242 ($_MA_ENC_IS_INDEPENDENT[n]) of the rotation axis. The parameter MD30242[0] or MD30242[1] of the motor measurement system is set to 1, making the motor encoder an independent encoder [5]. This means that after the rotating axis completes the zero return operation, the value of the motor measurement system will not be reset to zero when the position measurement system is reset to zero [6].
While the rotating axis still has a valid zero mark, the data of the motor measurement system are collected and stored in real time and recorded as α. When the zero point is lost, and before the axis returns to the reference point, α1 is triggered by the monitoring conditions to read α, so α1 is the last actual mechanical position before the zero point was lost. After returning to the reference point, the data of the motor measurement system α2 is read (α1 = α at this time); α2 is the rotation displacement generated while the zero mark is found again. A threshold value is set so that |α1 + α2| < threshold value should hold, so as to achieve the purpose of monitoring. However, experiments show that |α1 + α2| < threshold value does not hold after multiple operations. There are two reasons. ① The incremental measurement system is powered on again after an NCK reset / shutdown, and the measurement system clears the position value. After multiple shutdowns without a zero mark, α1 cannot correctly reflect the actual mechanical position when the zero point is lost, so it is necessary to record, at the last normal return to the reference point, the position value β of the motor measurement system, which can be read as α3; the triggering condition is the same as for α1. ② The rotation axis moves without a zero mark, the rotation displacement is recorded as Δx, and then the NCK resets / shuts down. This operation can be performed multiple times, so there may be Δx1, Δx2, …, Δxn. Therefore, it is necessary to modify the above relationship. When the zero point of the rotation axis has not been offset, the following relationship holds:

|α1 + α2 − α3 + (Δx1 + Δx2 + … + Δxn)| < threshold value (1)

The number and values of Δx1, Δx2, …, Δxn are uncertain, and it is difficult to collect and store these data. However, if the operation of NCK reset / shutdown is only performed when the rotating axis has a zero mark, these data will all be zero, and this practice is easy to follow in normal machine operation. If a false alarm is caused by ②, we just need to reset α1, α2, α3 to zero through the NCK operation and check the zero accuracy of the rotating axis. Based on this, the relationship can be simplified as:

|α1 + α2 − α3| < threshold value (2)

According to this relationship, α1 and α3 are both collected under the normal state of the rotation axis and latched by the trigger condition when there is no zero mark, while α2 is collected by returning the rotation axis to the reference point when a zero point offset may have occurred. If the relationship does not hold, it means that the zero mark has been offset after the rotation axis returns to zero. Based on the SIEMENS 840D system, the PLC is used to realize the collection of the α1, α2, α3 data. The specific process is shown in the corresponding flow chart.
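The comparison of relation (2) is simple enough to prototype outside the PLC. The Python sketch below is a schematic stand-in for the STEP7 logic; the variable and function names are illustrative and are not actual 840D/STEP7 identifiers.

# Schematic prototype of the zero-mark monitoring logic of relation (2).
# alpha3: motor-encoder position recorded at the last normal reference run
# alpha1: motor-encoder position latched when the zero mark is lost
# alpha2: rotation displacement measured while the zero mark is found again
def zero_mark_ok(alpha1: float, alpha2: float, alpha3: float,
                 threshold_deg: float = 0.01) -> bool:
    """Return True if the re-found zero mark lies within the allowed band."""
    return abs(alpha1 + alpha2 - alpha3) < threshold_deg

# Example: axis stopped at 37.500 deg, last good reference stored 0.000 deg,
# re-referencing turned the axis back by -37.100 deg -> 0.4 deg offset.
if not zero_mark_ok(alpha1=37.500, alpha2=-37.100, alpha3=0.000):
    print("Zero mark offset detected: disable servo and raise alarm")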
Verification application of zero mark monitoring design
This design was applied in a five-axis machining center with good mechanical transmission. With the rotation axis returning to zero normally, the following operations were carried out: NCK reset / shutdown at an arbitrary angle within the allowable range → JOG move after power on → return to reference point → move → NCK reset / shutdown → JOG move after power on → return to reference point. The specific values are shown in Table 1. It can be seen from the tables that, no matter which operations were performed, as long as the zero mark of the rotation axis was not offset, the value of |α1 + α2 − α3| remained stable within a small interval, so it can be monitored. In June 2019, when a five-axis machining center returned to the reference point, relative displacement occurred between the C-axis encoder inner ring and the mechanical rotation axis due to loosening of the locking screw, which caused the C-axis zero mark to be offset by 0.4°. The monitoring function successfully triggered an alarm and a part-quality accident was avoided.
Conclusion
In this paper, to address the problem of zero point offset of a rotating axis, a zero point offset detection method for rotating axes based on the Siemens 840D is proposed. Its feasibility is verified by the experimental results, and it has been implemented and applied in several five-axis CNC machines. It has the advantages of a simple principle, strong operability, high sensitivity, strong reliability, and easy maintenance, and it can effectively monitor the zero offset of the rotating axis. In practical application, two zero-offset problems caused by loose measuring parts of the rotating axis were detected in time. Through this method, part accidents are effectively prevented, providing a strong guarantee for the quality control of part machining. | 2020-11-26T09:05:34.576Z | 2020-11-01T00:00:00.000 | {
"year": 2020,
"sha1": "b5f2bd86769736dfbcdde41065fbefa24c3caa71",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1676/1/012142",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0ae002ea698b1449c8b729685875b2487302099c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
53317660 | pes2o/s2orc | v3-fos-license | BCS Instability and Finite Temperature Corrections to Tachyon Mass in Intersecting D1-Branes
A holographic description of BCS superconductivity is given in arxiv:1104.2843. This model was constructed by insertion of a pair of D8-branes on a D4-background. The spectrum of intersecting D8-branes has tachyonic modes indicating an instability which is identified with the BCS instability in superconductors. Our aim is to study the stability of the intersecting branes under finite temperature effects. Many of the technical aspects of this problem are captured by a simpler problem of two intersecting D1-branes on a flat background. In the simplified set-up we compute the one-loop finite temperature corrections to the tree-level tachyon mass using the framework of SU(2) Yang-Mills theory in (1 + 1)-dimensions. We show that the one-loop two-point functions are ultraviolet finite due to cancellation of ultraviolet divergence between the amplitudes containing bosons and fermions in the loop. The amplitudes are found to be infrared divergent due to the presence of massless fields in the loops. We compute the finite temperature mass correction to all the massless fields and use these temperature dependent masses to compute the tachyonic mass correction. We show numerically the existence of a transition temperature at which the effective mass of the tree-level tachyons becomes zero, thereby stabilizing the brane configuration.
MOTIVATION
• In conventional QCD, the Nambu–Jona-Lasinio model of chiral symmetry breaking elucidates certain apparent similarities between chiral symmetry breaking and the BCS instability in superconductors.
• Inspired by this similarity, a holographic model of BCS superconductivity has been proposed within the broken chiral symmetry scenario of the Sakai-Sugimoto model (N. Sarkar, S. Sarkar, B. Sathiapalan, K. Rama). • Proposal: the BCS instability (Cooper pairing between baryons) in the boundary (D4 wrapped on S^1) corresponds to a tachyonic instability in the bulk (D8).
MOTIVATION: INTERSECTING D8-BRANES
• The formation of Cooper pairs in the boundary: introduce a finite Baryon number density on the boundary theory i.e. a Chemical Potential for Baryon number.
• How?: A point source of Baryon number in the bulk which creates a cusp singularity in the bulk. For two D8-branes, SU (2) is broken and the branes intersect at one angle between them.(Bergman, Lifschytz, Lippert) • In the SS-model a configuration of two intersecting D8-branes were found to have a tachyonic instability in the bulk spectrum which is proposed to correspond to Cooper pairing instability in the boundary theory. (B. Sathiapalan, et.al.) • The tachyon mode is identified as the lowest mode in the open string excitation between the intersecting branes. • Another way of stabilizing: Finite temperature field theory.
• Computation: Finite temperature one-loop mass-squared corrections to the tree-level tachyon.
• Finite temperature effects : Existence of T c at which the effective mass-squared of the tachyon vanish. Our main goal is to calculate the T c .
• However this problem is difficult to handle in the case of D8-branes on a curved D4-background. But many of the technical features are captured by a much simpler set-up consisting of two intersecting D1-branes on a flat background.
• We choose to study the finite temperature effects in this simpler set-up. We are able to do so because the tachyon dynamics is a local phenomenon and not influenced significantly by curvature effects.
Validity
• The low energy theory on the brane can be described by the DBI action for the massless fields on the brane. This is valid as long as only energies << 1 α are being probed. • We can study this as a quantum theory with a cutoff Λ < 1 √ α and proceed to study the corrections due to the massless mode quantum and thermal fluctuations.
• Thermal corrections should be unambiguously finite.
• The background fields : The Lagrangian for the background fields decouple into two pieces, one for each of these doublets.
• In each doublet the fields satisfy a set of coupled differential equations.
• There are two sectors of solutions: m_n^2 ≠ 0 and m_n^2 = 0. There are two different sets of normalized eigenfunctions for each of these doublet fields.
Normalized Eigenfunctions:
INTERSECTING Dp-BRANES: SPECTRUM OF BOSONS
• We turn on all the other fields as fluctuations. For the other bosonic fields: where I ≠ 1. • The third gauge components of all fields are massless. The fermions in the picture play a crucial role in ensuring the UV finiteness of one-loop computations.
• We shall restrict our discussion to only D1-branes now. We have a complete calculation for this case. For D2 and D3branes the work is still in progress.
• The fermions: sixteen left and sixteen right moving Majorana-Weyl fermions, grouped into two different sets of eight pairs distinguished by their e.o.m.
and their complex conjugates.
• L 3 i and R 3 i are massless fermions(plane waves).
TACHYON INSTABILITY
• The bosonic doublets ζ_k are eigenvectors corresponding to the mass-squared eigenvalue, where k = 0 corresponds to the tachyonic modes.
• Step 1: Calculate the finite-T one-loop mass corrections for the massless fields, namely Φ_1^3, Φ_I^3 (I ≠ 1), and A_x^3. For m_n^2 = 0, the infinitely degenerate massless modes corresponding to the zero eigenvalue sector give diagonalized mass matrices as a function of temperature (numerically).
• Step 2: These temperature-dependent masses modify the propagators in the tachyonic amplitudes.
• The tachyon two-point functions are computed self-consistently (numerical computation).
FINITE TEMPERATURE CORRECTIONS
• UV problem: for all fields.
• Finite T 1-loop bosonic and fermionic amplitudes: Each term is UV divergent.
• Sum over discrete momemtum n (fields coupled to the background are massive) • integral over continuous momemtum (massless modes).
• Compute the integrals involved in the vertices and expand the sums over n about n = ∞: the leading order is 1/√n.
• Cancellation between Bosonic and fermionic terms yeilds finite answer.
FINITE TEMPERATURE CORRECTIONS
• No divergence from temp-dependent part.
• One-loop corrections to the tachyon mass term: set all external momenta in the Feynman diagrams = 0 and integrate/sum over the loop momenta. One-loop diagrams: • The parameter q provides a scale for supersymmetry breaking. The effective mass of the tree-level tachyon • m 2 0 : Quantum corrections (T = 0). Only true for 1 + 1-dimensions.
FINITE TEMPERATURE CORRECTIONS (MASSLESS FIELDS)
Sample plot for massless field :
CONCLUSION
• The finite temperature effects remove the tachyon instability in intersecting D1-branes and stabilize the configuration.
• The effective mass-squared of the tree-level tachyon grows linearly with temperature as expected in (1 + 1)-dimensions.
• The zero temperature quantum corrections are independent of the parameter q (1 + 1-dim.).
• At finite temperature the superconducting instability transits into a stable normal phase.
• This phenomenon bears the hallmark of a phase transition.
FUTURE DIRECTIONS
• To do the full stability analysis we must compute the full finite temperature effective action for the tachyon, which calls for computing higher point functions.
• Our results can be generalized to higher dimensional branes(D2 and D3) without much difficulty. It will be interesting to study the issue of phase transition in higher dimensions. (ongoing) • By scaling arguments(scaling the integrals by powers of β) we see that the finite temperature bhaviour in p + 1-dims (p > 1)is T p−1 .
• Question of adding α -corrections in the loop may be interesting.
• Open string world-sheet perspective : calculating the annular amplitudes at finite T.
THANK YOU!
FINITE TEMPERATURE CORRECTIONS
• The one-loop corrections from the bosonic diagrams with a 4-point vertex, • where the F's denote the four-point vertices in this expression.
FINITE TEMPERATURE CORRECTIONS
• The one-loop corrections from the bosonic diagrams with a 3-point vertex. • After performing the Matsubara sums, the mass correction for the four-point vertices becomes
FINITE TEMPERATURE CORRECTIONS
The mass correction for the three-point vertices after the Matsubara sum assumes the form
FINITE TEMPERATURE CORRECTIONS
• The fermionic corrections are accompanied with diagrams with only 3-point vertices. | 2014-09-12T13:24:26.000Z | 2014-03-03T00:00:00.000 | {
"year": 2014,
"sha1": "dee86d39f15091c3250839d865bffc40d9ddc082",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP09(2014)063.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "087abc94ab4a134796986fe88ea9baff3221f72a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
18365092 | pes2o/s2orc | v3-fos-license | Nonlinear Model Reduction in Power Systems by Balancing of Empirical Controllability and Observability Covariances
In this paper, nonlinear model reduction for power systems is performed by the balancing of empirical controllability and observability covariances that are calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is directly dealt with as a nonlinear system. A transformation is found to balance the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprised of a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that by using the proposed model reduction the calculation efficiency can be greatly improved; at the same time, the obtained state trajectories are close to those for directly simulating the whole system or partitioning the system while not performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method can guarantee higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.
Model reduction is an effective approach for improving calculation efficiency and ultimately achieving faster-than-real-time simulation and control, by reducing the external area to a lower-order, simpler model [12]. Although the purpose of a stability study by dynamic simulation is to determine the dynamic response of the generators and control systems in a study area under disturbances inside the area, these disturbances will also impact the neighboring area (called the external area), which in turn will impact the study area, due to the interconnected nature of large power systems.
For model reduction, the study area is of interest and therefore is modeled in detail, while the external area is not of direct interest and thus can be reduced and replaced with a simpler mathematical description. Physically based coherency model reduction has been extensively studied [12]- [18]; it first identifies coherency of generators and then performs reduction by aggregating the coherent generators. The performance of this method mainly depends on the identification of coherent generators. When system conditions change, it might be necessary to adjust the existing boundary to accurately capture the dynamic characteristics of the system [17], [18]. Other approaches, such as synchrony [19], singular perturbations [20], selective modal analysis [21], and computation intelligence methods [22] have also been developed.
There are also model reduction techniques based on the moment matching methods [23]- [25], which attempt to make the leading coefficients of a power series expansion of the reduced system's transfer function match those of the original system transfer function. Another model reduction approach from the perspective of input-output properties has also been studied, such as balanced truncation [26] and structured model reduction based on an extension balanced truncation [27]. Compared with coherency-based methods, these methods have a stronger theoretical foundation and are more general, not specially targeted to a particular application [27].
Besides, recently some new methods have also been developed, such as measurement-based model reduction [28]- [31], border synchrony based method [32], ANN-based boundary matching technique [33], independent component analysis approach [34], heuristic optimization based approach [35], [36], and approximate bisimulation-based method [37]. For a detailed survey of the model reduction methods in power systems, the reader is referred to [38] and [39]. For most existing model reduction methods, the external system has to be linearized. Because of the strong nonlinearity of power systems, linearization-based methods cannot always provide an accurate description of the physical system. In this paper, however, we discuss model reduction directly for nonlinear power systems through balanced truncation based on empirical controllability and observability covariances [40]- [47]. This method has been discussed in [40]- [43] where it has been applied to mechanical systems [40], [41] and chemical systems [42], [43]. On one hand, similar to the balanced truncation method based on a linearized model, the proposed method also has a solid theoretical foundation and thus holds promise for application to large systems. On the other hand, the proposed method is expected to be able to perform more accurate model reduction by using the empirical controllability and observability covariances. Unlike analysis based on linearization, for which the controllability and observability only work locally in a neighborhood of an operating point, the empirical covariances are defined using the original system model and can thus reflect the controllability and observability of the full nonlinear dynamics in the given domain.
The remainder of this paper is organized as follows. Section II introduces the empirical controllability and observability covariances and discusses their implementation. Section III discusses the model reduction method based on the balancing of empirical controllability and observability covariances. Section IV applies the method in Section III to the power system model. Section V proposes a procedure for performing simulation for the study area and reduced external area. In Section VI, the proposed model reduction method is tested and validated on a system comprised of a 16-machine 68bus system and an IEEE 50-machine 145-bus system. Finally, conclusions are drawn in Section VII.
II. EMPIRICAL CONTROLLABILITY AND OBSERVABILITY COVARIANCES
To perform model reduction for a system from the perspective of input-output properties, we should first obtain its input-output properties. For a linear time-invariant system

ẋ = A x + B u, y = C x, (1)

where x ∈ R^n is the state vector, u ∈ R^v is the input vector, and y ∈ R^p is the output vector, the controllability and observability gramians defined as [48]

W_c,L = ∫_0^∞ e^{At} B B^T e^{A^T t} dt, (2)
W_o,L = ∫_0^∞ e^{A^T t} C^T C e^{At} dt, (3)

can be used to analyze the controllability and observability and thus the input-state and state-output behavior. The gramians W_c,L and W_o,L are actually the unique positive definite solutions of the Lyapunov equations [40]

A W_c,L + W_c,L A^T + B B^T = 0, (4)
A^T W_o,L + W_o,L A + C^T C = 0. (5)

However, for a nonlinear system

ẋ = f(x, u), y = h(x), (6)

where f(·) and h(·) are the state transition and output functions, x ∈ R^n is the state vector, u ∈ R^v is the input vector, and y ∈ R^p is the output vector, there is no analytical controllability or observability gramian. In order to capture the controllability and observability of a nonlinear system, one can linearize the nonlinear system and calculate the gramians of the linearized system, in which case, however, the nonlinear dynamics of the system will be lost. Alternatively, in order to directly capture the input-output behavior of a nonlinear system in a similar way to a linear system, the empirical controllability and observability covariances [40]- [47] are proposed, which provide a computable tool for empirical analysis of the input-state and state-output behavior of nonlinear systems, either by simulation or experiment.
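For the linear case, the gramians in (4) and (5) can be computed directly with a Lyapunov solver. The Python sketch below is illustrative only — it uses SciPy and an arbitrary stable example system rather than a power system model — and also evaluates the Hankel singular values used later for truncation.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable 3rd-order system (not a power system model).
A = np.array([[-1.0, 0.5, 0.0],
              [ 0.0,-2.0, 1.0],
              [ 0.0, 0.0,-3.0]])
B = np.array([[1.0], [0.0], [1.0]])
C = np.array([[1.0, 1.0, 0.0]])

# Controllability gramian: A Wc + Wc A^T + B B^T = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
# Observability gramian:  A^T Wo + Wo A + C^T C = 0
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of Wc Wo
hsv = np.sqrt(np.abs(np.linalg.eigvals(Wc @ Wo)))
print("Hankel singular values:", np.sort(hsv)[::-1])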
Different from analysis based on linearization, the empirical covariances are defined using the original system model and can thus reflect the controllability and observability of the full nonlinear dynamics in the given domain, whereas the controllability or observability gramians based on linearization only work locally in a neighborhood of an operating point. It is proven that the empirical covariances of a stable linear system described by (1) are equal to the usual gramians [41].
A. Scaling the System
The nonlinear system described by (6) should first be scaled, because a state changing by orders of magnitude can be more important than a state that hardly changes, even though its steady state may have a smaller absolute value. Specifically, system (6) can be scaled by x̃ = T_x^{-1} x and ũ = T_u^{-1} u, where T_x = diag(x_0), T_u = diag(u_0), and x_0 and u_0 are the state and input at steady state, and the scaled system is

dx̃/dt = T_x^{-1} f(T_x x̃, T_u ũ), (9a)
ỹ = h(T_x x̃, T_u ũ). (9b)
B. Empirical Controllability Covariance
The following sets are defined for the empirical controllability covariance:

T^c = {T_1, …, T_r ; T_l ∈ R^{v×v}, T_l^T T_l = I_v, l = 1, …, r},
M^c = {c_1, …, c_s ; c_m ∈ R, c_m > 0, m = 1, …, s},
E^c = {e_1, …, e_v ; standard unit vectors in R^v},

where r is the number of matrices for excitation directions, s is the number of different excitation sizes for each direction, v is the number of inputs to the system, and I_v is an identity matrix with dimension v. For the nonlinear system described by (6), the empirical controllability covariance can be defined as

W_c = Σ_{l=1}^{r} Σ_{m=1}^{s} Σ_{i=1}^{v} 1/(r s c_m^2) ∫_0^∞ Φ^{ilm}(t) dt, (10)

where Φ^{ilm}(t) = (x^{ilm}(t) − x_0)(x^{ilm}(t) − x_0)^T, x^{ilm}(t) is the state of the nonlinear system corresponding to the input u(t) = c_m T_l e_i v(t) + u_0, and v(t) is the shape of the input.
The discrete form of the empirical controllability covariance can be defined as [42]

W_c = Σ_{l=1}^{r} Σ_{m=1}^{s} Σ_{i=1}^{v} 1/(r s c_m^2) Σ_{k=1}^{K} Φ^{ilm}_k Δt_k, (11)

where Φ^{ilm}_k = (x^{ilm}_k − x_0)(x^{ilm}_k − x_0)^T, x^{ilm}_k is the state of the nonlinear system at time step k corresponding to the input u_k = c_m T_l e_i v_k + u_0, K is the number of points chosen for the approximation of the integral in (10), and Δt_k is the time interval between two points.
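A direct numerical transcription of the discrete covariance (11) is sketched below in Python. It is illustrative only: it assumes the excitation directions are simply T_l = ±I_v, a constant input shape v_k ≡ 1, and a user-supplied simulator simulate(u, x0, t_grid) returning the state trajectory; none of these names come from the paper.

import numpy as np

def empirical_controllability_cov(simulate, x0, u0, c_sizes, t_grid):
    """Discrete empirical controllability covariance, cf. (11).

    simulate(u, x0, t_grid) -> array (K, n): state trajectory for a constant
        input vector u (simplified step excitation, v_k = 1).
    x0: steady state (n,); u0: steady-state input (v,).
    c_sizes: excitation sizes c_m; directions are T_l = +I and -I (r = 2).
    """
    n, v = len(x0), len(u0)
    dts = np.diff(t_grid, append=t_grid[-1] + (t_grid[-1] - t_grid[-2]))
    Ts = [np.eye(v), -np.eye(v)]                  # excitation directions
    Wc = np.zeros((n, n))
    r, s = len(Ts), len(c_sizes)
    for T in Ts:
        for c in c_sizes:
            for i in range(v):
                u = c * T[:, i] + u0              # perturbed constant input
                x = simulate(u, x0, t_grid)       # (K, n)
                dx = x - x0                       # deviation from steady state
                Wc += (dx.T * dts) @ dx / (r * s * c**2)
    return Wc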
C. Empirical Observability Covariance
The following sets are defined for the empirical observability covariance:

T^o = {T_1, …, T_r ; T_l ∈ R^{n×n}, T_l^T T_l = I_n, l = 1, …, r},
M^o = {c_1, …, c_s ; c_m ∈ R, c_m > 0, m = 1, …, s},
E^o = {e_1, …, e_n ; standard unit vectors in R^n},

where T^o defines the initial state perturbation directions, r is the number of matrices for perturbation directions, I_n is an identity matrix with dimension n, M^o defines the perturbation sizes, s is the number of different perturbation sizes for each direction, E^o defines the states to be perturbed, and n is the number of states of the system. For the nonlinear system described by (6), the empirical observability covariance can be defined as

W_o = Σ_{l=1}^{r} Σ_{m=1}^{s} 1/(r s c_m^2) ∫_0^∞ T_l Ψ^{lm}(t) T_l^T dt, (12)

where Ψ^{lm}(t) ∈ R^{n×n} is given by Ψ^{lm}_{ij}(t) = (y^{ilm}(t) − y^{ilm,0})^T (y^{jlm}(t) − y^{jlm,0}), y^{ilm}(t) is the output of the nonlinear system corresponding to the initial condition x(0) = c_m T_l e_i + x_0, and y^{ilm,0} refers to the output corresponding to the unperturbed initial state x_0, which is usually chosen as the steady state under typical power flow conditions but can also be chosen as another operating point.
Similarly, (12) can be rewritten in its discrete form [42]

W_o = Σ_{l=1}^{r} Σ_{m=1}^{s} 1/(r s c_m^2) Σ_{k=1}^{K} T_l Ψ^{lm}_k T_l^T Δt_k, (13)

where Ψ^{lm}_k ∈ R^{n×n} is given by (Ψ^{lm}_k)_{ij} = (y^{ilm}_k − y^{ilm,0})^T (y^{jlm}_k − y^{jlm,0}), y^{ilm}_k is the output at time step k, and K and Δt_k are the same as in (11).
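A matching sketch for the discrete observability covariance (13) is given below, under the same simplifying assumptions (perturbation directions T_l = ±I_n and a user-supplied simulator simulate_output(x_init, t_grid) returning the output trajectory of the unforced system); it is illustrative, not the authors' implementation.

import numpy as np

def empirical_observability_cov(simulate_output, x0, c_sizes, t_grid):
    """Discrete empirical observability covariance, cf. (13).

    simulate_output(x_init, t_grid) -> array (K, p): output trajectory for
        an unforced run starting from the perturbed initial state x_init.
    x0: unperturbed (steady) initial state (n,); c_sizes: perturbation sizes.
    Perturbation directions are T_l = +I_n and -I_n (r = 2).
    """
    n = len(x0)
    dts = np.diff(t_grid, append=t_grid[-1] + (t_grid[-1] - t_grid[-2]))
    y0 = simulate_output(x0, t_grid)                  # nominal output
    Ts = [np.eye(n), -np.eye(n)]
    Wo = np.zeros((n, n))
    r, s = len(Ts), len(c_sizes)
    for T in Ts:
        for c in c_sizes:
            # output deviations for each perturbed state direction
            dY = [simulate_output(c * T[:, i] + x0, t_grid) - y0
                  for i in range(n)]                  # list of (K, p)
            Psi = np.zeros((n, n))
            for k, dt in enumerate(dts):
                Yk = np.stack([dY[i][k] for i in range(n)])   # (n, p)
                Psi += (Yk @ Yk.T) * dt               # Psi_ij = dy_i^T dy_j
            Wo += T @ Psi @ T.T / (r * s * c**2)
    return Wo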
III. MODEL REDUCTION BY BALANCING OF EMPIRICAL COVARIANCES
The empirical covariances obtained in Section II contain important information about which states are controllable or observable, based on which a coordinate transformation T ∈ R^{n×n} can be obtained to transform the original model into another state space model whose states are decomposed into four categories: states which are 1) both controllable and observable; 2) controllable but not observable; 3) observable but not controllable; and 4) neither controllable nor observable. For the scaled system in (9), let x̂ = T x̃ and the transformed system is

dx̂/dt = T T_x^{-1} f(T_x T^{-1} x̂, T_u ũ), ŷ = h(T_x T^{-1} x̂, T_u ũ), (14)

and the corresponding transformed covariances are Ŵ_c = T W_c T^T and Ŵ_o = (T^{-1})^T W_o T^{-1}. If the transformed covariances have the block-diagonal structure

Ŵ_c = diag(Σ_1, I, 0, 0), Ŵ_o = diag(Σ_1, 0, Σ_3, 0),

where Σ_1 and Σ_3 are both diagonal matrices and I is an identity matrix, the transformed system in (14) is said to be balanced and the corresponding transformed covariances are denoted by W_c^bal and W_o^bal. The states of the balanced system are decoupled into the four categories mentioned above. Specifically, the covariance matrix of the states of the balanced system that are both controllable and observable is given by Σ_1, the controllability covariance matrix of the states that are controllable but not observable is the identity matrix in the transformed controllability matrix, and the observability covariance matrix of the states that are observable but not controllable is Σ_3 in the transformed observability matrix [42].
A proof that a transformation balancing the system always exists is given in [49]. As for how to calculate such a coordinate transformation T to balance a system that may not be completely controllable and observable, a method has been proposed in [42], which requires the calculation of four matrices T_1 ∈ R^{n×n}, T_2 ∈ R^{n×n}, T_3 ∈ R^{n×n}, and T_4 ∈ R^{n×n} from the empirical covariances W_c and W_o. In the following we briefly introduce this method; more details can be found in [42].
1) Determine T_1: The first transformation T_1 brings the controllability covariance to the form diag(I_c, 0), where I_c is an identity matrix with dimension equal to the rank of W_c and the rows and columns that contain only zeros reflect the rank deficiency of the controllability covariance.
2) Determine T_2: The transformation T_1 found in Step 1 is applied to the observability covariance and a Schur decomposition is found for the resulting matrix W_o,11. The unitary matrix of this decomposition is required for the second part of the transformation and gives T_2.
3) Determine T_3: A transformation using both T_1 and T_2 is applied to the observability covariance matrix to obtain the third transformation T_3.
4) Determine T_4: A transformation using T_1, T_2, and T_3 is applied to the observability covariance and a Schur decomposition is found for the square matrix containing the last columns and rows of the transformed system; the fourth transformation T_4 is then determined from this decomposition.
2) Determine T 2 The transformation T 1 found in Step 1 is applied to the observability covariance and a Schur decomposition can be found for the matrix W o,11 as The unitary matrix of this decomposition is required for the second part of the transformation and is given by 3) Determine T 3 A transformation using both T 1 and T 2 can be applied to the observability covariance matrix to obtain the third transformation, T 3 , as given by and 4) Determine T 4 A transformation using T 1 , T 2 , and T 3 is applied to the observability covariance and a Schur decomposition is found for the square matrix containing the last columns and rows of the transformed system as and The forth transformation can further be determined by Then the transformation matrix T that balances the states that are observable and controllable is given by which can be further used to reduce the scaled system in (9) by Galerkin projection [42], [43]. Specifically, letx = Tx and the reduced system is where P = [I nred 0] is the projection matrix, which has the rank of the reduced system n red ;x 1 andx 2 respectively represent the retained states and the reduced states, among whichx 2 are kept at their steady state valuesx 2ss .
Here, n_red can be determined by the Hankel singular values, which are the eigenvalues of W_o^bal W_c^bal [40]- [43]. The Hankel singular values provide a measure of the importance of the states in the sense that the state with the largest singular value is affected the most by the control inputs and the output is most affected by the change of this state. Thus the states corresponding to the largest singular values influence the input-output behavior the most. When the states that correspond to zero or very small Hankel singular values are eliminated, the reduced system retains most of the input-output behavior of the full-order system.
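When the empirical covariances are full rank, the four-step Schur procedure above can be replaced, for illustration, by the standard square-root balancing algorithm. The Python sketch below is an assumption-laden simplification, not the authors' exact construction: it computes a balancing transformation, the Hankel singular values, and the Galerkin-projected retained states for the full-rank case.

import numpy as np

def balance_and_truncate(Wc, Wo, tol=1e-5):
    """Square-root balancing of empirical covariances (full-rank case).

    Returns the balancing transformation T (x_hat = T @ x_tilde), its
    inverse, the Hankel singular values, and the retained order n_red.
    """
    Lc = np.linalg.cholesky(Wc)               # Wc = Lc Lc^T
    Lo = np.linalg.cholesky(Wo)               # Wo = Lo Lo^T
    U, hsv, Vt = np.linalg.svd(Lo.T @ Lc)     # Hankel singular values
    S_half = np.diag(hsv ** -0.5)
    T = S_half @ U.T @ Lo.T                   # balancing transformation
    T_inv = Lc @ Vt.T @ S_half
    n_red = int(np.sum(hsv > tol))            # keep states with large HSV
    return T, T_inv, hsv, n_red

def reduce_state(x_tilde, T, T_inv, n_red):
    # Galerkin projection: x_tilde is the deviation of the scaled state from
    # its steady-state value; the truncated balanced states are held at zero
    # deviation (i.e., at their steady-state values).
    x_hat = T @ x_tilde
    x_hat[n_red:] = 0.0
    return T_inv @ x_hat                      # back to original coordinates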
IV. REDUCTION OF THE POWER SYSTEM MODEL
The whole system is partitioned into the study area and the external area (see Fig. 1). The study area has n_g^s generators and n_b^s buses and the external area has n_g^e generators and n_b^e buses. There are p tie-lines between the study and external areas, and the sets of boundary buses that belong to the study and external areas are denoted by B^{s,bound} = {b_1^s, b_2^s, …, b_p^s} and B^{e,bound} = {b_1^e, b_2^e, …, b_p^e}. Correspondingly, the voltage magnitude and phase angle of boundary bus b_i^s, i ∈ {1, 2, …, p}, are denoted by V_i^s and θ_i^s, and those for boundary bus b_i^e, i ∈ {1, 2, …, p}, are denoted by V_i^e and θ_i^e. The model reduction method in Section III is applied to reduce the external area. The model reduction procedure can be summarized in the following four steps.
1) Scale the external system
The external system is scaled by using the method in Section II-A.
2) Calculate empirical covariances
The empirical controllability and observability covariances are calculated for the scaled system on time interval [0, t f ]. In (11) and (13) ∆t k can take different values according to the required accuracy, and x 0 is the steady state.
For the external area, the inputs and outputs are, respectively, the voltage magnitude and the phase angles of the boundary buses in B s,bound and B e,bound . More details about the power system model can be found in Appendices A and B.
3) Balance empirical covariances
The balancing of empirical covariances is performed as discussed in Section III and the coordinate transformation that can balance the scaled external system is obtained by (28).
4) Perform model reduction
Model reduction is performed for the external area by (29).
V. SIMULATION OF THE WHOLE SYSTEM
The whole system is partitioned into the study area and the external area, as shown in Fig. 1. For both areas, the boundary buses in the other area are treated as generators with a classical second-order model and a very large inertia constant. The generators corresponding to boundary buses that belong to the study area and the external area are denoted by the sets G^s = {g_1^s, g_2^s, …, g_p^s} and G^e = {g_1^e, g_2^e, …, g_p^e}. The whole system can be simulated in the following way.
1) Simulate the study area
The simulation is performed for the study area, the tie-lines, and the boundary buses in the external area.
Since the boundary buses b_1^e, b_2^e, …, b_p^e are treated as generators, the simulated system has a total of n_g^s + p generators and n_b^s + p buses. The states of the study area at time step k+1, denoted by x_{s,k+1}, can be obtained by solving the differential equations

ẋ_s = f_s(x_s, u_s), (30)

with given x_{s,k}, the state at time step k. The input u_s is comprised of the voltage magnitudes and phase angles of the boundary buses in B^{e,bound} and can be written as u_{s,k} = [V_{e,k}; θ_{e,k}] for time step k. When solving (30), since only the second-order generator model is used, the voltage magnitude of the boundary buses (also the transient voltage e_q of the corresponding generators) will remain unchanged. In addition, since the inertia constant is very large, the phase angle of the boundary buses (also the rotor angle δ of the corresponding generators) will not change. The rotor angle and the transient voltages on the q and d axes at time step k+1 of the generators in the study area (not including boundary buses in the external area) are denoted by δ_{s,k+1}, e_{q,s,k+1}, and e_{d,s,k+1}.
2) Simulate the external area
The simulation is performed for the external area, the tie-lines, and the boundary buses in the study area. The boundary buses b_1^s, b_2^s, …, b_p^s are treated in the same way as in Step 1 and the simulated system thus has a total of n_g^e + p generators and n_b^e + p buses. The states of the reduced external system at time step k+1, denoted by x̂_{e1,k+1}, can be obtained by solving the differential equations

dx̂_{e1}/dt = f̂_e(x̂_{e1}, u_e), (31)

with given x̂_{e,k}, the state of the external area at time step k. The input u_e is comprised of the voltage magnitudes and phase angles of the boundary buses in B^{s,bound} and can be written as u_{e,k} = [V_{s,k}; θ_{s,k}] for time step k. Similar to Step 1, the voltage magnitudes and phase angles of the boundary buses will remain unchanged. The states of the original system can be obtained by transforming the states of the reduced external system as x_e = T_x T^{-1} [x̂_{e1}; x̂_{e2ss}]. The rotor angle at time step k+1 of the generators in the external area (not including boundary buses in the study area) is denoted by δ_{e,k+1}. The transient voltages on the q and d axes are denoted by e_{q,e,k+1} and e_{d,e,k+1}.
3) Update boundary buses
Given the states of the study area δ_{s,k+1}, e_{q,s,k+1}, and e_{d,s,k+1} and the states of the external area δ_{e,k+1} at time step k+1, the voltage sources of the generators can be obtained as follows:

Ψ_e^re = e_{d,e,k+1} sin δ_{e,k+1} + e_{q,e,k+1} cos δ_{e,k+1}, (32a)
Ψ_e^im = e_{q,e,k+1} sin δ_{e,k+1} − e_{d,e,k+1} cos δ_{e,k+1}. (32b)

As in Appendix A, we denote by B_{s,ZIP} the n_{ZIP}^s load buses in the study area that are modeled as ZIP loads (also called non-conforming loads, as in [50]). The bus voltages are reconstructed by (33)-(34), where Ṽ_s is the vector of complex voltages for all buses in B_s, Ṽ_{s,B_{s,ZIP}} and Ṽ_{s,B^c_{s,ZIP}} are, respectively, the complex voltages for the non-conforming load buses and the other buses, R_{ncs} ∈ C^{(n_b^s + p − n_{ZIP}^s) × n_{ZIP}^s} is the voltage reconstruction matrix which gives the original bus voltage components due to the non-conforming load, and Ṽ_{ncs} ∈ C^{n_{ZIP}^s × 1} is the vector of complex voltages of the non-conforming load buses, which can be obtained as Ṽ_nc by solving the nonlinear equations in (57) by Newton's method. Similarly, we can also get Ṽ_{e,B_{e,ZIP}} and Ṽ_{e,B^c_{e,ZIP}} for the external area, for which the notations are similar to those for the study area. Then the nonlinear equations for the boundary buses at time step k+1 can be written in terms of the magnitudes |Ṽ| and arguments arg(Ṽ) of the boundary-bus voltages, with V_{s,k+1}, V_{e,k+1}, θ_{s,k+1}, and θ_{e,k+1} as unknowns, where Ṽ_{s,B_{s,bound}} and Ṽ_{e,B_{e,bound}} are, respectively, the complex voltages of the boundary buses in the study area and external area obtained by (33)-(34), and |·| and arg(·) represent the absolute value and argument of a complex vector. Note that the left-hand side of these equations is actually also a function of the unknowns V_{s,k+1}, V_{e,k+1}, θ_{s,k+1}, and θ_{e,k+1}. The obtained nonlinear equations can be solved by Newton's method, for which the inputs u_{s,k} and u_{e,k} at time step k are used as the initial guess. The solution of the nonlinear equations is used to update u_{s,k+1} and u_{e,k+1}, which are further used for the simulation in Steps 1 and 2 for the next time step.
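The three-step procedure above alternates between the two areas with a boundary update in between. The schematic Python loop below shows the intended data flow per time step; step_study, step_external, and solve_boundary are hypothetical placeholders for the solvers of (30), (31), and the boundary-bus equations, not functions from the paper.

# Schematic co-simulation loop for the partitioned system (Steps 1-3).
# step_study / step_external integrate (30) and (31) over one time step;
# solve_boundary solves the boundary-bus equations by Newton's method.
def cosimulate(x_s, x_e_red, u_s, u_e, t_grid,
               step_study, step_external, solve_boundary):
    traj = []
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        # Step 1: study area with external boundary buses held fixed
        x_s = step_study(x_s, u_s, dt)
        # Step 2: reduced external area with study boundary buses held fixed
        x_e_red = step_external(x_e_red, u_e, dt)
        # Step 3: update boundary-bus voltages/angles by Newton's method,
        # warm-started from the previous boundary values u_s, u_e
        u_s, u_e = solve_boundary(x_s, x_e_red, u_s, u_e)
        traj.append((x_s, x_e_red, u_s, u_e))
    return traj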
VI. CASE STUDIES
The proposed model reduction method is tested on a system comprised of a 16-machine 68-bus system as the study area and an IEEE 50-machine 145-bus system as the external area. Both systems are extracted from Power System Toolbox [50]. The empirical covariance calculation and model reduction are implemented with Matlab. All tests are carried out on a 3.2-GHz Intel(R) Core(TM) i7-4790S based desktop.
For the study area, the fast sub-transient dynamics and saturation effects are ignored and the generators are described by the two-axis transient model with IEEE Type DC1 excitation system. Each generator has seven state variables, which are rotor angle δ, rotor speed ω, transient voltage along q and d axes e qi and e di , regulator output voltage V R , excitation output voltage E fd , and stabilizing transformer state variable R f . A subset of load buses, buses 1, 16, 23, 28, 39, 45, 48, and 51, are modeled as ZIP loads. The proportions of constant impedance, constant current, and constant power loads are determined by the parameters p 1 , p 2 , p 3 , q 1 , q 2 , and q 3 in Appendix A. We choose p 1 = q 1 = 0.2, p 2 = q 2 = 0.3, and p 3 = q 3 = 0.5. The other loads are modeled as constant impedance. More load buses can be modeled as ZIP loads. But there is a tradeoff between the model accuracy and the computational complexity, since the computation burden of both the differential equations and the boundary bus updating will increase when the number of ZIP loads increases. For the external system extracted from PST, only seven generators (generators 1-6 and 23) have high-order model while all the others only use a second-order model. Here, we use a fourth-order transient model to describe generators 1-6 and 23, for which the state variables are rotor angle δ and rotor speed ω, and transient voltage along q and d axes e q and e d , and a second-order classical model for the others, for which the state variables are rotor angle δ and rotor speed ω. All of the loads are modeled as constant impedance. More details about the models for the study and external areas can be found in Appendices A and B.
A. Parameter Setup
The ∆t_k in (11) and (13) is chosen as 0.01 s. The empirical controllability and observability covariances are calculated for the scaled system over the time interval [0, 5 s]. When calculating the empirical controllability or observability covariance, the inputs or the states are perturbed by adding a step change at t = 0. For T_c and T_o, a reasonably simple choice is T_c = {I_v, −I_v} and T_o = {I_n, −I_n}, where I_v and I_n are identity matrices of dimensions v and n, since this corresponds to using both positive and negative perturbations on each input or each state separately [40].
where u is an input of the external area and can be V or θ, x is a state variable of the external area that can be δ, ω, e_q, or e_d, and k_u and k_x are used to account for the different ranges of change of different types of variables. For example, the voltage magnitude can only change in a small range, while the phase angle can change much more significantly. The perturbation for u or x then ranges from 25k_u% or 25k_x% to 100k_u% or 100k_x% of the steady-state value. In order to determine k_u and k_x, we apply a total of n_f = 100 three-phase faults, each of which is applied at one end of a randomly chosen line and is cleared at the near and remote ends after 0.05 s and 0.1 s. For a fault j, we calculate the changes from the pre-fault input u_{ei0} or state x_{ei0} to the post-fault input u_{eif} or state x_{eif} for the ith input or state. The k_u and k_x can then be calculated from these changes, where p is the number of inputs of the external area, n_x is the number of generators with state variable x in the external area, ∆u_{ei} = [∆u^1_{ei}, · · · , ∆u^{n_f}_{ei}], ∆x_{ei} = [∆x^1_{ei}, · · · , ∆x^{n_f}_{ei}], ||v||_∞ is the infinity norm of an n-dimensional vector v defined as ||v||_∞ = max{|v_1|, · · · , |v_n|}, and α_u and α_x are chosen as real numbers greater than 1.0 (here we choose them as 2) since the applied n_f faults cannot represent all possible disturbances. By using this method, k_u and k_x are determined as listed in Table I, which shows that different types of variables do have very different ranges of change.
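The aggregation formula for k_u and k_x is not reproduced above, so the sketch below assumes that the per-variable infinity norms of the relative post-fault changes are averaged over the variables and then scaled by α = 2; the fault data are synthetic and only serve to illustrate the procedure.

```python
import numpy as np

def perturbation_scale(pre, post, alpha=2.0):
    """
    pre, post: arrays of shape (n_f, n_vars) with pre-/post-fault values of one
    variable type (e.g. voltage magnitudes) over n_f simulated faults.
    Returns an assumed scale factor k for that variable type.
    """
    delta = np.abs(post - pre) / np.abs(pre)   # relative change per fault and variable
    per_var_inf = np.max(delta, axis=0)        # infinity norm over faults, one entry per variable
    return alpha * np.mean(per_var_inf)        # assumed aggregation: average over variables

# Example with synthetic data for 100 faults and 3 boundary-bus voltage magnitudes
rng = np.random.default_rng(0)
pre = np.ones((100, 3))
post = pre + 0.02 * rng.standard_normal((100, 3))
k_u = perturbation_scale(pre, post)            # scale for voltage-magnitude inputs
print(k_u)
```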
B. Scenario Setup
Without losing generality, we add three tie-lines between the study and the external area which connect bus i in study area to bus i in external area, where i = 1, 2, 3. To generate dynamic response, a three-phase fault is applied at bus 6 of line 6 − 11 in the study area at 0.1s and is cleared at the near and remote ends after 0.05s and 0.1s. The corresponding test system and the location where the fault is applied are shown in Fig. 2. For simplicity, we only show the parts of the study area and the external area that are close to the boundary buses. The simulation is performed for 15 seconds and the time step is 0.01s and 0.03s, respectively, for before and after the fault clearing. The differential equations are solved by Matlab function "ode23t".
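As an illustration of the event sequence only (fault applied at 0.1 s and cleared at the near and remote ends 0.05 s and 0.1 s later, with a finer time step up to the fault clearing), the following sketch integrates a toy single-machine swing model in three segments; it uses SciPy's BDF integrator as a stand-in for Matlab's "ode23t" and is not the paper's multi-machine model, so all parameter values are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

def swing(t, x, p_e):
    """Toy single-machine swing dynamics; p_e switches with the network condition."""
    delta, omega = x
    p_m, H, D, w0 = 0.8, 3.5, 1.0, 2 * np.pi * 60
    return [omega, (w0 / (2 * H)) * (p_m - p_e * np.sin(delta) - D * omega / w0)]

x0 = [np.arcsin(0.8), 0.0]                       # start at the pre-fault equilibrium
# (t0, t1, p_e): pre-fault, faulted (reduced transfer), post-clearing segments
segments = [(0.0, 0.10, 1.0), (0.10, 0.20, 0.3), (0.20, 15.0, 0.9)]
times, trajectory = [], []
for t0, t1, p_e in segments:
    step = 0.01 if t1 <= 0.2 else 0.03           # finer step before/around fault clearing
    sol = solve_ivp(swing, (t0, t1), x0, args=(p_e,), method="BDF",
                    max_step=step, t_eval=np.arange(t0, t1, step))
    x0 = sol.y[:, -1]                            # hand the final state to the next segment
    times.append(sol.t)
    trajectory.append(sol.y)
```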
Note that the dynamic simulation is performed for 15 seconds while the empirical controllability and observability covariance calculation is only for the first 5 seconds. In the following sections we will show that the empirical covariances obtained in this manner are good enough for performing model reduction for the external area.
It has been shown in [12] that the reduced-order model via balanced truncation [26] represents a better approximation at lower orders than the Krylov subspace method [25]. Thus we only compare the proposed method with the balanced truncation method using a linearized model in [26]; this approach, in which the whole system is partitioned and the external area is reduced by the method in [26] based on the linearized model (LM), is referred to as "Partitioned-Reduced-LM". The external area has G_e = 50 generators. Seven of them have a fourth-order transient model and the others have a second-order classical model, so there are a total of 114 state variables. The number of retained states n_red can be determined from the Hankel singular values. For our test case, only 9 of the Hankel singular values are greater than 10^−5, and we thus choose n_red = 9, which accounts for only 7.9% of the number of states and is also used for the method in [26].
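A minimal sketch of order selection by Hankel singular values is shown below; it uses analytic gramians of a small random stable linear system rather than the empirical covariances used in the paper, and the 10^−5 threshold simply mirrors the one quoted above.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def hankel_order(A, B, C, tol=1e-5):
    """Hankel singular values of a stable LTI system and the order kept above tol."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability gramian
    L = cholesky(Wc, lower=True)
    hsv = svd(L.T @ Wo @ L, compute_uv=False) ** 0.5
    return hsv, int(np.sum(hsv > tol))

# Small stable example with 20 states, 3 inputs, 2 outputs
rng = np.random.default_rng(1)
A = -np.diag(np.linspace(0.5, 5.0, 20))
B = rng.standard_normal((20, 3))
C = rng.standard_normal((2, 20))
hsv, n_red = hankel_order(A, B, C)
print(n_red, hsv[:n_red])
```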
Note that we apply the method in Section III to calculate the transformation matrix T for the balanced truncation method based on a linearized model in [26], rather than directly using the method in [26], which was originally proposed in [51]. If the transformation matrix obtained by the latter method is used to get the reduced model for the linearized system, the corresponding simulation using the reduced model cannot proceed, because Newton's method has difficulty converging when used to solve the nonlinear equations in (57). By contrast, by using the method in Section III to obtain the transformation matrix and then the reduced model of the linearized system, the performance of the simulation is acceptable, although not as good as that of the proposed nonlinear model reduction method. This is mainly because the balancing transformation method discussed in Section III is applicable to systems that are not completely controllable and observable [42]. The simulation methods considered in this paper are summarized in Table II. The results for these methods are given in the following sections.
C. Results for the Study Area
There are G_s = 16 generators in the study area whose states are of direct interest. In Figs. 3 and 4, we present results for the rotor angle and the transient voltage along the q-axis of the study area when the proposed model reduction and the model reduction in [26] are performed for the external area. For rotor angles, generator 13 in the study area is used as the reference. We can see that the results for "Partitioned-Reduced-NM" are closer to those for the "UnPartitioned" and "Partitioned-Unreduced" methods, compared with those for "Partitioned-Reduced-LM". In order to quantify the accuracy of the model reduction methods, we define the following index, where x is one type of state variable and can be δ, ω, e_q, e_d, V_R, E_fd, or R_f; x^red_{i,t} is the simulated ith state for the "Partitioned-Reduced-NM" or "Partitioned-Reduced-LM" method and x^unred_{i,t} is the ith state from simulations without model reduction, both at time step t; N is the number of trajectories to be compared, here N = G_s; and T_s is the total number of time steps. When results from the model reduction methods are compared with the "UnPartitioned" or the "Partitioned-Unreduced" method, the index is denoted with superscript 1 or 2, respectively; the values are listed in Table III. It can be seen that for all types of state variables the defined indices for the proposed method are much smaller than those for the method in [26].
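The expression for the index is not reproduced above; the sketch below therefore assumes an RMS-type mismatch over all generators and time steps, which is one natural reading of the description.

```python
import numpy as np

def trajectory_error(x_red, x_unred):
    """
    x_red, x_unred: arrays of shape (N, T_s) holding one state type (e.g. rotor
    angle) for N generators over T_s time steps, with and without reduction.
    Returns an assumed RMS-type mismatch index.
    """
    return np.sqrt(np.mean((x_red - x_unred) ** 2))

# Example: 16 generators, 1000 time steps, small random mismatch
rng = np.random.default_rng(2)
x_unred = rng.standard_normal((16, 1000))
x_red = x_unred + 1e-3 * rng.standard_normal((16, 1000))
print(trajectory_error(x_red, x_unred))
```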
D. Results for Boundary Buses
The results for the phase angle differences between boundary buses for both model reduction methods are shown in Fig. 5. It can be seen that the phase angle differences from the proposed method are very close to those from the "UnPartitioned" and "Partitioned-Unreduced" methods, while for the reduction method in [26] the differences are more obvious.
An index similar to that in (50) can be defined for the boundary buses (denoted with superscripts 1 and 2, respectively, for comparison with the "UnPartitioned" and "Partitioned-Unreduced" methods), for which x is a type of boundary-bus variable and can be a voltage magnitude (V_s or V_e) or a phase angle (θ_s or θ_e), and N = 3, the number of boundary buses in each area in our case. The defined indices for the proposed method are much smaller than those for the method in [26], as shown in Table IV.
E. Sensitivity Analysis for Empirical Covariance Calculation
Here, we perform sensitivity analysis about how the empirical covariance calculation influences the accuracy of model reduction. Firstly, the M 0 in (39) and (40) chosen as a
F. Efficiency
The calculation times, t_total, for simulating 15 seconds by the different methods are listed in Table IX, together with the time for the linearly scaled M_0. It is seen that our proposed model reduction method can improve the calculation efficiency of dynamic simulation and help achieve faster-than-real-time simulation. Also, the efficiency of our model reduction method based on a nonlinear model is similar to that of the balanced truncation method in [26] based on a linearized model. To clearly identify the bottleneck of the proposed method and that in [26], in Table X we list the calculation time for the three steps in Section V. Here, t_s, t_e, and t_b are the times for simulating the study area, simulating the external area, and updating the boundary buses, respectively. For both model reduction methods, most of the calculation time is spent simulating the study area, which is modeled in detail. The calculation time for simulating the external area with nonlinear model reduction is a little higher than that based on a linearized model, which explains why t_total for the nonlinear model reduction is a little higher.
Note that the first two steps in Section V are decoupled and can be calculated in parallel, which can further improve the simulation efficiency. The total calculation time then becomes t_total = max{t_e, t_s} + t_b, which is also listed in Table X. In this test case, if the first two steps in Section V are calculated in parallel, the advantage of the model reduction methods over the "Partitioned-Unreduced" method is not obvious. This is because the external area in our test case is not significantly larger than the study area. When the external area is much larger than the study area, the ratio t_total(Par)/t_total(Red) is dominated by max{t_s(Par), t_e(Par)} ≈ t_e(Par), where "Par" represents the "Partitioned-Unreduced" method and "Red" indicates the model reduction methods, either nonlinear or linear. The speedup of the model reduction methods compared with the "Partitioned-Unreduced" method can then reach t_e(Par)/t_e(Red). If we assume the speedup of the external-area simulation for larger external areas is the same as that in our test case, the speedup can be 10.57/2.14 ≈ 4.94 or 10.57/1.58 ≈ 6.69 for the proposed nonlinear model reduction and the method in [26] based on a linearized model, respectively.
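A small numeric illustration of the parallel timing and the speedup bound discussed above; only the 10.57/2.14 and 10.57/1.58 ratios come from the text, and the per-step timings in the helper are placeholders.

```python
def parallel_total(t_s, t_e, t_b):
    """Total simulation time when study- and external-area steps run in parallel."""
    return max(t_s, t_e) + t_b

print(parallel_total(t_s=8.0, t_e=2.1, t_b=1.0))   # illustrative timings only

# Speedup bound when the external area dominates: t_e(Par) / t_e(Red)
speedup_nonlinear = 10.57 / 2.14   # ≈ 4.94, as quoted in the text
speedup_linear = 10.57 / 1.58      # ≈ 6.69
print(round(speedup_nonlinear, 2), round(speedup_linear, 2))
```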
VII. CONCLUSION
In this paper, a nonlinear power system model reduction method is proposed by balancing of the empirical controllability and observability covariances. Compared with the balanced truncation method based on a linearized model, the proposed model reduction method can guarantee higher accuracy for simulated state trajectory, mainly because the empirical covariances are defined using the original system model and can thus reflect the controllability and observability of the full nonlinear dynamics in the given domain.
The proposed method is validated on a test system comprised of a 16-machine 68-bus system as the study area and an IEEE 50-machine 145-bus system as the external area. The results show that by using the proposed model reduction method the simulation efficiency is greatly improved and at the same time the obtained state trajectories are close to those for directly simulating the whole system and for partitioning the system while not performing reduction. By contrast, for the balanced truncation method based on a linearized model when using the balancing transformation method in Section III, the simulation accuracy is lower but is still acceptable, and the calculation efficiency is similar to that of our proposed model reduction method. However, when the balancing transformation method from [51] is applied for the balanced truncation method based on a linearized model, as in [26], the simulation cannot proceed, which is mainly because that balancing transformation is not applicable to systems that are not completely controllable and observable.
By solving the differential equations in the study area and the external area in parallel, in our test case the speedup compared with the "UnPartitioned" method finally achieves 1.88 and the simulation is 1.22 times faster than real time. When the external system is much larger than the study area, the speedup of the proposed method compared with the "Partitioned-Unreduced" method can achieve 4.94. It is also shown that the proposed model reduction method is not sensitive to the choice of the matrices for calculating the empirical controllability and observability covariances.
For the study area, the fast sub-transient dynamics and saturation effects are ignored and the generator is described by the two-axis transient model with IEEE Type DC1 excitation system [52]: where i is the generator serial number, δ i is rotor angle, ω i is rotor speed in rad/s, and e qi and e di are transient voltage along q and d axes; i qi and i di are stator currents at q and d axes; V Ri is regulator output voltage, E fdi is excitation output voltage, R fi is stabilizing transformer state variable; T mi is mechanical torque, T ei is electric air-gap torque; ω 0 is the rated value of angular frequency, H i is inertia constant, and K Di is damping factor; T q0i and T d0i are open-circuit time constants, x qi and x di are synchronous reactance, and x qi and x di are transient reactance, respectively, at the q and d axes; T Ai is voltage regulator time constant, T Ei is exciter time constant, T Fi is stabilizer time constant, K Ai is voltage regulator gain, and K Ei is exciter constant. The load buses in B s,ZIP are modeled as a combination of constant impedance, constant current, and constant power (also called non-conforming load, as in [50]) as where P i and Q i are the active and reactive power at load bus i, P 0,i and Q 0,i are the initial active and reactive power at load bus i, p 1 , p 2 , and p 3 are proportions of constant active impedance load, constant active current load, and constant active power load, q 1 , q 2 , and q 3 are proportions of constant reactive impedance load, constant reactive current load, and constant reactive power load, and there is p 1 +p 2 +p 3 = 1 and q 1 + q 2 + q 3 = 1,Ṽ nc,i andṼ nc0,i are the complex voltage and initial complex voltage at load bus i. The other load buses that do not belong to B s,ZIP are modeled as constant impedance. The input and output are, respectively, the voltage magnitude and phase angles of the boundary buses in external area and study area. The boundary buses in the external area are treated as generators with a classical second-order model and very large inertia constant, which can be described by the first two equations in (52). The voltage magnitude and phase angles of the boundary buses in external area are respectively used as the e q and δ of the equivalent generator, for which ω = ω 0 and e d = 0. 
The dynamic model (52) can be rewritten in a general state space form in (6) and the state vector x s , input vector u s , and output vector y s can be written as The i qi , i di , T ei , V Ai , and S Ei in (52) can be written as functions of x s and u s (note that for boundary bus b e i in external area, the generator number is g e i and there are e qg e i = V eb e i , e dg e i = 0, and δ g e i = θ b e i ): Ψ Ri = e di sin δ i + e qi cos δ i (56a) Ψ Ii = e qi sin δ i − e di cos δ i (56b) I ti = Y g,i (Ψ R + jΨ I ) + Y gnc,iṼ nc (56c) i Ri = Re(I ti ) (56d) i Ii = Im(I ti ) (56e) e qi = e qi − x di i di (56h) e di = e di + x qi i qi (56i) P ei = e qi i qi + e di i di (56j) V TRi = e di 2 + e qi 2 (56m) where Ψ i = Ψ Ri +jΨ Ii is the voltage source, Ψ = Ψ R +jΨ I is the column vector of all generators' voltage sources, e qi and e di are the terminal voltage at q and d axes, Y g,i is the ith row of the reduced admittance matrix connecting the generator current injections to the internal generator voltages (including boundary buses in external area) Y g , and Y gnc,i is the ith row of the reduced admittance matrix which gives the generator currents due to the voltages at non-conforming loads Y gnc ; P ei is the electrical active output power, and S B and S Ni are the system base MVA and the base MVA for generator i; K Fi is the stabilizer gain; exc 1 i , exc 2 i , and exc 3 i are internally set exciter constants; and sgn(·) is the signum function. TheṼ nc in (56c) is the complex voltages of the non-conforming load buses and can be obtained by solving the following nonlinear equations by Newton's method: where Y ncg is the reduced admittance matrix connecting nonconforming load current to machine internal voltages, Y nc is the reduced admittance matrix of non-conforming loads, and I cc andĨ cp are current injections of the constant current and constant power components.Ĩ cc +Ĩ cp is actually a function ofṼ nc . For |Ṽ nc,i | > 0.5, it can be written as − p 3 P 0,i + p 2 P 0,i |Ṽnc,i| |Ṽnc0,i| + j q 3 Q 0,i + q 2 Q 0,i |Ṽnc,i| |Ṽnc0,i| Ṽ nc,i * while for |Ṽ nc,i | ≤ 0.5 it is − p 3 P 0,i + jq 3 Q 0,i + p 2 P 0,i + jq 2 Q 0,ĩ V nc0,iṼ * Both fourth-order and second-order generator model are used for the external area. In (52), the generators with fourthorder model are described by the first four equations and V Ri , E fdi , and R fi are kept unchanged. The generators with secondorder model are described only by the first two equations and e qi , e di , V Ri , E fdi , and R fi are all kept unchanged. The input and output are respectively the voltage magnitude and phase angles of the boundary buses in study and external area. T ei can be obtained by (56a)-(56k) and the outputs can be calculated in a similar way to (58a)-(58h) in Appendix A. The dynamic model can be rewritten in the form (6) and the state vector, input vector, and output vector can be written as x e = δ e ω e e q e e d e (59a) u e = V s θ s (59b) | 2016-08-01T22:36:27.000Z | 2016-08-01T00:00:00.000 | {
"year": 2016,
"sha1": "8c96192a7369b8b5034a95c7eec033c7d42e8909",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1608.08047",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8c96192a7369b8b5034a95c7eec033c7d42e8909",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
25423107 | pes2o/s2orc | v3-fos-license | A Translational Animal Model for Scar Compression Therapy Using an Automated Pressure Delivery System.
Background: Pressure therapy has been used to prevent and treat hypertrophic scars following cutaneous injury despite the limited understanding of its mechanism of action and lack of established animal model to optimize its usage. Objectives: The aim of this work was to test and characterize a novel automated pressure delivery system designed to deliver steady and controllable pressure in a red Duroc swine hypertrophic scar model. Methods: Excisional wounds were created by dermatome on 6 red Duroc pigs and allowed to scar while assessed weekly via gross visual inspection, laser Doppler imaging, and biopsy. A portable novel automated pressure delivery system was mounted on developing scars (n = 6) for 2 weeks. Results: The device maintained a pressure range of 30 ± 4 mm Hg for more than 90% of the 2-week treatment period. Pressure readings outside this designated range were attributed to normal animal behavior and responses to healing progression. Gross scar examination by the Vancouver Scar Scale showed significant and sustained (>4 weeks) improvement in pressure-treated scars (P < .05). Histological examination of pressure-treated scars showed a significant decrease in dermal thickness compared with other groups (P < .05). Pressure-treated scars also showed increased perfusion by laser Doppler imaging during the treatment period compared with sham-treated and untreated scars (P < .05). Cellular quantification showed differential changes among treatment groups. Conclusion: These results illustrate the applications of this technology in hypertrophic scar Duroc swine model and the evaluation and optimization of pressure therapy in wound-healing and hypertrophic scar management.
Drs Alkhalil and Tejiram contributed equally to manuscript preparation and seek first authorship. This work was funded by the NIH grant no. 1R15EB013439.
Hypertrophic scar (HTS) is a common cutaneous complication following cutaneous trauma, surgery, infection, or burn. Scar tissue is visibly different from surrounding uninjured skin by measures of height, pliability, pigmentation, glossiness, and vascularity. Patients further experience severe itching, neuropathic pain, sleep disturbances, and impairment of daily activities. 1 Cases of disfigurement or unpleasant aesthetics have led to psychological complications such as posttraumatic stress, 2 loss of self-esteem, 3 and stigmatization. 4 Severe cases of scaring can cause contractures or disabling physical deformities. 5 In developed countries, as many as 100 million people are estimated to acquire some form of scar following cutaneous injury. About 15% of these scars develop into further unaesthetic or debilitating conditions. 6 A survey of major patient concerns after surgery showed that 91% of the patients favored better resolution for scar. 7 The cellular and molecular mechanisms underlying impaired wound healing are poorly understood. 8 Wounds resulting in HTSs exhibit longer epithelialization times, delayed and extended overlapping in wound-healing phases, abundant inflammatory mediator expression, 9 and excessive accumulation of extracellular matrix (ECM) components such as collagen with modulated subtype proportions.
A variety of approaches are used in HTS management including surgical excision with or without grafting, 10-12 intralesional interferon, 13 topical and intralesional corticosteroids, 14 intralesional bleomycin, 15 silicone gel sheeting, 16,17 and laser therapy. 18 Mechanomodulatory strategies have proven effective in controlling incisional wound scarring. 19 Pressure therapy has been used in HTS treatment, 20 with varying degrees of success due to the limited understanding of its mechanism of action, nonstandardized application protocols, and lack of validated animal models. 8,21 Developing a reproducible wound model is paramount to the study of scar physiology 22 and assessing the efficacy of therapeutic intervention. While studying human tissue is most ideal, uncontrollable factors such as the injury depth, location, and patient compliance make this impractical. Research using Duroc pigs have increasingly documented similarities to human wound healing and scar formation by molecular, cellular, and gross measures. Unlike other animals such as guinea pigs, rabbits, or rats, red Duroc size is comparable to humans and offers flatter skin surfaces that make them the choice animal in large and producible wounds creation.
To date, there is no established tool offering precise pressure delivery or a validated animal model for assessing the effect of pressure in HTS therapy. Here, a novel automated pressure delivery system (APDS) capable of delivering an adjustable steady pressure was designed and tested in red Duroc scar model. The aim of this work was to evaluate the suitability of APDS in studying and characterizing the effect of pressure application in red Duroc swine as a model of HTS therapy under controlled conditions.
Animal selection
Juvenile castrated male Duroc swine were handled according to facility standard operating procedures under the animal care and use program accredited by the Association for Assessment and Accreditation of Laboratory Animal Care International and Animal Welfare Assurance through the Public Health Service. All described animal work was reviewed and approved by the MedStar Health Research Institute's Institutional Animal Care and Use Committee.
Experimental design
Six red Duroc pigs were used for wound creation with a Zimmer dermatome (Zimmer, Ltd, Swindon, United Kingdom). On each flank, a 4 x 4 in (10.16 x 10.16 cm) wound was excised over the lateral thorax to a partial-thickness depth of 0.060 in (0.030 in x 2 passes) or a full-thickness depth of 0.090 in (0.030 in x 3 passes).
Wounds were dressed with Mepilex Ag (Monlylke, Gothenburg, Sweden) and changed regularly. Pain was managed by buprenorphine and fentanyl at the end of each procedure. Animals were examined at least twice daily to monitor pain, wound, or behavior changes. Animals were brought back weekly to the operating room, examined, and images and biopsy specimens collected. Punch biopsy specimens (3 mm) were taken pre-and postexcision at weekly assessments. Biopsy specimens were placed in formalin for histology or Allprotect Tissue Reagent (Qiagen, Valencia, Calif) for RNA and protein isolation.
After wound reepithelialization and scar development, an automated pressure delivery device was mounted on day 70 to scars. The treatment period lasted 2 weeks whereupon developed scars received pressure treatment (device/pressure at 30 mm Hg), sham treatment (device/no pressure), or no treatment at all (no device). Weekly assessments continued during and after pressure application.
Automatic pressure delivery system
Briefly, the APDS consisted of a set of linear actuators for pressure delivery to underlying tissue and force-sensitive resistors for pressure measurements. Wireless communication allowed for pressure recording and feedback control to ensure accurate pressure delivery. 23
Surgical mounting of the APDS and pressure recording
Animals were sedated using a combination of ketamine and xylazine delivered intramuscularly, followed by intubation and general anesthesia delivery. Animals were maintained on isoflurane, placed on a warming blanket, and ventilated during examination or device mounting.
A Plexiglass base was secured to surrounding skin using MYO/WIRE II Sternotomy Suture (A&E Medical Corporation, Durham, NC), followed by APDS attachment. Protective padding was applied and reinforced using Delta fiberglass casting material (BSN Medical, Charlotte, NC). A custom-fitted neoprene vest was then placed to ensure further protection of the animal and device. 24 Upon procedure completion, anesthesia was stopped and animals were brought back to the animal housing facility. Pressure recording started after animal recovery from anesthesia (Fig 2). Pressure boxes were removed temporarily after 1 week of pressure application (<3 hours) for scar assessment and biopsy specimen procurement and then permanently after 2 weeks of pressure application.
Imaging
At each assessment, wounds or scars received standard digital imaging and Laser Doppler Imaging (LDI). The amount of perfusion was calculated by LDI to produce a mean perfusion unit. Moor LDI software (v5.3; Moor Instruments, Devon, United Kingdom) was used for image capture and analysis of mean perfusion units.
Histology
Punch biopsy specimens were fixed in 10% formalin and embedded in paraffin. Paraffin blocks were sectioned (5 μm) and left to dry overnight. Slides were deparaffinized using xylene and dehydrated using an ethanol gradient. Staining was then performed using either hematoxylin and eosin (H&E) or the fluorescent dye DAPI. A Zeiss Axioimager microscope was used to view slides (Carl Zeiss, Oberkochen, Germany). Zeiss Zen Pro 2012 software (Carl Zeiss) was then used to capture digital images and conduct measurements.
Assessment of skin thickness and cellularity
Sections' images were used for gross examination, skin layer measurements, and quantification of cells. Epidermal thickness was measured as the distance from the surface of the skin to the dermal-epidermal junction (μm). Dermal thickness was measured as the distance from the dermal-epidermal junction to the first identifiable sign of hypodermis (μm). Hypodermis was identified by the presence of lobules of fat or loose connective tissue compared with dermal layers.
Cell quantification was performed using ImageJ software (v1.48; NIH, Bethesda, MD) to produce a percent cellularity per high-powered field. Ten high-powered fields per section were used for cell quantification.
Reproducible HTS in red Duroc swine requires full-thickness wounds
Six red Duroc swine had 4 x 4-in wounds created on their flanks with a dermatome. Two pigs received partial-thickness (0.06 in) wounds (n = 4) and 4 pigs received full-thickness wounds (n = 8). All partial-thickness wounds reepithelialized by day 7 and healed with no significant skin deformities (Fig 1, Table 1). Full-thickness wounds reepithelialized between 30 and 40 days (Fig 1).
Scar assessment using Vancouver Scar Scale (VSS) scores in all animals showed no significant differences prior to pressure therapy. Pressure-treated scars received lower VSS scores after 1 week of compression and significant decreases (P < .05) after 2 weeks compared with sham-treated and untreated scars (Tables 2 and 3). This effect was sustained on subsequent assessments following APDS removal (Fig 5).
Gross examination of the effects of APDS mounting
Wound assessments under anesthesia prior to APDS mounting were approximately 1 to 2 hours in duration. Mounting of the APDS device added approximately 2 additional hours. The most traumatic part of the APDS mounting process was securing the device base with sternal wire suture. At no point during the procedure was deterioration in vital signs noted.
No signs of abnormal animal behavior or distress were noted after the procedure. Transient signs of inflammation were noted at suture entry and exit points, but no significant skin damage or signs of infection were observed (Fig 2). Pressure boxes were removed once for no more than 2 hours to assess scar and collect biopsy specimens. Gross examination of sham-treated and untreated scars at days 70 and 84 postwounding showed similar scars, suggesting a negligible effect of the device mounting procedure on scar development.
APDS performance and the effects of pressure on gross HTS characteristics
Pressure recordings for each device were analyzed to evaluate the performance of the APDS. The data showed that all APDS devices maintained a targeted pressure level of 30 ± 4 mm Hg for more than 90% of the total pressure application duration. The variations in pressure outside of the targeted pressure range accounted for about 9% to 10% of the total pressure application time (Fig 3). These variations mostly ranged between 22-26 and 34-38 mm Hg and were transient, underscoring the rapid response of the system in correcting pressure changes caused by animal activity (Table 4). Out-of-range pressure incidents were more frequent above the desired range than below it at a ratio of approximately 5:1, suggesting that animal behavior accounted for pressure fluctuations rather than mechanical APDS deficiencies. Pressure readings greater than the targeted pressure range accounted for only 8.8% of all pressure readings, and readings exceeding 40 mm Hg only accounted for 2% of all readings. Pressure-treated scars showed clear differences at day 84 postwounding compared with pretreated scar at day 70 or relative to sham-treated scars at day 84. These differences primarily encompassed the pliability, height, and vascularity parameters of the VSS (Tables 2 and 3). Pressure-treated scars consistently showed significantly lower total VSS values during the treatment period (P < .05; Tables 2 and 3, Fig 4). Continuous observation of scar development for 4 weeks after removal of pressure showed persistently lower VSS values in pressure-treated HTS compared with sham (Fig 5).
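A minimal sketch of how the in-range, above-range, and below-range fractions of a pressure log could be computed is shown below; the log here is synthetic, and only the 30 ± 4 mm Hg window comes from the results reported above.

```python
import numpy as np

def pressure_summary(samples, target=30.0, tol=4.0):
    """Fraction of pressure samples (mm Hg) inside, above, and below target ± tol."""
    samples = np.asarray(samples, dtype=float)
    inside = np.mean(np.abs(samples - target) <= tol)
    above = np.mean(samples > target + tol)
    below = np.mean(samples < target - tol)
    return inside, above, below

# Synthetic 2-week log at one sample per minute, loosely mimicking the reported behavior
rng = np.random.default_rng(7)
log = 30 + 2 * rng.standard_normal(2 * 7 * 24 * 60)
print(pressure_summary(log))
```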
Pressure delivered using the APDS induces changes in dermal thickness of HTS
Thickness of the epidermis and the dermis was quantified from H&E-stained sections (Fig 6 a) of scars. Comparative analysis revealed a steady trend of reduced dermal thickness in pressure-treated scars (Fig 6 b). Significant differences between sham and untreated scars were noted at days 70 and 84 postwounding (P < .05 ; Fig 6 b). Assessment of changes in the epidermal layer showed nonspecific differences. These differences are probably due to heterogeneity of the epidermis across biopsy specimens (Fig 6 c). Note that the effect of applying pressure for 2 weeks caused persistent changes in the dermis for at least 4 weeks after APDS removal (Fig 6 b). Figure 5. Changes in scar assessment using Vancouver Scar Scale scoring compared on the basis of treatment modality. * Significant differences were noted past day 77 between pressure-treated and the sham-treated scars.
Pressure delivered to HTS using the APDS induce changes in the behavior of HTS cells
H&E-stained sections from pressure-treated, sham-treated, and untreated HTS biopsy specimens were examined under the microscope for quantifiable cellular changes during and after the treatment period (Fig 7 a) with results confirmed by DAPI fluorescent staining (not shown). Pressure-treated scars had a significant decrease in cellularity compared with both sham-treated and untreated scars 1 week into the treatment period (P < .05). However, cellularity increased significantly compared with its previous week as well as to sham-treated scars at day 84 and afterward (P < .05). Compared with untreated scars, pressure-treated scars had significantly lower cell counts after 1 week of treatment and up to 1 week after treatment (P < .05 ; Fig 7 b).
Assessment of pressure-treated HTSs using LDI showed an increase in tissue perfusion relative to sham-treated HTS
Evaluation of wound perfusion before and after APDS mounting using LDI showed differences between pressure-treated scars and the other arms of the study. While sham-treated and untreated scars produced laser Doppler images suggestive of no significant change, laser Doppler images of pressure-treated scars showed evidence of increased scar perfusion by day 84 (Fig 8 a). Further software-aided analysis confirmed a significant increase in scar perfusion (Fig 8 b) relative to sham-treated and untreated scars during the treatment period.
DISCUSSION
Compression is commonly used in HTS therapy without clear understanding of its influence on scar pathophysiology. Varying garment elasticity over time and patient commitment are major hurdles to effectively study pressure therapy. 25 The inability to reliably quantify pressure delivery has often resulted in suboptimal pressure application in scar treatment. A pressure delivery system capable of delivering controllable and precise pressure doses to scars 23 was engineered and tested in a red Duroc model of HTS. 22,26 The system featured wireless real-time recording of pressure and minimal restriction of animal mobility. This enabled unimpeded characterization of device function that revealed additional information about the relation of animal behavior and delivery of steady pressure.
A standard protocol to generate reproducible HTSs was critical to the evaluation of the APDS. Deeper full-thickness wounds were required for HTS in this red Duroc model. The newly generated tissue was typical of an HTS such that it was raised, contracted, swollen, and less pliable than the uninjured skin.
The summative decrease in epidermal and dermal layer thickness was less than the overall decrease in scar height following pressure application, suggesting that subdermal layers must be considered in determining the effect of pressure on HTS and skin thickness dynamics.
The decrease in height may result from loss of local fluid, cells, and/or ECM. The immediate effect of pressure application is reduction in blood flow, which will cause marginal reduction in scar volume. However, the final decrease in scar height and the persistence of changes in scar tissue following removal of pressure suggest the involvement of more complex mechanisms. 27,28 Hypoxia has been reported to induce changes in cell proliferation, differentiation, and survival in different cell lines. 29 Similarly, mild hypoxic conditions correlated with modulation of cell metabolism and secretome. 30 It has been proposed that the hypoxic conditions induced by pressure result in cellular changes that ultimately reduce collagen deposition and decrease scarring. 31 While this hypothesis might be valid, further investigations are still needed for direct evidence and specific mechanisms, given the variety of cell and tissue types used to generate these results and the diverse and dynamic cellular content of scar.
The significant variations in total cell count and the associated decrease in scar height after pressure application observed in this work suggest changes in cell behavior and/or the homeostatic balance of cell types in scars under pressure. Cellularity changes reported in treatment groups testify to the complex interactions involved. Changes in the total cellularity of scars undergoing compression are influenced by compounded variables such as the described decreases in apoptosis rates in HTSs 32 and increased rates of cell apoptosis and upregulation of IL-1β and TNF-α described in vitro in response to compression (35 mm Hg/24 h). 33 Shifts in cell-type balance in pressure-treated scars are possible. Such changes would not be distinguished in the cell counting method used here. Changes in ECM are a natural correlate of shifts in cell count, type, and activity. Further work is underway to identify and characterize changes in cell activities and cell types upon pressure application on scars.
The neovascularization enhancement is consistent with reports of increased vasculogenesis under hypoxic conditions. 34 Interestingly, this also associates with changes in the cellular secretome, which could be a means of modulating ECM deposition. The increased perfusion in pressure-treated scars noticed after removal of the APDS might differ from perfusion levels under compression. This increase in perfusion persisted for at least 2 hours after the APDS was removed and returned to levels similar to those of sham-treated scars after 1 week. This suggests that the perfusion changes are directly related to pressure and that the mechanisms regulating perfusion homeostasis are preserved in HTS.
"year": 2015,
"sha1": "9a2a65784dd131a12673cdcea28bc881dbf16030",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9a2a65784dd131a12673cdcea28bc881dbf16030",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7592630 | pes2o/s2orc | v3-fos-license | Immune modulation by molecular cancer targets and targeted therapies
Following our recent observation that alterations of the Natural Killer (NK) cell compartment in the presence of BCR-ABL-induced myeloproliferation fail to revert under targeted therapy, we discuss by what mechanisms oncogenic molecular pathways and their pharmacological inhibition may interfere with immune functions. Rational combinations of molecularly targeted and immunological strategies may provide a means for more effective cancer targeting.
Natural killer (NK) cells are attractive effector cells of therapeutic antitumor immune activation, since they have potent cytolytic effector function and can amplify adaptive immune responses by interacting with various immune effector and antigenpresenting cells. Functional and immunogenetic studies suggest that autologous NK cells contribute to immune surveillance of both leukemias and solid tumors. 1,2 Conversely, suppression of NK-cell mediated effector function can contribute to tumor escape from immune control.
In our recent article published in Leukemia, 3 we have reported that the NK cell compartment is impaired in chronic myelogenous leukemia (CML). We have identified profound quantitative and functional NK cell defects in patients with newly diagnosed CML. These aberrations were mimicked in a mouse model of CML-like myeloproliferation induced by BCR-ABL expression in hematopoietic stem cells. Thus, the disease-driving aberrant tyrosine kinase activity appears to be involved in NK cell dysfunction. Since NK cells in CML are not part of the malignant clone, we concluded that the malignant environment is likely responsible for the observed alterations.
Remarkably, targeted therapy with imatinib failed to restore NK cell function even in patients achieving molecular remissions.
This raises the question to what extent and in what way the drug may interfere with the immune system.
Imatinib is the most prominent example for a new generation of anticancer drugs. Progress in understanding the molecular architecture and functional circuits of cancer cells has led to clinical evaluation of agents that selectively target relevant signaling pathways. By inhibiting the aberrant BCR-ABL tyrosine kinase, imatinib has revolutionized the treatment of CML. 4 Development of molecularly targeted drugs has focused on their direct effects on disease-driving and disease-related pathways in tumor cells. More recently, there is an increasing awareness of the significance of the surrounding nonmalignant stroma for tumor development and sustenance. Stroma besides fibroblasts and cells of vascular structures includes various components of the immune system, and evidence is now accumulating that targeted drugs can modulate immune functions in a complex manner. Besides imatinib and the follow-up compounds dasatinib and nilotinib, various inhibitors of receptor-induced and intracellular pathways (e.g., AKT/mTOR) as well as drugs that target epigenetic transcriptional control were found to exert differential effects on individual immune cell subpopulations and antigen-specific effector cells.
These observations are not unexpected, since molecular targeting is not cancer-specific. Pathways involved in oncogenic receptor and intracellular tyrosine kinase signaling are also important for the functionality, proliferation and survival of immune effector cells. 5 Moreover, drugs generally interact with more than one pathway. For example, imatinib is also active against c-KIT and platelet-derived growth factor receptor (PDGFR), and the newer tyrosine kinase inhibitors have even broader activity. Therefore, systemic treatment with molecularly targeted agents acts not only on the tumor cells, but also affects various other cell types (Fig. 1). Imatinib again, by its interaction with the c-KIT receptor, was shown to interfere with the licensing of c-KIT-expressing DCs to activate resting NK cells in vivo. 6 Recent evidence further illustrates that therapeutic inhibition of oncogenic pathways can affect tumor-host interactions by altering the immunogenicity and immunomodulatory function of targeted tumor cells. The efficacy of imatinib in a mouse model of gastrointestinal stroma tumors (GIST) critically depended on CD8 + T cells, suggesting that the drug in this model acts at least in part by amplifying a preexisting T cell response. 7 Inhibition of oncogenic Kit expression in GIST cells resulted in reduced expression of the immunomodulatory molecule IDO, subsequent activation of intratumoral CD8 + T cells and apoptosis of regulatory T cells (T reg cells). In CML blasts, by contrast, imatinib was shown to reduce immunogenicity by interfering with the BCR-ABL-induced upregulation of immunogenic antigens and thereby to impair antigen-specific T cell responses. 8 What is the relevance of interference by molecularly targeted drugs with the tumor-interacting immune functions?
Despite the efficacy of tyrosine kinase inhibitors to induce and maintain remissions, CML patients are rarely cured by these drugs. BCR-ABL expressing leukemic stem cells persist even in the sustained absence of detectable molecular residual disease 9 and may reinitiate leukemic growth. By contrast, immunological graft-vs.-leukemia reactions in the context of allogeneic hematopoietic stem cell transplantation can eradicate the disease. Moreover, in mouse models, an intact immune system was required for sustained tumor regression upon inactivation of disease-driving oncogenes. 10 Thus, molecularly targeted drugs should be selected not only for their direct, on-target antitumor effects, but also for their capacity to interfere with immune escape, activate various components within the immune microenvironment and initiate and maintain potent and sustained anticancer immune responses. This requires detailed studies of the effects of drug treatment on immune effector cells and dissection of the mechanisms by which molecular pathways affect tumor-host immune responses. Moreover, the potential of combined molecular and immune targeting will need to be explored. Preferably, the effects of combination therapies should be studied within the tumor microenvironment where the most relevant tumor-immune interactions are likely to take place. The mouse model that we used in our study closely mimics BCR-ABL-driven myeloproliferation and proved suitable for studying the role of aberrant tyrosine kinase expression in NK cell dysfunction. However, its application to the investigation of tyrosine kinase inhibition on the immune system is limited by differences in clonal origins and target expression between patients and mice. The generation of adequate models for studying these interactions remains a challenge.
We conclude that the effects of molecularly targeted agents on the immune system deserve close attention. Strategies should be explored that target diseasedriving pathways in tumor cells while at the same time suppressing immune escape mechanisms, inducing tumorspecific immune activation inside tumors and thereby providing long-term immune control over residual cancer cells. | 2016-05-12T22:15:10.714Z | 2012-05-01T00:00:00.000 | {
"year": 2012,
"sha1": "81406fd9421940d1f9435fdb93d48e692352694b",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/onci.18401?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "81406fd9421940d1f9435fdb93d48e692352694b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1309628 | pes2o/s2orc | v3-fos-license | Zbtb16 has a role in brown adipocyte bioenergetics
Objective: A better understanding of the processes influencing energy expenditure could provide new therapeutic strategies for reducing obesity. As the metabolic activity of the brown adipose tissue (BAT) and skeletal muscle is an important determinant of overall energy expenditure and adiposity, we investigated the role of genes that could influence cellular bioenergetics in these two tissues. Design: We screened for genes that are induced in both the BAT and skeletal muscle during acute adaptive thermogenesis in the mouse by microarray. We used C57BL/6J mice as well as the primary and immortalized brown adipocytes and C2C12 myocytes to validate the microarray data. Further characterization included gene expression, mitochondrial density, cellular respiration and substrate utilization. We also used a Hybrid Mouse Diversity Panel to assess in vivo effects on obesity and body fat content. Results: We identified the transcription factor Zbtb16 (also known as Plzf and Zfp14) as being induced in both the BAT and skeletal muscle during acute adaptive thermogenesis. Zbtb16 overexpression in brown adipocytes led to the induction of components of the thermogenic program, including genes involved in fatty acid oxidation, glycolysis and mitochondrial function. Enhanced Zbtb16 expression also increased mitochondrial number, as well as the respiratory capacity and uncoupling. These effects were accompanied by decreased triglyceride content and increased carbohydrate utilization in brown adipocytes. Natural variation in Zbtb16 mRNA levels in multiple tissues across a panel of >100 mouse strains was inversely correlated with body weight and body fat content. Conclusion: Our results implicate Zbtb16 as a novel determinant of substrate utilization in brown adipocytes and of adiposity in vivo.
INTRODUCTION
The increasing prevalence of obesity worldwide reflects changes in lifestyle, including a combination of increased food intake and reduced physical activity. [1][2][3] This combination contributes to the accumulation of an energy surplus in the form of adipose tissue. Because obesity develops when energy intake exceeds energy expenditure, increasing energy expenditure is an attractive strategy to reduce body weight and fat storage. There has been considerable interest in modulating thermogenesis, particularly in the skeletal muscle and brown adipose tissue (BAT), as a target for the treatment of obesity. [4][5][6] Skeletal muscle is one of the central tissues involved in shivering thermogenesis and also contributes to non-shivering thermogenesis. 5,7,8 Skeletal muscle is responsible for at least 30% of the basal metabolic energy expenditure and 80% of wholebody glucose uptake. 9 Because skeletal muscle accounts for approximately 30-40% of total body mass, it may contribute substantially to adaptive thermogenesis in humans. 5 Recently, the existence of BAT in humans has been reappraised and there is good evidence that brown fat depots are active in adults and are capable of energy dissipation through adaptive thermogenesis. [10][11][12][13] Furthermore, body-mass index and percentage of body fat are inversely correlated with the BAT in humans 12 , suggesting that the BAT contributes to metabolic rate and is an important regulator of body fat accumulation. Further evidence for a shared physiological function between brown adipocytes and myocytes is the fact that they are derived from a common cell lineage. 14,15 Increased heat production through decreased coupling between fatty acid oxidation and ATP synthesis has been observed during both diet-induced and cold-induced adaptive thermogenesis. 16,17 During adaptive thermogenesis, a significant amount of energy stored as fat and glycogen is converted to heat instead of being efficiently converted to ATP, suggesting that induction of mitochondrial respiration or the level of uncoupling activity could promote caloric dissipation.
Because of its potential for energy dissipation, it is of great interest to identify the genes that orchestrate the changes occurring during adaptive thermogenesis and to determine their mechanisms of action. One key factor in adaptive thermogenesis of the BAT is the mitochondrial uncoupling protein-1 (Ucp1). Ucp1 is responsible for enabling the proton leak in mitochondria that dissipates energy produced through oxidative metabolism to generate heat. [17][18][19] Previous systematic studies have identified genes that are induced in BAT after cold exposure in mouse, rat and squirrel, [20][21][22][23][24][25][26] and a recent transcriptome-profiling study identified several genes that are induced in the skeletal muscle and BAT in mice. 27 We hypothesized that understanding the function of genes promoting energy expenditure would reveal new targets for modulation of energy balance. Genes differentially expressed in both the BAT and skeletal muscle could be of great importance because they are indicative of a general thermogenic response not limited to the BAT-specific uncoupling or skeletal musclespecific shivering. We performed transcriptional profiling of the BAT and skeletal muscle during acute cold exposure in mice, a model of adaptive thermogenesis. We identified genes that were induced in both the BAT and skeletal muscle following cold exposure. One of the significantly upregulated genes is the transcription factor Zbtb16, which belongs to the POZ/domain and Krü pel zinc finger family and generally acts as a repressor. [28][29][30] Here, we demonstrate that Zbtb16 levels influence cellular bioenergetics in vitro and inversely correlate with adiposity in vivo.
MATERIALS AND METHODS
Mice
Animal studies were performed under approved UCLA animal research protocols and according to guidelines established in the Guide for Care and Use of Laboratory Animals. C57BL/6J mice were maintained in 12-h light/dark conditions and fed a regular chow diet (Purina 50010, Lab Diet, Richmond, IN, USA). Cold exposure was performed as described. 31 After euthanasia, tissues were collected and frozen until use.
Microarray analysis
Total RNA was isolated from mouse tissues by extraction with TRIzol (Invitrogen, Carlsbad, CA, USA). Samples were then hybridized to Illumina 24K BeadChips using standard protocols (The Southern California Genotyping Consortium at UCLA). Differentially expressed genes were identified by applying a Student's t-test and fold-change thresholds. The microarray data can be accessed from the NCBI Gene Expression Omnibus (GEO) database.
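A minimal sketch of the screening criterion described above (a Student's t-test combined with a fold-change threshold, here twofold as used in the Results) is shown below; the significance cutoff of 0.05 is an assumption, as the methods do not state it, and the arrays are synthetic.

```python
import numpy as np
from scipy import stats

def differential_genes(expr_cold, expr_ctrl, fold=2.0, alpha=0.05):
    """
    expr_cold, expr_ctrl: arrays of shape (n_genes, n_samples) with normalized
    intensities. Returns boolean masks of up- and downregulated genes using an
    unpaired t-test and a fold-change threshold.
    """
    t, p = stats.ttest_ind(expr_cold, expr_ctrl, axis=1)
    ratio = expr_cold.mean(axis=1) / expr_ctrl.mean(axis=1)
    up = (p < alpha) & (ratio >= fold)
    down = (p < alpha) & (ratio <= 1.0 / fold)
    return up, down

# Example with 1000 synthetic genes and 4 arrays per condition
rng = np.random.default_rng(3)
ctrl = rng.gamma(5.0, 20.0, size=(1000, 4))
cold = ctrl * rng.choice([1.0, 3.0, 0.3], size=(1000, 1), p=[0.9, 0.05, 0.05])
up, down = differential_genes(cold, ctrl)
print(up.sum(), down.sum())
```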
Cell culture and over-expression
Primary brown adipocytes were cultured as described previously. 31 C2C12 cells (ATCC, Manassas, VA, USA) were grown in complete DMEM (Dulbecco's modified Eagle's medium; 10% fetal bovine serum, 25 mM glucose, 1 mM pyruvate and 1 mM glutamine). For differentiation, serum was switched to 2% horse serum for 6 days. The immortalized brown adipocyte cell line was a gift from Bruce Spiegelman and was cultured as described. 32 Mouse Zbtb16 cDNA was cloned into pcDNA3.1/V5-His vector (Invitrogen) and transfected with BioT (Bioland, La Palma, CA, USA). Empty vector was used as control. For transduction, an adenovirus was constructed from the Zbtb16 cDNA and the AdEasy XL Adenoviral Vector System (Stratagene, Santa Clara, CA, USA). Viral particles were purified with Adeno-X Maxi Purification Kit (Clontech, Mountain View, CA, USA) and titered with Adeno-X Rapid Titer Kit (Clontech). Transduction (MOI 50) was performed at day 3 post-confluence and gene expression analyzed at day 6.
Quantitative real-time (qRT-PCR) gene expression analyses
Total RNA was isolated from mouse tissues by extraction with TRIzol (Invitrogen). For human tissues, a cDNA panel was purchased from Clontech. cDNA synthesis, real-time PCR, and qRT-PCR were performed as described previously. 33 Values are expressed in arbitrary units, normalized to housekeeping genes. All primer sequences used in this study are presented in Supplementary Table S1.
Triglyceride (TAG) contents and lipolysis assay
Cells were homogenized by sonication and TAG contents determined with the L-Type triglyceride M kit (Wako Chemicals, Richmond, VA, USA). Lipolysis experiments from cell medium were performed after 10 h incubation with the Adipolysis Assay Kit (AB100, Millipore, Billerica, MA, USA). TAG contents and glycerol concentrations were normalized to protein.
Cellular oxygen consumption and extracellular acidification rates
Cellular metabolic rates were measured using an XF24 Analyzer (Seahorse Bioscience, Billerica, MA, USA). Immediately before the measurement, cells were washed with unbuffered DMEM as described. 34 Plates were placed into the XF24 instrument for measurement of oxygen consumption and extracellular acidification (ECAR) rates. Mixing, waiting and measurement times were 3, 2, and 3 min for C2C12 and 5, 2 and 1.5 min for brown adipocyte cells, respectively. Measurements were normalized to protein.
Test compounds were obtained from Sigma and injected during the assay at the following final concentrations: 0.75 or 0.5 mM oligomycin, 0.5 or 0.4 mM FCCP (carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone), 0.75 or 0.15 mM rotenone/myxothiazol (differentiated brown adipocyte cells or C2C12, respectively), 25 mM glucose, 100 mM 2,4-dinitrophenol, 50 mM Na-oxamate, and 200 mM 2-deoxyglucose. The basal and percentage of oxygen consumption rate and ECAR levels, as well as the area under the curve, were obtained with the XF24 Analyzer software.
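A minimal sketch of that post-processing (protein normalization, rates as a percentage of basal, and area under the curve) is shown below; the time points, OCR values and protein amount are invented for illustration and do not reproduce the instrument software.

```python
# Illustrative post-processing of an OCR trace; all numbers are made up.
import numpy as np

time_min = np.array([0.0, 7.0, 14.0, 21.0, 28.0])   # measurement times (min)
ocr = np.array([210.0, 205.0, 95.0, 90.0, 30.0])    # pmol O2/min per well
protein_mg = 0.05                                   # protein per well (mg)

ocr_per_protein = ocr / protein_mg                  # normalize to protein
percent_of_basal = 100.0 * ocr / ocr[0]             # rates as % of the basal rate
auc = np.trapz(ocr, time_min)                       # area under the OCR curve
```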
Mitochondrial DNA content
Total genomic DNA from the BAT tissue was isolated by phenol/chloroform/isoamyl alcohol extraction. Mitochondrial and nuclear DNA were amplified by RT-PCR with 25 ng of DNA and primers in the D-Loop region and Tert gene, respectively (sequences in Supplementary Table S1).
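The relative mitochondrial DNA content implied by this assay is the abundance of the D-loop amplicon over the nuclear Tert amplicon; a hedged sketch of that calculation from qPCR Ct values is given below, assuming comparable amplification efficiencies and using invented numbers, since the exact quantification formula is not stated.

```python
# Illustrative only: mtDNA copies per nuclear genome copy from qPCR Ct values,
# assuming ~100% amplification efficiency for both amplicons.
def mtdna_content(ct_dloop: float, ct_tert: float) -> float:
    return 2.0 ** (ct_tert - ct_dloop)

# e.g. mtdna_content(14.2, 22.5) -> roughly 315 mitochondrial copies per nuclear copy
```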
Substrate utilization with Biolog microplates
PM-M TOX1 microplates (Biolog, Hayward, CA, USA), consisting of eight different substrates in replicate, were used. Differentiated brown adipocyte cells were seeded at 60 000 cells per well in complete medium without pyruvate, glucose or phenol red. After 24 h, Biolog Redox Dye MA was added and rates measured every hour at 590 nm to detect the metabolic activity. Time 0 was subtracted for each time point. The reductase activity (NADH) is proportional to the energy produced from each substrate.
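To make the kinetic readout concrete, here is a small sketch of the time-0 subtraction and per-hour rate estimation described above; the substrate names kept here (glucose, xylitol) match the panel, but the absorbance readings are invented.

```python
# Illustrative baseline subtraction and rate estimation for Biolog OD590 kinetics.
import numpy as np

readings = {
    "glucose": np.array([0.10, 0.18, 0.27, 0.36, 0.45]),  # OD590, hourly
    "xylitol": np.array([0.10, 0.14, 0.19, 0.23, 0.28]),
}
hours = np.arange(5)

for substrate, od in readings.items():
    corrected = od - od[0]                       # subtract the time-0 reading
    rate = np.polyfit(hours, corrected, 1)[0]    # OD590 units per hour
    print(f"{substrate}: {rate:.3f} OD590/h")
```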
Hybrid Mouse Diversity Panel
Mouse strains, RNA isolation and expression profiling were described previously. 35 Briefly, at 16 weeks of age, 4-6 mice per strain were assessed for body composition using an EchoMRI instrument (EchoMRI, Houston, TX, USA). Mice were fasted overnight before plasma and tissue collection. Tissues were weighed and then snap frozen in liquid nitrogen until RNA, cRNA and cDNA were prepared. Biotin-labeled cRNA was generated from the cDNA and used to probe Affymetrix Mouse Genome HT_MG430A arrays (4-6 mice per strain). Expression data (probe 1442025_a_at) and phenotypic traits are presented in Supplementary Table S2.
Statistical analysis
The data are expressed as the mean ± s.d., except for the XF24 Seahorse Bioscience analysis where s.e. was used. Unpaired Student's t-test was used to compare the difference between the groups.
Identifying genes differentially regulated in adaptive thermogenesis
We assayed the transcriptomes of mice exposed to cold using gene expression microarrays to identify genes differentially expressed during thermogenesis. In the BAT, 124 genes were upregulated and 176 were downregulated at least twofold in response to cold exposure. In the skeletal muscle (quadriceps), 69 genes were upregulated and 67 were downregulated at least twofold in response to cold exposure. We detected 11 genes that were altered by cold exposure in both the BAT and skeletal muscle, 5 of which were coordinately upregulated and 6 downregulated (Table 1). The transcription factor Zbtb16 (also known as Plzf and Zfp145) exhibited a robust induction in both the BAT (5.6-fold) and skeletal muscle (4.8-fold). Zbtb16 has been associated with white adipogenesis 36 but has not previously been implicated in the BAT or muscle metabolism. We selected this gene to further characterize its function.
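The tissue-overlap step amounts to intersecting per-tissue gene lists; a toy sketch is shown below, where all gene symbols other than Zbtb16 are hypothetical placeholders.

```python
# Toy example: genes coordinately regulated in both BAT and skeletal muscle.
bat_up = {"Zbtb16", "GeneA", "GeneB"}          # placeholder symbols except Zbtb16
muscle_up = {"Zbtb16", "GeneB", "GeneC"}
bat_down = {"GeneD", "GeneE"}
muscle_down = {"GeneD", "GeneF"}

coordinately_up = bat_up & muscle_up           # -> {"Zbtb16", "GeneB"}
coordinately_down = bat_down & muscle_down     # -> {"GeneD"}
shared = coordinately_up | coordinately_down   # genes altered in both tissues
```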
Validation of Zbtb16 induction in adaptive thermogenesis
We first confirmed that Zbtb16 expression is consistently upregulated by cold exposure at the mRNA and protein levels in an independent group of mice. As shown in Figure 1a, Zbtb16 mRNA levels were increased 15.3-fold in the BAT and 2.1-fold in the muscle as detected by qPCR. Zbtb16 mRNA induction by cold was observed after 4 or 8 h and whether mice were fasted or not during the cold exposure (data not shown). Western blot analysis confirmed a concomitant increase in ZBTB16 protein levels (Figure 1a, bottom inserts). Thus, the upregulation of Zbtb16 during adaptive thermogenesis was validated at both the transcript and protein levels.
Tissue expression pattern of Zbtb16 in mouse and human
We then investigated whether the BAT and skeletal muscle were the most biologically relevant tissues for studying Zbtb16 function by comparing mRNA expression levels in mouse and human tissue panels by qPCR. The highest Zbtb16 expression levels were detected in the skeletal muscle and heart for both mouse and human (Figure 1b). In contrast to what Mikkelsen et al. 36 reported, we detected a lower expression level in the mouse BAT than in the skeletal muscle. However, the levels of ZBTB16 protein measured by immunoblot in the white adipose tissue and BAT were similar to those of the heart and skeletal muscle (Figure 1b, insert). Within the muscle, the expression of Zbtb16 was similar in fast twitch (extensor digitorum longus and tibialis), slow twitch (soleus) and mixed (quadriceps and gastrocnemius) muscles (Figure 1c), suggesting that Zbtb16 may function across muscle fiber types.
Expression of Zbtb16 during brown adipocyte and myocyte differentiation
We then studied Zbtb16 function in vitro using a mouse brown adipocyte cell line 32 and the mouse C2C12 myocyte cell line. We first assessed Zbtb16 expression dynamics during differentiation of brown adipocyte precursors into mature brown adipocytes. Zbtb16 expression was dramatically upregulated 60-fold during brown adipocyte differentiation, whereas Ucp1 showed a 30-fold induction (Figure 1d). By contrast, Zbtb16 mRNA levels did not change during differentiation of C2C12 myoblasts to myotubes, whereas myogenin, used as control, was dramatically induced. The increased mRNA levels in brown adipocytes were mirrored by a substantial increase in ZBTB16 protein levels (Figure 1d, bottom inserts). The induction of Zbtb16 during brown adipogenesis is consistent with a role for this protein in mature brown adipocytes.
Zbtb16 expression activates the brown adipocyte gene expression program
To begin to elucidate the effect of Zbtb16 induction during thermogenesis, we overexpressed Zbtb16 in brown adipocyte cells using an adenoviral vector carrying Zbtb16 cDNA. We achieved roughly sixfold overexpression of Zbtb16 mRNA (Figure 2a), which led to alterations in expression of genes involved in brown adipocyte differentiation and mitochondrial function. Indeed, compared with cells infected with a control lacZ-containing adenovirus, cells overexpressing Zbtb16 exhibited significant upregulation of brown fat-enriched markers such as Ucp1, Ppargc1a, Ppara and Prdm16 (PR domain containing 16; Figure 2b). Enhanced Zbtb16 expression also stimulated the expression of genes promoting fatty acid oxidation, such as Fabp3 (fatty acid-binding protein 3), Fabp5 (also known as Mal1) and Cpt1b (carnitine palmitoyltransferase 1b). Zbtb16 expression also elevated expression levels of mitochondrial gene markers such as Tfam (mitochondrial transcription factor A), Elovl3 (elongation of very long chain fatty acids-like 3), Cidea (cell death-inducing DFFA-like effector a), Cox7a1 (cytochrome c oxidase, subunit VIIa 1) and Cox8b (cytochrome c oxidase, subunit VIIIb). To confirm these effects observed in an immortalized brown adipocyte cell line, we performed similar studies in primary brown adipocytes. Zbtb16 adenoviral expression induced UCP1 protein levels (Figure 2c) and increased expression of brown fat, fatty acid oxidation and mitochondrial genes (Figure 2d). Thus, Zbtb16 expression promotes a gene expression signature characteristic of active brown adipocytes and enhanced fatty acid oxidation.
Zbtb16 enhances mitochondrial respiration and mitochondrial biogenesis
The effect of Zbtb16 on mitochondrial gene expression described above suggested that it may promote changes in brown adipocyte energetics. To assess this, we measured cellular respiration in brown adipocytes by real-time detection of oxygen consumption. Using an XF24 Seahorse Bioscience instrument, we determined the level of different components of mitochondrial respiration contributing to the oxygen consumption using specific effectors. First, we determined the basal and coupling state of mitochondrial respiration with sequential injections of oligomycin and rotenone/myxothiazol. Oligomycin (F1F0-ATP synthase inhibitor) allows the measurement of ATP-linked oxygen consumption (ATP turnover), while rotenone/myxothiazol (complex I and complex III inhibitors, respectively), by blocking mitochondrial respiration, reveals the non-mitochondrial respiration and the mitochondrial uncoupling (the difference between the oligomycin and rotenone/myxothiazol injections). Cells were 'activated' 2 h before the first measurement with 10 nM of the adrenoceptor agonist CL316,243. In both the immortalized and primary brown adipocytes, Zbtb16 overexpression did not alter basal mitochondrial respiration, but showed higher mitochondrial uncoupling at the expense of coupled respiration (Figures 3a and b). C2C12 cells can also be 'activated' by beta-adrenergic stimulation when they are differentiated into myotubes. 37 When we pretreated the myotubes for 1 h with 100 nM isoproterenol, we also observed a significant increase in mitochondrial uncoupling owing to Zbtb16 expression (Figure 3c). We also assessed mitochondrial reserve capacity by injecting the electron transport uncoupler FCCP. Brown adipocytes (primary and immortalized) and C2C12 cells over-expressing Zbtb16 exhibited a higher FCCP response, indicating more mitochondrial respiratory capacity (Figure 3d). Altogether, these results demonstrate that Zbtb16 is able to promote proton leak during mitochondrial respiration and increase mitochondrial respiratory capacity.
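The partitioning logic just described can be written out explicitly; the sketch below assumes three plateau OCR values read off the trace (basal, after oligomycin, after rotenone/myxothiazol), and the numbers in the usage line are invented.

```python
# Partition an OCR trace into respiration components using the effector logic above.
def partition_respiration(basal, post_oligomycin, post_rotenone_myxothiazol):
    non_mitochondrial = post_rotenone_myxothiazol        # respiration left after blocking the chain
    atp_linked = basal - post_oligomycin                 # ATP turnover
    proton_leak = post_oligomycin - non_mitochondrial    # mitochondrial uncoupling
    mitochondrial_basal = basal - non_mitochondrial
    return {
        "ATP-linked": atp_linked,
        "proton leak": proton_leak,
        "mitochondrial basal": mitochondrial_basal,
        "non-mitochondrial": non_mitochondrial,
    }

# e.g. partition_respiration(200.0, 80.0, 20.0)
# -> {'ATP-linked': 120.0, 'proton leak': 60.0, 'mitochondrial basal': 180.0, 'non-mitochondrial': 20.0}
```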
The enhanced expression of mitochondrial gene markers and mitochondrial respiration capacity suggested a potential effect of Zbtb16 on mitochondrial biogenesis. To assess mitochondrial density, we measured the relative abundance of the mitochondrial-encoded D-loop region to the nuclear-encoded telomerase reverse transcriptase (Tert) gene by qPCR. Zbtb16 expression resulted in a threefold increase in mitochondrial DNA in brown adipocytes and a 1.7-fold increase in C2C12 myotubes (Figure 3e).
We next asked whether the increased mitochondrial respiration owing to Zbtb16 overexpression affects lipolysis or lipid content in brown adipocytes. Lipolysis rates in brown adipocytes were not affected by Zbtb16 expression, whereas forskolin elicited the expected increase (Figure 3f). However, intracellular TAG content was decreased by 40% in Zbtb16 overexpressing cells (Figure 3g). Together, these data demonstrate that Zbtb16 promotes biogenesis and respiration of mitochondria, and that higher expression of Zbtb16 reduces cellular TAG levels in brown adipocytes.
Zbtb16 increases glucose oxidation
We then investigated whether Zbtb16 affects substrate utilization. Control brown adipocytes or Zbtb16 overexpressing cells were plated in Biolog microplates in which each well contains one of eight specific energy substrates. All the eight substrates were metabolized by brown adipocytes (Figure 4a and Supplementary Figure S1). Overexpression of Zbtb16 specifically increased utilization of glucose (glycolysis pathway) and xylitol (pentose phosphate pathway), while metabolism of other substrates was not affected by Zbtb16.
The preference in carbohydrate utilization suggests that Zbtb16 may influence glucose metabolism. Therefore, we assayed the expression of genes controlling glycolysis in response to Zbtb16 overexpression using qPCR. Levels of Glut4 mRNA were not changed, but Hexkin2 (hexokinase II) and Pkm2 (pyruvate kinase) expression levels were increased by Zbtb16 (Figure 4b). In addition, expression of Pdk4 was decreased. Increased glucose utilization and the differential expression of glycolysis-specific enzymes support a role for Zbtb16 in glucose consumption.
We then assessed the effects of Zbtb16 overexpression on glucose metabolism by quantitating a measure of glycolysis, the ECAR. Addition of glucose or the mitochondrial uncoupler 2,4-dinitrophenol increased ECAR in differentiated brown adipocytes. This response was amplified in cells with increased Zbtb16 expression, suggesting a greater ability to increase glycolysis (Figure 4c). We therefore exposed the brown adipocytes to different modulators of glycolysis and measured the ECAR response. Cells overexpressing Zbtb16 had a greater response to treatment with oligomycin, which inhibits mitochondrial ATP production, suggesting an increased glycolytic capacity when ATP demand increases (Figure 4d). When treated with the glucose analog 2-deoxyglucose, which inhibits glycolysis, the effect was slightly less pronounced in cells expressing Zbtb16 (Figure 4d). We also performed experiments with Na-oxamate, an inhibitor of lactate dehydrogenase, which catalyzes the conversion of pyruvate to lactic acid. Na-oxamate did not significantly inhibit ECAR in brown adipocytes (~7%), suggesting that the final step of glycolysis does not account for a substantial amount of the glycolysis detected by the ECAR and that anaerobic glycolysis is not very active in these cells. Nevertheless, this modest inhibition was abrogated when Zbtb16 was overexpressed (Figure 4d). Together, these results suggest that Zbtb16 promotes glucose oxidation, consistent with the increased mitochondrial respiration and lower Pdk4 expression.
Zbtb16 expression correlates with body weight, fat mass and diabetes in vivo
As described above, our data indicate that Zbtb16 is involved in the adaptive thermogenesis response and promotes mitochondrial respiration and carbohydrate utilization in brown adipocytes. To assess the role of Zbtb16 in energy balance in vivo, we assessed whether genetic variations in Zbtb16 gene expression levels are correlated with clinical traits. For these studies, we examined Zbtb16 expression levels in the Hybrid Mouse Diversity Panel. 35 This resource consists of more than 100 highly diverse inbred and recombinant inbred mouse strains, which allow the analysis of association between gene expression levels and metabolic traits such as plasma lipids, plasma hormones and different body fat parameters. 35,38 We investigated the correlations between Zbtb16 expression levels and obesity traits in the two tissues available in this panel that are most relevant to our study: as the BAT and skeletal muscle were not available in the Hybrid Mouse Diversity Panel, we examined the white adipose tissue and heart. The most robust associations for Zbtb16 were inverse correlations with body weight and fat mass (measured by nuclear magnetic resonance and weight) in both tissues (Table 2 and Supplementary Figure S2). The inverse correlation for fat mass included the gonadal, retroperitoneal, femoral and mesenteric fat pads. This finding demonstrates that common, natural variations in Zbtb16 expression levels are correlated with body fat content in the mouse.
Given the association of Zbtb16 mRNA levels with body weight and fat mass, we also assessed whether ZBTB16 expression levels are associated with fat mass in humans as well. Data mining from previously reported microarray analyses revealed that in the visceral adipose tissue of obese diabetic women, ZBTB16 expression was significantly lower than in age- and body-mass index-matched normal glucose-tolerant controls (t-test P-value = 0.0056; GEO accession GDS3665). This result demonstrates that in humans, ZBTB16 expression levels are associated with glucose levels, suggesting a complex interrelationship between ZBTB16 expression, glucose and obesity.
DISCUSSION
Manipulation of cellular bioenergetics is an attractive approach to increase cellular energy expenditure with the ultimate goal of decreasing obesity at the whole-body level. Mitochondrial respiration, primarily in the BAT and skeletal muscle, can be modified in order to respond to physical activity, diet or ambient temperature. In the current work, we identified Zbtb16 as a novel gene involved in the cold response in both the BAT and skeletal muscle. Zbtb16 was among a few genes that were transcriptionally induced by several-fold in both the BAT and skeletal muscle during acute cold exposure. Previous work on Zbtb16 has shown potential involvement in cancer and development. The human ZBTB16 gene has been implicated in acute promyelocytic leukemia through chromosomal translocation and fusion to the retinoic acid receptor alpha. 39,40 Mice lacking Zbtb16 function exhibit limb defects and loss of spermatogonial stem cells. 41,42 Interestingly, Zbtb16 is induced during differentiation of 3T3-L1 preadipocytes. 36 The role of Zbtb16 as a transcription factor provides a potential mechanism by which it may influence thermogenesis, through effects on expression of other genes. Indeed, we found that induction of Zbtb16 expression in brown adipocytes enhances the expression of many genes of the thermogenic program, such as genes involved in fatty acid oxidation and mitochondrial respiration. This effect was accompanied by an increase in mitochondrial content and maximal respiration capacity, and a decrease in intracellular TAG levels, consistent with a role for Zbtb16 in fatty acid metabolism in brown adipocytes. When differentiated brown adipocytes or myotubes were activated by adrenoceptor agonists, Zbtb16 increased mitochondrial uncoupling, in agreement with the increased Ucp1 gene expression.
Another aspect of the role of Zbtb16 is its function in glucose metabolism. Zbtb16 expression drives the expression of glycolysis-related genes (Hexkin2, Pkm2), and is able to increase glycolytic capacity. Consistent with this role, Zbtb16 expression increases carbohydrate utilization in vitro. The glucose utilization appears to be directed to mitochondrial oxidation, as indicated by the decrease in Pdk4 expression and the lower Na-oxamate-sensitive ECAR levels. Altogether, these results indicate that Zbtb16 promotes oxidative metabolism.
Ultimately, the goal is to identify genes that influence energy metabolism in the BAT that translate to whole-body effects on energy balance and adiposity. To evaluate the potential effect of variations in Zbtb16 expression levels in vivo, we assessed the relationship between Zbtb16 expression levels and body mass and fat pad mass across a set of over 100 mouse strains for which gene expression and body fat traits have been determined. 35 In this panel, we observed a striking association between Zbtb16 mRNA levels in two different tissues and body fat parameters. Higher levels of Zbtb16 expression were associated with a decrease in body fat storage in four major white adipose tissue depots and overall body weight. We also detected a correlation between decreased ZBTB16 expression levels in human visceral adipose tissue and diabetic state among obese women. These results are consistent with a role for Zbtb16 in increasing mitochondrial respiration and substrate utilization. Interestingly, a congenic rat strain harboring a mutation in Zbtb16 affected total body weight and adiposity, but it was not possible to rule out other mutations in the congenic strain. 43 Although global loss of Zbtb16 function in mice 41 and humans 44,45 causes skeletal defects, more subtle variation in Zbtb16 expression levels observed among individuals may be a determinant of adiposity. In the future, cell-type-specific knockout or transgenic models could be useful in determining further the role of Zbtb16 in adipose tissue function, energy balance and obesity. | 2017-11-08T01:19:19.904Z | 2012-09-01T00:00:00.000 | {
"year": 2012,
"sha1": "8fe59cf400c0ac32b173340db8f95969bf9ce38a",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/nutd201221.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8fe59cf400c0ac32b173340db8f95969bf9ce38a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237145070 | pes2o/s2orc | v3-fos-license | Factors Affecting Pre-Hospital and In-Hospital Delays in Treatment of Ischemic Stroke; a Prospective Cohort Study
Introduction: The outcomes of acute ischemic stroke (AIS) are highly affected by time-to-treatment. The present study aimed to determine the factors affecting in-hospital and pre-hospital delays in treatment of AIS. Methods: This prospective study was carried out on 204 AIS patients referring to the stroke care unit in Zanjan (Iran) in 2019. The required data were collected by interviewing the patients and families and using patients' records and observations. Results: The maximum delay was related to onset-to-arrival time (288.19 ± 339.02 minutes). The logistic regression analysis indicated a statistically significant decline in the treatment delay via consultation after the initiation of symptoms (p < 0.001), transferring the patient through emergency medical service to the hospital (p < 0.001), and patients' perception regarding AIS symptoms (p < 0.001). Conclusion: It is essential to inform people regarding AIS symptoms and referral to AIS treatment units in order to reduce the treatment time.
Introduction
Stroke is one of the most prevalent neurological complications (1). Acute ischemic stroke (AIS) is a medical emergency that requires intensive treatment and care in the early hours, because fast diagnosis and proper interventions can lead to favorable results. Furthermore, delayed treatment can lead to considerable complications, higher mortality, and enormous costs for the person, families, and the healthcare system (2). The most effective approaches to treating AIS patients are recanalization and re-establishing blood flow to the brain tissues using invasive and non-invasive therapies (3). In these processes, the blocked vessels are reopened using recombinant tissue plasminogen activator (rTPA) and mechanical devices (angioplasty) (4,5). In 1996, the Food and Drug Administration (FDA) recommended using rTPA in AIS patients within the first 3 h of symptom onset (6). The American Stroke Association standard (2018) recommends brain imaging within 20 min of arrival and an interval of less than 60 min between hospital arrival and thrombolytic therapy for over 50% of patients qualified for rTPA (7). Research has shown that rTPA is often withheld because delayed arrival at the hospital wastes the golden window for its administration (8). The time-to-treatment delay in AIS patients may be caused by different factors, including pre-hospital and in-hospital reasons. Delays in recognizing and transferring patients are among the pre-hospital causes of their mortality. Meanwhile, delays in neurologic visits, delays in decision-making regarding the treatment procedure, and delays in brain imaging are considered among the in-hospital delay causes (9,10). Treatment delay is a function of several factors, including the patient's delay after the onset of early symptoms and delay by treatment staff (11,12). Lack of access to medical centers (13) and lack of proper management of AIS patients in the hospital are among the factors influencing the time-to-treatment delay (14,15). In this respect, Code 724 has been effective in reducing the delay in treating AIS patients (16). In Iran, the stroke code (Code 724) was announced by Iran's Ministry of Health to the medical universities in 2016 to treat stroke patients more effectively. Thus, in addition to implementing this plan, it is essential to review the status of pre-hospital and hospital delays in Stroke Care Units (SCU) in Iranian cities. In every community, it is essential to investigate the factors influencing pre-hospital and in-hospital delay, the quality of care delivered, and the individual factors affecting timely treatment. These influencing factors can vary from community to community. The present study aimed to determine the factors influencing in-hospital and pre-hospital delays in treatment of AIS.
Study design and setting
This cross-sectional descriptive study was performed in the SCU of Vali-Asr Hospital, Zanjan (northwest of Iran), from July to the end of October 2019. AIS cases or their relatives were interviewed about potential causes of delay in initiation of thrombolytic therapy using a pre-designed questionnaire (Appendix 1). The Ethics Committee of Zanjan University of Medical Sciences approved this study under the Ethics Code IR.ZUMS.REC.1398.095. The researcher described the study's aims to the patients or their families, and written consent was obtained. The participants were assured about the confidentiality of all their information and the right to leave the study at any time.
Participants
The samples were collected using convenience sampling. Therefore, the study participants included patients referring to the SCU during the sampling interval, who met the inclusion criteria. The physician confirmed the diagnosis of AIS based on clinical signs and brain CT-scan results. Willingness to participate in the study was considered the inclusion criterion. Patients diagnosed with a hemorrhagic stroke or transient ischemic attack (based on CT-scan results) were excluded from the study.
Procedure
The SCU of Vali-Asr Hospital in Zanjan was established in 2016 and is known as the stroke referral center of Zanjan Province. In Iran, Code 724 refers to stroke patients whose stroke symptoms have initiated less than 4 hours and 30 minutes before. Based on this code, as soon as the patients call the Emergency Medical Services (EMS), they are asked about the Face-Arms-Speech-Time (FAST) symptoms. Then, after the ambulance is sent to the patient's bedside, the emergency technician examines the FAST symptoms, and the need for transfer to the SCU is reported and then confirmed. Patients from the neighboring provinces are immediately transferred from all medical centers to the SCU in Zanjan. After the patient is transferred to the hospital, a neurologist examines them at the triage unit and sends them for a brain computed tomography (CT) scan; if the diagnosis of stroke is made based on the scan, rTPA is administered there.
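As a sketch of the triage rule described here, the function below flags a patient for Code 724 when the FAST screen is positive and symptom onset is within 4 hours and 30 minutes; the function and argument names are assumptions for illustration, not part of the EMS protocol software.

```python
# Illustrative Code 724 activation check based on the description above.
def activate_code_724(fast_positive: bool, minutes_since_onset: float) -> bool:
    """True if the patient should be flagged for transfer to the SCU."""
    return fast_positive and minutes_since_onset < 4 * 60 + 30

# e.g. activate_code_724(True, 150) -> True; activate_code_724(True, 300) -> False
```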
Data gathering
The variables in this study included demographic characteristics, factors affecting the time of treatment initiation in both the in-hospital and pre-hospital phases, and stroke risk factors (hypertension, hyperlipidemia, smoking, and diabetes). A questionnaire was used to collect the data and identify the information on demographic features, the factors affecting time-to-treatment, and the average time from onset of symptoms to treatment (7,11,17-20). The questionnaire included three parts. Questions about the demographic features of the patients were included in the first part. The second part contained questions regarding the causes of pre-hospital delays. The third part included questions about the reasons for in-hospital delay (Appendix 1). These data were approved by the treating physician. To assess AIS severity, we used the National Institutes of Health Stroke Scale (NIHSS) (21). This scale includes 11 items, for which a score of 0 denotes the individual's normal performance in the studied field, and a score of 4 represents maximum impairment in that field. The maximum and minimum scores on this scale are 42 and 0, respectively. In this regard, a score of 0 denotes lack of stroke symptoms, 1 to 4 is mild stroke, 5-15 is moderate stroke, 16-20 is moderate to severe stroke, and 21-42 denotes severe stroke. The content validity of the questionnaire was evaluated. The designed questionnaire was offered to 10 experts to make the essential modifications and alterations they believed to be necessary. Its reliability was assessed using inter-rater reliability: two researchers completed the questionnaire for the same 10 patients simultaneously. Then, Cohen's kappa coefficient was calculated between the data of the researcher-completed questionnaires, and inter-rater reliability was confirmed by achieving K = 0.973. The reliability and validity of the NIHSS tool had been confirmed by Kasner et al. (21). The data were collected through observation and interviews with patients and their families, if necessary. The patients referring to the SCU were chosen based on the inclusion criteria. The researcher completed the questionnaire after treatment and relative stabilization of the patient with the assistance of the patient or his/her caregivers. In this study, to reduce recall bias regarding the timing of events by the patients and their families, and to record the times and factors influencing pre-hospital delays as accurately as possible, we highlighted critical reference times such as the news time, Azan time, and notable events of the day when asking about the events.
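For clarity, the NIHSS severity bands quoted above can be expressed as a simple helper; this function is an illustrative addition, not part of the study's instruments.

```python
# Severity categories for the NIHSS total score (0-42) as described in the text.
def nihss_category(score: int) -> str:
    if not 0 <= score <= 42:
        raise ValueError("NIHSS total score must be between 0 and 42")
    if score == 0:
        return "no stroke symptoms"
    if score <= 4:
        return "mild stroke"
    if score <= 15:
        return "moderate stroke"
    if score <= 20:
        return "moderate to severe stroke"
    return "severe stroke"
```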
Data analysis
Based on a pilot study of 20 AIS patients, and considering an effect size of 0.05, a sampling error of 20 min, and a confidence level of 95%, a sample size of 181 was calculated. In this study, 204 patients with AIS referring to the SCU were assessed.
Statistical analyses were performed using SPSS V.16 software. The data were treated as normally distributed based on the central limit theorem. Data were gathered through interviews and observations. To detect the predictors of delay to treatment, a logistic regression model was fitted using the Forward-LR method. To determine the factors affecting time-to-treatment, variables including age (less than 60 years vs. over 60 years), gender, previous history of stroke, calling EMS, consultation after the onset of symptoms, and the patient's perception of early symptoms were entered into the model as independent variables, and delay in treatment was used as the dependent variable. In this study, the significance level was considered less than 0.05. There were no missing data in the present study, because the researchers collected data through interviews, observations, and patient records.
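The study fitted its model in SPSS with Forward-LR selection; the following is only a hedged re-implementation sketch in Python on a synthetic dataset, where the column names, coefficients and sample size are assumptions chosen to mirror the variables listed above.

```python
# Hedged sketch: logistic regression of treatment delay on the listed predictors,
# fitted on synthetic data (not the study data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "age_over_60":   rng.integers(0, 2, n),
    "male":          rng.integers(0, 2, n),
    "prior_stroke":  rng.integers(0, 2, n),
    "called_ems":    rng.integers(0, 2, n),
    "knew_symptoms": rng.integers(0, 2, n),
    "consulted":     rng.integers(0, 2, n),
})
# Assumed data-generating model: EMS use, symptom recognition and consulting lower the odds of delay.
logit = 0.8 - 1.2 * X["called_ems"] - 1.0 * X["knew_symptoms"] - 0.8 * X["consulted"]
delayed = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = sm.Logit(delayed, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(model.params)   # values below 1 indicate reduced odds of delayed treatment
print(odds_ratios)
```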
Baseline characteristics of participants
This study was conducted on 230 patients with stroke referring to the SCU from early July to late October 2019. The data of 16 patients with transient ischemic attack and 10 patients with hemorrhagic stroke were excluded from the study. Ultimately, the data of 204 patients with acute ischemic stroke who had referred to the SCU were assessed. The treating physician diagnosed the ischemic stroke in these patients. In total, 204 patients were included in this study, 55.9% of whom were male, 19.6% had a high school diploma, and 72.5% were illiterate. The participants' mean age was 68.99 ± 13.91 (range 28-98) years. Fifty percent of the patients lived in Zanjan. According to patients' statements, 87.7% had at least one risk factor. Hypertension (59.3%) was the most prevalent risk factor for AIS, and ischemic heart disease was the second most prevalent (30.4%). Moreover, about 77.9% of the patients were at home when the symptoms started. The severity of the stroke was moderate in 52% of the patients. In this study, 140 (68.6%) patients were referred to the SCU with Code 724; they arrived at the hospital within less than 4 hours (h) and 30 minutes (min) after the onset of symptoms. Moreover, rTPA was administered to 129 (63.2%) patients, but it was not used for 75 (36.8%) patients.
Analysis of delay to treatment
The reason for not receiving rTPA in 64 (31.4%) patients was that more than 4 hours and 30 minutes had passed from the symptoms' onset to referral to the SCU. Table 1 shows the frequency of potential pre-hospital causes of delay in treatment of AIS cases. 70.6% of the patients considered their initial symptoms to be symptoms of other diseases and did not believe they had a stroke. Furthermore, 17.2% had no consultation with anyone after the onset of the symptoms and took no action. After the onset of symptoms, about 47.5% of the patients referred to medical centers other than the SCU. It is noteworthy that they mostly (30.4%) referred to these centers because of availability or proximity. 46.1% of them referred to the SCU using personal vehicles. A neurologist performed the first visit for more than half of the patients (62.7%). The mean onset-to-arrival time and the mean onset-to-treatment time were 288.19 ± 339.02 minutes and 314.13 ± 341.04 minutes, respectively. Table 2 shows the time interval between onset of symptoms and treatment based on pre-hospital and in-hospital factors. Among the pre-hospital delay factors, the delay in deciding to contact the emergency service or making the effort to refer to medical centers (204.74 ± 321.38 minutes) was longer than the time of patient transfer to the hospital (83.52 ± 72.38 minutes). In identifying the predictors of delay in treatment, among the predictor variables included in the model, calling EMS, the patient's perception of early symptoms, and consultation after the onset of symptoms could effectively predict this delay. The odds of decreasing the delay in treatment for transportation by EMS, patient's perception of early symptoms, and consultation after the onset of symptoms were 0.12
Discussion
Our results indicated that pre-hospital delay was longer than in-hospital delay. The delays in making the effort to refer to the medical center or the decision to call the emergency service were longer than the time of patient transfer to the hospital. In a study in Hamadan (Iran), Ghiasian et al. reported that the time interval between symptom onset and arrival at the hospital was 282 min, while it was 192 min in the study of Griesser et al. (11,22). This result is consistent with the findings of our study. Nevertheless, in the study of Ayromlou et al. in Tabriz (Iran), this time was 916 min, which is not in line with our results (13). In the mentioned study, which was conducted in the metropolitan area of Tabriz, the delay in patients' arrival could be caused by traffic problems in this city. In the smaller towns around the provinces equipped with SCUs, accurate diagnosis of the stroke, the existence of neurologists, and administering thrombolytic medication can dramatically decrease the onset-to-treatment time. Koksal et al. and other studies similarly reported shorter delays for patients transported via EMS (12,17,19,23-25). In our study, also, less delay was experienced by the patients referred via EMS. Consistent with our study, the findings of studies conducted in America, Asia, and Europe indicated that a lack of awareness of stroke symptoms, patients' beliefs and misconceptions about the initial symptoms, and failure to consult another person after the onset of the symptoms resulted in longer delays in hospital arrival and time-to-treatment for stroke patients (11,12,17,19,22-27). These results indicate that consulting with others after initiation of the symptoms may help prevent a delay in cases where the symptoms are not well recognized or taken seriously. The results of our investigation on factors causing hospital delay in AIS patients revealed that there was no delay for AIS patients receiving Code 724. In this study, the time interval between hospital arrival and rTPA implementation (25.18 ± 17.01 min) and between hospital arrival and brain CT scan (10.60 ± 6.79 min) was much shorter than the times proposed by the American Stroke Association guidelines (7). In the study by Dhaliwal et al. in the US, the mean time to initial CT was longer than in the present study (14). In the study of Mowla et al. in New York, the maximum imaging delay was longer than 25 min (28). According to the findings obtained in Iran and other countries, the time interval between hospital arrival and treatment in patients with Code 724 is much longer compared to our results. This indicates that the management of the stroke code team in Zanjan city has been able to significantly shorten time-to-treatment.
Limitations
The low accuracy of recalling the times, particularly in elderly patients, was among the limitations of this study. The researchers tried to record the times and factors influencing pre-hospital delays as accurately as possible by highlighting the critical times like news time, Azan time, and events of the day. Considering the geographical and cultural position of Zanjan, the present results cannot be generalized to other communities.
Conclusion
In the present study, a longer pre-hospital delay was found compared to in-hospital delay in stroke events. Among the pre-hospital delay factors, the delay in visiting a medical center or deciding to call the EMS was longer than the time of patient transfer to the hospital. In other words, a larger portion of the delays in the pre-hospital phase is caused by the delay in patients' decision to refer to the hospital. It appears that giving information to at-risk people, particularly those over 60 years, about the stroke risk factors, the importance of rapidly initiating treatment to improve the disease outcomes, and the early stroke symptoms will help patients comprehend their symptoms properly. Hence, they will be transferred to the hospital faster by calling the emergency system.
Acknowledgments
This study is based on a research project with the code A-11-148-19. Here, the researchers thank the research participants for their cooperation and Zanjan University of Medical Sciences Vice-Chancellor for financial support.
Funding and Support
This article results from a Master of Nursing thesis funded by the research department of Zanjan University of Medical Sciences.
Author contribution
NH designed the study, carried out statistical analyses of the data, was involved in interpreting the data, and wrote the manuscript. NG, who also collected the data, was involved in the interpretation of the data. MR D was involved in the interpretation of the data. All authors read and approved the final manuscript. | 2021-08-18T05:38:06.363Z | 2021-07-24T00:00:00.000 | {
"year": 2021,
"sha1": "bc9b6e3c0cebb0ad218c9d1d5bbf1c405e8d84f5",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "bc9b6e3c0cebb0ad218c9d1d5bbf1c405e8d84f5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |