Efficient Indicators to Evaluate the Status of Software Development Effort Estimation inside the Organizations
Development effort is an undeniable part of project management which considerably influences the success of a project. Inaccurate and unreliable estimation of effort can easily lead to the failure of the project. Due to the special characteristics of software projects, accurate estimation of effort is a vital management activity that must be carefully done to avoid unforeseen results. Although numerous effort estimation methods have been proposed in this field, the accuracy of estimates is not satisfying, and attempts continue to improve the performance of estimation methods. Prior research conducted in this area has focused on numerical and quantitative approaches, and there are few research works that investigate the root problems and issues behind the inaccurate estimation of software development effort. In this paper, a framework is proposed to evaluate and investigate the situation of an organization in terms of effort estimation. The proposed framework includes various indicators which cover the critical issues in the field of software development effort estimation. Since the capabilities and shortcomings of organizations for effort estimation are not the same, the proposed indicators can lead to a systematic approach in which the strengths and weaknesses of organizations in the field of effort estimation are discovered.
INTRODUCTION
Project management is one of the most important activities performed throughout software projects. The main phases of a project, including analysis, design, implementation and deployment, are entirely dependent on the project management process. All policies, milestones and responsibilities are organized in the project management plan. It is undeniable that planning and scheduling of the project is a critical part of project management regardless of project type. In the first steps of a project, the project management team should decide on several important questions related to project planning, such as how to arrange the development team, how to distribute the responsibilities, how to determine deadlines for artifacts, how to determine the duration of the project and so on. Appropriate responses to these questions can ensure the success of a software project. On the other hand, careless answers and lack of attention to the planning aspects of the project may lead to project failure. The knowledge of the project management team regarding the project attributes has a considerable effect on dealing with the mentioned questions.
Development effort is a key attribute of a project that influences most planning and managing aspects. This attribute refers to the amount of effort required for project development and comprises all activities done within the different phases of the project. Development effort is the basis of decision making on management issues in the first steps of the project. Accurate forecasting of the amount of effort required for performing the project makes the development process smooth and convenient. This is why so many researchers have tried to increase the accuracy of software development effort prediction using various techniques.
Software projects are strongly different from other projects because the purpose of a software project is producing an intangible product [1][2]. This fact makes the production cycle complicated and difficult in software projects. Therefore, the complexity level of software project management is higher than that of other projects. Software project managers are confronted with an uncertain and unstable production process which is hard to control. Moreover, customer requirements, development technologies and tools are changing rapidly in this field. All of these make the prediction of development effort difficult in software projects. As a solution, analyzing the factors that affect development effort estimation can alleviate the problems existing in this area. Investigation of project attributes, limitations, management issues and the knowledge of developers can be useful to draw conclusions about the factors that affect the management of effort estimation in software projects.
STUDY BACKGROUND
In 1973, Interactive Productivity and Quality (IPQ) [3] was proposed by an IBM group as the first automated tool for software development effort prediction. Afterward, the Constructive COst Model (COCOMO) was invented by Barry Boehm [4]. COCOMO utilizes several effort drivers to forecast the amount of development effort and offers several equations based on the complexity level of the project. "Software Engineering Economics" [4] is a famous book in this area, and numerous researchers still employ the models proposed in it for effort prediction. Putnam Lifecycle Management (SLIM) [5] and Software Evaluation and Estimation of Resources - Software Estimating Model (SEER-SEM) (Galorath Inc., 1980) have used principles similar to COCOMO [6]. In all the mentioned models, Lines of Code (LOC) was used for designing the prediction model; that is, development effort was predicted using LOC as the size of the project.
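To make the flavor of these static models concrete, the sketch below implements the basic COCOMO effort equation, effort = a * (KLOC ** b), with Boehm's published coefficients for the three basic project classes. It is only a minimal illustration of this model family, not the full COCOMO with its effort multipliers and scale factors.

COEFFICIENTS = {
    "organic": (2.4, 1.05),        # small teams, familiar problem domains
    "semi-detached": (3.0, 1.12),  # intermediate size and constraints
    "embedded": (3.6, 1.20),       # tight hardware/operational constraints
}

def basic_cocomo_effort(kloc, mode="organic"):
    """Estimate development effort in person-months from size in KLOC."""
    a, b = COEFFICIENTS[mode]
    return a * kloc ** b

for mode in COEFFICIENTS:
    print(mode, round(basic_cocomo_effort(50.0, mode), 1))

For a 50 KLOC project this yields roughly 146, 240 and 394 person-months respectively, which illustrates how strongly the assumed project class drives the estimate.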
Function Point (FP) is an important sizing parameter proposed by Albrecht [7]. It was the first idea for measuring the size of a software project by means of a functional method. The use of FP showed that it can replace LOC in effort prediction models because the computation of FP is more reliable and accurate than that of LOC. The advantages of FP motivated researchers to invent new prediction models based on function points, such as Albrecht-Gaffney [7], Kemerer [8] and Matson, Barrett and Mellichamp [9]. The introduction of the new version of COCOMO, namely COCOMO II, in 2000 [10] is a significant event in this field. COCOMO II considers more details of the software project for effort prediction, and its prediction equations were improved by applying several scale factors.
In contrast to static methods, there are several dynamic models which rely on information about past projects. Classification And Regression Tree (CART) [11] is one of the dynamic methods in this area. It builds a regression tree according to the available information on completed projects and uses the tree to predict the effort of a new project. Analogy Based Estimation (ABE) is another dynamic method, proposed in 1997 [12]. The ABE method works by comparing the attributes of the new project with those of past projects to predict the development effort. It is still popular because it follows a simple and straightforward method of prediction, and it has been used widely in recent years [13][14][15][16]. The latest advancements in the prediction of development effort are related to the use of soft computing techniques. Neural networks [13,[17][18][19][20][21] and fuzzy techniques [14,[22][23][24] are the most important soft computing methods employed in this field.
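Since ABE is only described at a high level above, the following sketch shows one common way to realize it, assuming Euclidean distance over numeric project features and the mean effort of the k closest past projects as the estimate; actual ABE systems vary in their similarity measures and adaptation rules, and the feature values below are hypothetical.

import math

def abe_estimate(new_project, past_projects, k=3):
    """Analogy Based Estimation sketch: predict the effort of a new project
    as the mean effort of its k most similar completed projects."""
    def distance(p, q):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))
    ranked = sorted(past_projects, key=lambda proj: distance(new_project, proj[0]))
    return sum(effort for _, effort in ranked[:k]) / k

# Hypothetical history: (features = [size in KLOC, team size], effort in person-months)
history = [([10, 4], 35.0), ([12, 5], 40.0), ([30, 9], 110.0), ([28, 8], 95.0)]
print(abe_estimate([11, 4], history, k=2))  # 37.5: mean of the two closest analogues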
PRIOR SURVEY-BASED STUDIES
Several studies [25][26][27][28][29][30] have investigated the accuracy of schedule and effort estimation; their results showed that 59%-76% of projects exceeded the estimated effort and 35%-80% exceeded the estimated time. The mean value has been utilized in most previous studies to sum up the overruns in time and effort. Effort overruns were reported in the range of 18%-41%, while time overruns were reported in the range of 22%-25% [25][26][29][30]. According to the latest Chaos report of the Standish Group, 32% of software projects are successful, 24% fail and 44% are challenged.
Since project managers may take small cost and effort overruns [31] lightly, it can be helpful to determine the status of effort overruns and recognize the projects involved in significant effort and cost overruns. Moløkken-Østvold [26] used figures to explain the status of effort overruns; the results stated that a high number of projects exceeded their estimates moderately (below 21%), but only a few projects exceeded the estimated effort by more than 100%. Overall, from this research it can be said that the mean effort overrun (44%) was higher than the median (21%). Moløkken-Østvold realized that large projects were more prone to be underestimated. They also investigated whether the size of the project influenced the accuracy of estimates; however, due to the limited size of the sample, it was difficult to rely on these conclusions from a statistical point of view.
Previous surveys [26][27][32][33] have reported that most projects utilized expert judgment or analogy to estimate the effort, while only 14%-26% utilized algorithmic estimation techniques. The algorithmic techniques comprise common models such as COCOMO, Use-Case models, FP-based models and so on.
Several researchers have attempted to find the causes of the low acceptance level of algorithmic techniques. For instance, most algorithmic methods are unable to present sufficiently reliable and accurate estimates [34], many companies do not gather enough data to allow the development of algorithmic models [35], and organizations and companies are reluctant to utilize techniques that they cannot fully understand [36].
Prior studies investigated the significance of effort estimation and reached approximately the same conclusions. Lederer [28] indicated that almost 84% of developers ranked effort estimation as "very important" or "moderately important". Similarly, Moløkken-Østvold [37] indicated that 78% of the respondents rated estimation as "most important", "very important" or "extremely important".
Investigating whether organizations and companies have accepted the existing software effort estimation methods is a critical issue in this field. If they are satisfied, they will have no motivation to enhance the estimation methods; otherwise, they may pay more attention to improvement. However, the matter is not as simple as that. Lederer [28] found that, even though development effort estimation is important, developers neither especially agree nor disagree with the existing methods: the mean rank was 3.02 on a one-to-five point scale (1 = strongly disagree, 5 = strongly agree). The author indicated that, given the considerable importance of effort estimation and the existing inaccurate estimates, this response suggests that developers are satisfied with current methods and accept the inaccurate estimates.
Moores and Edward [31] indicated that 91% of the responding managers and developers answered 'yes' to the question 'do you see estimation as a problem?', while only 9% answered 'no'. If this is correct, then developers and managers have accepted this problem as a fact of project life.
As the development phases proceed, the knowledge of developers relevant to software effort estimation increases, and various estimation techniques are applied at different project stages in different organizations. It is explained in [4] that the uncertainty existing in effort estimates decreases as the project proceeds, which is called the Cone of Uncertainty [38]. In addition, Gryphon stated that the amount of uncertainty does not decrease automatically, but it can be decreased by accurate estimation techniques as the development phase progresses [39]. This matter was addressed by Lederer [40], who found that 77% of projects performed estimation during the primary stages of the project, 64% at the feasibility study phase, 51% within requirements analysis and 48% in requirements design. However, software project characteristics and processes have changed significantly since the early 1990's, when the survey by Lederer was conducted.
PROPOSED FRAMEWORK
Planning and scheduling of a project is a challenging issue for project managers because of the uncertain and ambiguous behavior of software projects. The amount of effort is a key factor that must be estimated for project planning. Since numerous parameters can affect the amount of effort in a software project, classification and prioritization of these parameters may facilitate the effort estimation process. Managers need to know the importance of each parameter to make realistic decisions throughout project planning. Each parameter is related to a part of the software project, and it influences a set of activities, artifacts and roles.
Proposing a framework requires determining the exact scope and area which must be investigated through the survey. In this research, we focus on the aspects of software projects that may affect effort estimation (based on the results obtained from prior studies). As seen in Figure 1, the knowledge of developers in terms of effort estimation, the limitations and obstacles against accurate estimation, the importance level of project attributes, as well as management issues, are the main issues that must be assessed inside organizations to clarify the situation of an organization in terms of effort estimation. In the following section, some indicators are proposed to assess the different parts of the mentioned framework.
Figure 1. The investigation framework
For the issues mentioned as the important parts of effort estimation inside the organizations, the measurement procedure must be explained to ensure the applicability of the method. The indicators are utilized to assess and investigate the related case. In order to find the most suitable indicators, several critical questions are considered. For example, how does the survey examine the knowledge of developers in the field of effort estimation? Which limitations and obstacles are considered in the survey? Which project attributes are involved in this research? And so on. The indicators are determined so that the investigation results can answer these questions. Figure 2 displays the indicators we have determined to evaluate the different parts of effort estimation.
Knowledge of Developers
Regarding the knowledge of developers, it is very important for managers to know how familiar developers are with the different aspects of effort estimation. This can be learned by investigating the knowledge of developers in terms of the process of effort estimation. In addition, the familiarity of developers with the latest effort estimation methods is a critical issue in examining their capability for effort estimation. Finally, the prior experience of developers is an undeniable factor in determining their capability in this field.
Management Issues
Regarding the management issues, it must be evaluated whether managers believe in effort estimation. If they do not believe in the estimates, they may force the team to set the effort lower than the most likely effort. Managers must be aware of the benefits of accurate effort estimation.
Attention to effort estimation throughout the management activities must be evaluated inside the organizations. Indicators such as clearly defined activities for effort estimation, staff allocated to conduct the effort estimation, milestones and plans defined for effort estimation, and continuous training of developers in the latest effort estimation tools and methods must be considered here. Creating a database of historical project effort factors and documenting the process of effort estimation are further factors in this field. Team organization and coordination are also considered as important indicators in the proposed framework. Analyzing and determining the possible factors that lead to inaccurate estimates is another indicator that must be considered by managers; such factors can be unstable demand, changes in the development process, lack of historical project information as a basis for estimation, and lack of monitoring of the effort. The last indicator in this group is monitoring: timely adjustments must be performed on the estimation targets. According to the software project's progress, the estimated effort must be adjusted to achieve the required accuracy. Effort estimates must be evaluated by an independent person. In addition, effort estimates must be accurately recorded, and the change of accuracy and improvement must be continuously controlled.
Limitations and Obstacles
As stated in the previous sections, effort prediction is a challenging and complicated process in software projects. There are several factors and reasons which make effort prediction very difficult. This group of indicators includes some of the most important factors and obstacles that lead to inaccurate estimates inside the organizations. These indicators have been divided into three main groups: product, people and tools. Frequent changes in software requirements, unclear and vague software requirements, and lack of historical project data are the obstacles related to the product group. Lack of appropriate estimation methods or estimation processes, lack of support from estimation tools, and lack of the information required by the tools are the obstacles related to the tools group. Insufficient time or manpower to carry out the effort estimate; pressure from senior managers, customers or others who directly specify or modify the estimation results; lack of participation of application developers; lack of timely supervision and control of cost according to plan; lack of analysis of software systems and the associated risks; lack of coordination among the relevant stakeholders (customers, users, system design and development); and lack of risk analysis and management of software projects are the obstacles related to the people group.
Importance of Attributes
There are several standard, defined attributes for any software project, which include organization type, development type, development technique, development style, application type, programming language, CASE tools, as well as size. These attributes need to be investigated in order to clarify how they influence project effort. In order to discover the effect of these attributes on project effort, a comprehensive analysis must be performed inside the organization. The various types of software projects and the large number of attributes make this analysis complicated and time consuming. In order to overcome the complexity of this problem, we have classified the related attributes into three main groups: development, product and technical. The selection of attributes has been performed based on the importance and worth of each attribute in terms of project effort. Prior studies and interviews are the main instruments that helped us to select the attributes.
CONCLUSION
Software is an important concept in modern business, government and military operations. Hundreds of new applications are produced and hundreds of existing applications are modified every year, whether by corporations or state governments. The huge number of software projects in today's business world means that software effort estimation is now a significant activity for any company that produces or develops software. Combined with the software development process, the software effort estimation process can help projects to provide credible and reliable plans to develop the software requirements and satisfy agreements. It can also support other project activities, particularly management issues, by presenting accurate and timely effort estimates throughout the project. The lack of analytical and survey-based studies is the problem behind the inaccurate estimation of software development effort. Numerical and quantitative estimation methods cannot overcome the non-normality of software projects because the accuracy of estimates strongly depends on the management issues which must be evaluated and improved inside the organizations. The management issues differ from one organization to another, and a unified evaluation framework can be a suitable solution to this problem. This paper proposed a framework including several indicators to evaluate the real situation of the effort estimation process in organizations. The indicators were classified into four main groups so that they cover the most important issues related to effort. The measurement procedure for the indicators in the four groups was explained separately to ensure that the framework can be implemented. This framework can help managers to learn the strengths and weaknesses of their organization regarding the process of effort estimation. On the other hand, it can be suitable for finding a unified method to evaluate and improve the status of effort estimation in different organizations. Conducting a survey using the framework proposed in this study is our future work.
"year": 2012,
"sha1": "02555c55eeb1ae6c8cb6b2be56e66f60c3d9c1b2",
"oa_license": null,
"oa_url": "https://doi.org/10.5121/ijmit.2012.4303",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "02555c55eeb1ae6c8cb6b2be56e66f60c3d9c1b2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Deriving Bell's nonlocality from nonlocality at detection
It is argued that Bell's nonlocality is a particular case of nonlocality at detection, which appears already in single-particle interference experiments. The unity of nonlocality and local causality is crucial to provide a consistent description of the world.
Introduction
In a previous paper an experiment has been proposed to demonstrate nonlocality at detection using a setup that can be used to reproduce the Michelson-Morley experiment as well [1]. Both experiments happen under exactly the same conditions and both are supposed to falsify equivalent predictions for changes in the detection rates. Thus, the demonstration of nonlocality can be considered as "loophole free" as that of relativity.
The proposed experiment also shows the importance of assuming the "experimenter's freedom" and other basic quantum mechanical principles to derive relativity from the Michelson-Morley negative result.
Another interesting point is that the alternative local theory the experiment aims to falsify predicts that energy is conserved on average but not in each individual detection event: one single photon produces two counts in a number of runs, and no count at all in other runs. If nonlocality has to respect local causality to avoid signaling, local causality without nonlocality violates energy conservation. In this sense the experiment is expected to uphold conservation of energy in each individual detection event, and thereby to prove the importance of keeping nonlocality and locality united to provide a consistent description of the world.
In summary, the experiment demonstrates that both relativity and quantum mechanics share the very same experimental basis and stresses the importance of the following principles for both theories: Axiom I: "The experimenter's freedom": Any measurement settings can be chosen such that they are uncorrelated with anything in their past half space. This also means a limit for freedom: The experimenter is not allowed to change the past at will; otherwise causal oddities would result. In other words, the axiom also includes the principle of "no-retrocausation".
Axiom II: "One photon one count": The energy is conserved in each single detection event, and not only in the average. This limits both local causality and nonlocality.
Axiom III: United nonlocality&locality: local and nonlocal steering of detection outcomes are two operating ways of the same resource.
The present paper shows how the main quantum features derive from these axioms, strengthening the conclusion drawn in [1]. The arguments are presented in the context of experiments rather than using a general formalism, in order to enhance their physical meaning. For the coming analysis it is worth stressing that "no-retrocausation" (Axiom I) directly implies the invariance of the light velocity upon the path length. Such invariance plays a key role in the interpretation of the Michelson-Morley negative result, as shown in [1], and this result entails the "no-signaling" condition. In this sense the assumption that light does not change velocity depending on how far it has to go is more basic than "no-signaling".
Deriving time uncertainty and linear-unitary quantum measurements
On the one hand, nonlocality at detection is not compatible with the classical view of a particle as something well located in space-time. Indeed such a view would entail that light travels with different velocity through path l and path s in the experiment of Figure 1, and an experimenter can decide how light behaves in his past.
On the other hand "particles" as entities traveling a well defined trajectory are an essential ingredient to make a material world we can control by means of detectable signals propagating in space-time.
Thus the Axioms I-III in Section 1 yield the idea that the detectors decide always taking account of information about all possible paths reaching them. However, depending on certain parameters, the resulting distribution is the same as if material "particles" were traveling the paths, and the interferences disappear. This idea can be expressed mathematically by means of a function of the optical path difference τ = |l−s|/c:

P(a|Φ) = (1/2)(1 + a f(Φ)),   (1)

where ω defines a parameter of the source (the emitted monochromatic light frequency), a ∈ {+1, −1} defines a value characterizing the detection outcome according to which detector clicks, Φ labels the phase parameter ωτ, and P(a|Φ) the probability of getting outcome value a for the phase Φ. The form of the right-hand side in (1) is chosen for convenience.
The assumption of "particles" means that local causality and nonlocality are bounded by the Axiom II ("one photon one count") [1]. Accordingly the probabilities fulfill: where P (1, 1|Φ) labels the probability of getting jointly one count in each of the two detectors ( Figure 1), P (0, 0|Φ) no count in any detector, P (1, 0|Φ) one count in D(+) and no count in D(−), and P (0, 1|Φ) one count in D(−) and no count in D(+). Notice that in the preceding expressions ′ 1 ′ does not refer to the detection value ′ + 1 ′ . Suppose the pump emits a "packet"of frequencies δ(ω) with bandwidth ∆ω. Then, the probability P (a|Φ) is given integrating (1) over ω within ∆ω: where K is a normalization factor. The Axioms I-III above impose that f (Φ) in (1) is a periodic oscillating function such that: and period given by: This way, over a period the different frequencies contribute destructively to the integral in (3) and one gets P (a|Φ) = 1/2, that together with (2) reproduces the classical particle behavior.
By contrast, if

∆ω τ << 2π,   (6)

then the different frequencies contribute constructively to the integral and (3) can be approximated by

P(a|Φ) ≈ (1/2)(1 + a f(Φ)),   (7)

that is, one gets interferences. Thus the probability P(a|Φ) is a periodically oscillating function fulfilling the properties (8) and (9), which characterize the sine wave and convey the expression:

P(a|Φ) = (1/2)(1 + a cos Φ).   (10)

Time uncertainty. To respect the invariance of the light velocity, the only thing that remains to do is to give up the view that the time of emission is defined with arbitrary precision, and instead introduce a time interval τc within which emission can happen, given by:

τc ≥ 2π/∆ω = 1/∆ν.   (11)

τc defines an uncertainty in the time of emission and is also called the "coherence time".
Then the condition for having interferences expressed in (6) becomes:

τc >> τ,   (12)

which is the usual condition. From (11), and since ∆E = h∆ν, one can derive straightforwardly Heisenberg's relationship between the uncertainty in the time of emission τc and the uncertainty in the energy of the emitted photon ∆E:

τc ∆E ≥ h.   (13)

This derivation shows that the uncertainty principle is not a primitive of quantum theory but results from the more fundamental conditions required to have nonlocal detection outcomes (interference) with light traveling paths of different length at equal velocity.
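The destructive averaging behind conditions (11) and (12) can be illustrated numerically. The sketch below, assuming for simplicity a flat spectral density over the bandwidth ∆ω, averages the single-frequency fringe exp(iωτ) over the band; the modulus of this average is the fringe visibility, which is close to 1 for τ << τc and collapses once τ approaches and exceeds τc = 2π/∆ω. The numerical values of ω0 and ∆ω are illustrative only.

import numpy as np

def visibility(tau, omega0, delta_omega, n=4001):
    """Fringe visibility after averaging exp(i*omega*tau) over a flat
    spectral density of bandwidth delta_omega centred on omega0."""
    omegas = np.linspace(omega0 - delta_omega / 2, omega0 + delta_omega / 2, n)
    return abs(np.mean(np.exp(1j * omegas * tau)))

omega0, delta_omega = 1.0e15, 1.0e12     # rad/s: optical carrier, narrow band
tau_c = 2 * np.pi / delta_omega          # coherence time, cf. (11)
for ratio in (0.01, 0.1, 1.0, 10.0):
    v = visibility(ratio * tau_c, omega0, delta_omega)
    print(f"tau = {ratio:5} * tau_c -> visibility = {v:.3f}")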
Linearity. Suppose now that in the experiment of Figure 1 the detectors are set to monitor directly the output ports of BS0. Then there is only one path leading to each detector, and the probabilities have to be the same as for classical "particles". This inspires the idea that each path is mathematically characterized by a complex number (an amplitude), and the probability results from squaring its absolute value. In the case of two paths the resulting amplitude is given by summing the amplitudes of the two paths. The function (1/2)(1 + a f(Φ)) in (1) results from squaring the absolute value of the sum of the two path amplitudes. This is the property of linearity characteristic of the Hilbert space algebra used to formalize quantum mechanics.
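To make the linearity rule concrete, here is a minimal two-path calculation, assuming balanced beam-splitters so that each path contributes amplitude 1/2 (with the relative phases absorbed into Φ); it recovers a probability exactly of the form (1) with f(Φ) = cos Φ:

A+(Φ) = (1/2)(1 + e^{iΦ}),  P(+1|Φ) = |A+(Φ)|² = (1/4)(2 + 2 cos Φ) = (1/2)(1 + cos Φ),

and likewise P(−1|Φ) = (1/2)(1 − cos Φ) for the other output port, so that the two probabilities sum to one for every phase, as Axiom II requires.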
Unitary measurements: I now prove that Axiom II imposes unitary transformations at the beam splitters (unitary transformations of quantum states), another crucial feature of the Hilbert space algebra. Before proving this, let us give an example. Suppose that reflection on the beam-splitter entails a phase shift of exp(iπ/4) instead of the quantum mechanical exp(iπ/2). Then from (7) and (10) one would get:

P(+1|Φ) = (1/2)(1 + cos(Φ + π/4)),  P(−1|Φ) = (1/2)(1 + cos(Φ − π/4)).   (14)

The expressions in (14) mean that energy is not conserved for each single phase but only on average over all phases, and they violate the condition (2). Let us now label L and S the amplitudes of the paths l and s reaching BS1, L* and S* the respective complex conjugates, and [a_ij]_2×2 the complex matrix characterizing the measurement at BS1. Then, from (2) and (7) it follows that:

P(+1|Φ) + P(−1|Φ) = |a11 L + a12 S|² + |a21 L + a22 S|² = 1.   (15)

Since the term LS* is a complex function and can also take the values '1' and '−i', Equation (15) imposes:

|a11|² + |a21|² = |a12|² + |a22|² = 1,  a11 a12* + a21 a22* = 0,   (16)

which means that the measurement at BS1 (Figure 1) is unitary. Conversely, Equation (16) is a sufficient condition to get nonlocality at detection and local causality respecting "one photon one count".
Hence the Axioms I-III impose measurements that are linear and unitary, that is, the distinctive properties of quantum measurements (also called POVMs).
Usually these properties appear as postulates of the Hilbert space algebra, and from them one derives nonlocality. Here we have gone the other way around, and shown how nonlocality and local causality united convey the quantum algebra.
In the preceding analysis we have calculated the contribution of the paths assuming the same frequency ω for l and s. However one could very well have a phase shift Alice's interferometer Bob's interferometer Mobile mirrors FIG. 2: Diagram of a 2-particle experiment using interferometers: The source emits photon pairs produced by down conversion. Photon A (frequency ωA) enters Alice's interferometer to the left and gets detected after leaving the beam-splitter BSA1, and photon B (frequency ωB) enters Bob's interferometer to the right and gets detected after leaving the beamsplitter BSB1. The detectors are denoted DA(+), DA(−), and DB(+), DB(−), and correspondingly we say that the detections give the values (a, b ∈ {+1, −1}). Each interferometer consists in a long arm of length li, and a short one of length si, i ∈ {A, B}. Bell experiments use N different values of lA (l0, l2, ..., l2N−2) and N values of lB (l1, l3, ..., l2N−1), with N ≥ 2. Φ is the phase parameter depending on settings lA, lB on both sides of the setup. In order to have entanglement exhibiting nonlocal correlations in Alice's and Bob's labs only the path pairs: (sA, sB) and (lA, lB) can constructively contribute to the correlated outcomes, where (sA, sB) denotes the path defined by the two short arms, and (lA, lB) that by the two long arms. This imposes conditions to the frequency bandwidths and path alignments.
between the frequencies of the paths, for instance if one uses acousto-optic modulators (AOM) as beam-splitters like in the experiments presented in Reference [4]. This possibility leads to interesting insights as well, but they are not relevant for the scope of this article, and I postpone their discussion to a forthcoming paper.
To derive the quantum features above we have invoked that light does not change velocity depending on how far it has to go. As said, this assumption is also crucial to derive relativity from the Michelson-Morley negative result [1]. Hence one can conclude that relativity and quantum uncertainty are two aspects of the same principle: the invariance of the light speed upon the path length. However, without uncertainty one cannot have interference, and without interference one cannot have Michelson-Morley: in this sense one cannot have relativity without quantum mechanics.
Deriving entanglement and the enlarged uncertainty principle
Consider now the 2-particle experiment sketched in Figure 2. The experiment uses N different values of lA (l0, l2, ..., l2N−2) and N values of lB (l1, l3, ..., l2N−1), with N ≥ 2. The conventional Bell experiments correspond to N = 2, that is, 4 measurements. N > 2 allows us to perform the so-called "chained Bell experiments" we will refer to later in Section 6.
We denote P (a, b) the probability of getting the joint outcome (a, b).
According to the result in the preceding section the measurements of Alice and Bob at each side of the setup are nonlocal, linear and unitary. We now extend this result and assume outcomes (a, b) between the detectors at both sides of the setup that are nonlocal, linear and unitary. That is, the nonlocality appearing in the joint outcomes is basically the same as that appearing in the outcomes at each side of the setup.
In the setup of Figure 2 there are four possible paths leading from the source to each possible pair of firing detectors: (sA, sB), (sA, lB), (lA, sB) and (lA, lB). Suppose the pump emits a wave of frequency ω; then the down-converted photons have frequencies fulfilling:

ω = ωA + ωB.   (17)

The phase parameter corresponding to the path pair (lA, lB) and (sA, sB) is given by:

Φ = φA + φB,   (18)

where φA = ωA τA denotes the phase of Alice's interferometer, and φB = ωB τB that of Bob's.
The phase corresponding to the path pair (lA, lB) and (lA, sB) is given by:

φB = ωB τB.   (19)

Similarly one gets the phases for the other four path pairs: (lA, sB) and (lA, lB); (sA, sB) and (sA, lB); (lA, lB) and (sA, lB); (lA, sB) and (sA, sB).
For each possible path pair the probabilities share properties similar to (8) and (9), expressed in (20) and (21). Suppose now there is a way to take account only of the contribution of the path pair (lA, lB) and (sA, sB) to the joint outcomes, ruling out that of all the other pairs. Then one has the so-called entangled state, which can be maximally or non-maximally entangled, and bears nonlocal correlations. In the case of maximal entanglement one has the characteristic probability distribution given by:

P(a, b|Φ) = (1/4)(1 + ab cos Φ).   (22)

Entanglement can easily result by imposing the following conditions on the frequency bandwidths ∆ω and ∆ω_ph:

2π/∆ω >> τA, τB >> 2π/∆ω_ph.   (23)

Introducing the coherence times (uncertainties in time of emission) τc = 2π/∆ω and τc_ph = 2π/∆ω_ph, the relations in (23) lead to the usual coherence conditions for performing Bell experiments with Franson-type interferometers [4]:

τc >> τA, τB >> τc_ph.   (24)

Nonlocality with multiparticle entanglement requires extended uncertainty relations like that in (23). The observer Bob, who has access to photon ωB, reduces his uncertainty about the time at which Alice will detect ωA, relative to an observer Charlie who only has access to information about the emission time of the laser pump in the source [5]. A similar idea is expressed in [10].
The fact that there are real processes like linear down conversion respecting the conditions (23) means that the principle of nonlocality appearing in Axiom III has to be understood as nonlocality at detection with the possibility of multiparticle nonlocal correlations. Indeed, a nonlocal world with uncertainty and linear and unitary measurements, but without multiparticle entanglement, would have been possible in principle. But apparently it has been decided otherwise on the part of nature.
In the experiment of Figure 2 "no-signaling" imposes that Alice's marginal of P (a, b) does not depend on changes of φ B at Bob's side, and Bob's marginal on changes of φ A at Alice's side: The independence of the light velocity on the path length has played an important role in what we have stated, but so far we didn't make any explicit use of the other consequence of local causality, the "no-signaling" condition (26), to deduce quantum properties. Notwithstanding, it is well known that linear and unitary quantum measurements suffice to prevent the use of quantum nonlocal correlations for signaling. This is the often referred to "miracle" that permits the peaceful coexistence between quantum mechanics and relativity. The analysis in the preceding sections shows where the miracle comes from: For the experiment of Figure 2 the Equation (16) becomes: where [b ij ] 2×2 is the complex matrix characterizing Bob's measurement at BS B1 . "No-signaling" (26) imposes the condition: Thus, the same condition (16) that bounds local causality to respect "one photon one count" and nonlocality at detection in single particle interference [1], appears now in (28) bounding nonlocality to respect "nosignaling" and local causality in multiparticle entanglement as well.
It is interesting to compare our derivation with that in [9]. This Reference assumes that the quantum algebra holds for the single-particle system, and then from this axiom and "no-signaling" it derives the quantum algebra describing the 2-particle nonlocal correlations. Additionally, the quantum measurements at each side of the setup are considered to be "local", and thus the linearity and unitarity of these "local" measurements are introduced as an axiom. By contrast, we assume that already the quantum measurements at each side involve nonlocality between detectors, and this nonlocality implies linear and unitary operators. Therefore we do not really need to add "no-signaling" separately to get "no-signaling" quantum correlations.
Bell nonlocality and the pilot wave picture
As stated in [1], the assumption that the decisions happen at the beam-splitters ("pilot wave") was first formulated by Louis de Broglie and permits one to escape nonlocality at detection, but at the price of introducing a dualism: the material "particle" propagating along one of the paths and an "empty wave" propagating along the other path. The particle is an observable and detectable thing, whereas the "empty" wave is a sort of non-material entity which is inaccessible to direct observation and can only be characterized by how the particle behaves when observed.
However, already Einstein smelled out that even this way one cannot get rid of the quantum nonlocality, and his suspicion provoked the EPR controversy.
Effectively, further development of the picture by David Bohm clearly revealed that the "pilot wave" has to be considered a nonlocal entity, and so nonlocality reappears between the beam-splitters [3]. This nonlocality violates the well known locality criteria called Bell inequalities [3]. Experiments demonstrate violation of Bell inequalities and confirm nonlocality.
It is important to note that the alternative local model which the experimental violation of Bell inequalities rules out is in fact a local version of Bohm's theory. That is, it necessarily involves local hidden variables that, like the "empty pilot wave", one cannot directly observe or detect; otherwise, the model would fail to explain single-particle interference. Ironically, the local explanation bears the concept of entities existing and propagating in space-time that are unobservable in principle. I think this idea is no less odd than that of "one photon two counts" tested in [1]. If the former deserves to be tested by experiment, so does the latter. In fact both deserve experimental falsification demonstrating how nonlocality helps to well define local causality.
Anyway, the fact that the "pilot wave" picture leads to nonlocality between the beam-splitters strengthens the "non-materiality" of the wave: it is not only unobservable in principle, it is not bound to spatial limits either. In this sense Bohm's theory shares with our Axiom III the motivation of uniting nonlocality and local causality. The difference is that in the derivation of Section 1 both the "particle" and the nonlocality are jointly generated by the same procedure, whereas in Bohm's model they are separately postulated as two essentially different entities.
Nonetheless, there is another important feature of Bohm's model: although nonlocal, it is time-ordered; it binds nonlocality to time. The characteristic way of thinking in this model is that one of the "particles" (say Alice's) arrives first at the corresponding beam-splitter. Alice's outcome happens before Bob's, and thereafter Bob's outcome takes account of Alice's to yield the quantum correlations. This way the model gives up covariance, but saves the "relativistic" feature of time-ordered causality. Implicitly the model works with a preferred frame, which in the conventional Bell experiments is identical to the laboratory frame. Nonetheless, by accepting that the choice-devices are the beam-splitters, Bohm's theory clearly highlights that these devices' frames are the relevant ones for measuring when detections happen. Models based on this assumption allow us to decide the question of whether nonlocality is time-ordered or not. We discuss this point in the coming section.
Deriving time-independent nonlocality
The Suarez-Scarani extension of quantum theory provides a criterion of time-ordered nonlocal causality that allows us to decide the question of time-independence by setting apparatuses in motion: if each of them in its inertial frame decides before the other (before-before timing), then the nonlocal correlations should disappear ([7], [6] and References therein). In the experiment of Reference [1] this would mean that in 25% of the runs one photon produces two counts, and in 25% no count, in contradiction again with Axiom II. Thus the issue can be decided by the same experiment proposed in [1] without need of putting detectors in motion.
Additionally, the assumption of frame-dependent nonlocality between detectors leads to signaling in the case of entanglement experiments [4], and thus contradicts the Michelson-Morley experiment as well. Interestingly, this result was discovered during the work to perform a before-before experiment with detectors in motion, which in fact was also done.
But astonishingly enough, the same "pilot wave" picture that helped to escape nonlocality at detection helps to escape its independence of the time-order as well. The Suarez-Scarani extension is actually nothing other than the falsifiable version of Bohm's theory: On the one hand it respects the relativity of simultaneity (in agreement with the Michelson-Morley experiment), but on the other hand it predicts probability distributions depending on the time-order, and therefore it is non-covariant [6].
The core of the Suarez-Scarani model is the operation "suppression of nonlocal correlations with maintenance of possible local ones". If the beam-splitters are the choice-devices, in order to prove this operation signaling one should prove the following conjecture wrong: Conjecture: There is no state for which the no-signaling condition imposes that at least one marginal violates Bell inequalities. [6] All quantum states I know fulfill this Conjecture [11]. Nonetheless "suppression of nonlocal correlations" is not a "quantum measurement" (POVM) and in fact it bears observable predictions conflicting with quantum mechanics. So, once again it is fortunately possible to decide by experiment. The before-before experiment demonstrates the frame-independence of the quantum correlations [4]. Thereby it can be considered a proof of the covariance of quantum mechanics: although nonlocal, the quantum measurements (POVMs) are covariant [6].
The experiment also refutes Bohm's model in its falsifiable version, the Suarez-Scarani model. So, the "pilot wave" neither escapes nonlocality nor time-independence after all. Nonetheless it is this picture which decisively inspired the work leading to Bell and before-before experiments, and contributed to unite nonlocality and local causality. This is undoubtedly of great merit.
In summary, the nonlocal correlations do not depend on the time-order, and in this sense come from outside space-time: if the detectors are the choice-devices, then covariant nonlocality follows from "one photon one count" (and also from "no-signaling"); if the beam-splitters, then it is an axiom, backed by experiment.
We can never sufficiently admire the fact that it is possible to demonstrate nonlocality and its time-independence both if one assumes decision of outcomes at detection and if one assumes it at the beam-splitters: whoever makes the world seems really eager to show us that "the spacetime does not contain the whole physical reality" (Nicolas Gisin).
Another interesting point is that it is the demonstration that quantum correlations come from outside space-time which allows us to establish freedom on the part of nature, and therefore true randomness [15]. This means that using devices to implement settings chosen at random in Bell experiments begs the question and does not contribute to closing the "loopholes".

6. Why isn't nature more non-local?

"Nonlocal (NL or PR) boxes" illustrate very well that it is possible to have a type of nonlocality that seems "stronger" than the quantum one, while always respecting the "no-signaling" condition. Thereby the NL resource suggests that "no-signaling" is not the reason for the quantum "bounded nonlocality", and it has raised considerable interest in finding what the motivation for the quantum limit (Tsirelson bound) may be (see [9] and References therein).
Before proposing an answer to this question it is useful to see what precisely "maximal nonlocality" means.
The chained Bell expression is defined as:

I(N) = P(a = b|Φ(l0, l2N−1)) + Σ_{i=0}^{2N−2} P(a ≠ b|Φ(li, li+1)),   (29)

where P(a = b|Φ(l0, l2N−1)) means the conditional probability that Alice and Bob get the same outcome if the phase's value results from the long interferometer arms set to l0, l2N−1, and P(a ≠ b|Φ(li, li+1)) the conditional probability that Alice and Bob get different outcomes if the phase's value results from the long interferometer arms set to li, li+1; depending on i, li denotes an arm of Alice's or Bob's interferometer.
For convenience we assume in (29) that any two values li, li+1, with i ∈ {0, ..., 2N−2}, define the same phase parameter, resulting from the equipartition of a value Θ:

Φ(li, li+1) = Θ/(2N−1).   (30)

Substitution of (30) into equation (29) gives:

I(N, Θ) = P(a = b|Θ) + (2N−1) P(a ≠ b|Θ/(2N−1)),   (31)

where we now use the notation I(N, Θ) to indicate that I(N) depends on the variable Θ as well.
In Equation (29), for each N , I(N ) ≥ 1 defines a Bell inequality or locality criterion. I(2) ≥ 1 represents the well known CHSH inequality for experiments with 4 measurements. Accordingly, I(N ) < 1 defines correlations that cannot be explained by means of local relativistic influences.
If one interprets decreasing I(N) as an indicator of increasing nonlocality, maximal nonlocality I(N) = 0 is reached with N = ∞ [8]. One says that a theory or resource is "bounded nonlocal" if I(N) > 0 for every finite N, and "maximally nonlocal" if I(N) = 0 for some finite N [8].
In this sense quantum theory is bounded nonlocal. By contrast, NL-boxes provide maximal nonlocality already for N = 2.
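The contrast can be checked numerically. The sketch below evaluates I(N, π) of (31), assuming the quantum distribution (22), i.e. P(a = b|Φ) = (1 + cos Φ)/2: the value stays strictly positive for every finite N and only tends to 0 as N → ∞, whereas an NL box reaches 0 already at N = 2.

import math

def chained_bell_quantum(N):
    """I(N, pi) of (31) for the quantum distribution P(a=b|Phi) = (1+cos Phi)/2,
    with Theta = pi equipartitioned over the 2N-1 adjacent setting pairs."""
    phi = math.pi / (2 * N - 1)
    p_same_end = 0.5 * (1 + math.cos(math.pi))   # P(a=b|pi) = 0
    p_diff_adjacent = 0.5 * (1 - math.cos(phi))  # P(a!=b|pi/(2N-1))
    return p_same_end + (2 * N - 1) * p_diff_adjacent

for N in (2, 4, 10, 100, 1000):
    print(N, round(chained_bell_quantum(N), 5))
# I(2, pi) = 0.75, decreasing like pi**2 / (4 * (2*N - 1)) toward 0 as N grows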
One can now prove that probabilities sharing the properties (20) and (21) necessarily imply "bounded" nonlocality. Suppose one could have I(N, π) = 0 for finite N. Taking account of (31) it holds that:

P(a = b|π) = 0 and P(a ≠ b|π/(2N−1)) = 0.   (32)

From (20) and (21) one is led to the contrary conclusion (33), and Equations (32) and (33) contradict each other. As a matter of fact, the particular quantum limit arises from probability distributions like that in (22). And we have seen that such expressions come from the linear and unitary phase-dependent probabilities (the quantum POVMs) and the conditions to have entangled states. In the derivation of these features, nonlocality at detection and the invariance of the light speed upon the path length it has to travel were of decisive importance.
Accordingly, the answer to the question in the title of this section lies at hand: nature is not more nonlocal because "no-signaling" is not the whole story. More important is to assume outcome distributions depending on phases (or similar parameters), and that the velocity of light does not depend on how far it has to go. Since NL boxes describe nonlocality only in the context of multiparticle resources and ignore nonlocality at detection in the context of single-particle interference, they overlook the real nonlocality.
Another interesting point in this respect is that the relationship between time uncertainty and bandwidth obviously makes sense only for probabilities depending on phases (or similar parameters), and this dependence implies "bounded nonlocality", as shown above. Therefore "maximal nonlocality" disposes of probabilities depending on phases, and thereby of the uncertainty principle. The same conclusion is reached in [12] with an information-theoretical argument.
Covariant extensions of quantum theory are logically inconsistent
On the one hand in Section 5 we have seen that noncovariant extensions of quantum theory can be considered wrong, either because they are signaling or have been falsified by experiment. It is obvious that such extensions are not refuted by arguments assuming covariance as an axiom [6].
On the other hand, in Sections 3-6 we have shown that the mere assumption of united local causality and nonlocality at detection with possibility of entanglement conveys the covariant quantum theory.
This means that any covariant alternative theory has to share the probabilities given in (8) and (9) for single-particle interference experiments, and those given in (20) and (21) for maximally entangled states.
If this is the way the world is, is it still possible to build an alternative to quantum theory in some other way?
In interference experiments the probability of getting a count in each of the two detectors exhibits a pattern oscillating between 1 and 0. In 2-particle experiments with maximally entangled states the joint probabilities P(a = b|Φ) (concordance) and P(a ≠ b|Φ) (discordance) oscillate in the same way, whereas the single outcome probabilities do not depend on Φ: P(a|Φ) = 1/2 and P(b|Φ) = 1/2.
According to standard quantum mechanics Alice's outcomes exhibit a uniform random distribution, and the same for Bob.
Consider now the following assumption: Alice's outcomes are distributed in different subensembles but in such a way that the value P (a|Φ) = 1/2 holds for the whole ensemble, and similarly for Bob.
This assumption characterizes covariant extensions of quantum theory, and in particular Leggett type models [6,14].
We denote D the statistical distance between the biased distribution of the Alice's outcomes predicted by the model and the uniform random distribution. In case of Leggett-type extensions tested to date D measures a dependence on a local hidden polarization [14]. However D can come from some general system, not necessarily local hidden variables [13].
I prove that the very assumption of nonlocality excludes biased random outcomes. It has been proved in [13] that the non-signaling condition implies the Colbeck-Renner inequality:

D ≤ I(N, π).   (34)

Taking account of (20) and (21), Equation (31) implies:

I(∞, π) = P(a = b|π) = 0.

Hence, I(N, π) takes all values between I(2, π) and I(∞, π) = 0, and therefore for any D > 0 it is always possible to find an N such that:

I(N, π) < D.   (37)

The expressions (34) and (37) contradict each other. In summary, any covariant extension has to match the conditions (20) and (21), and fulfill the Colbeck-Renner inequality (34). These two requirements exclude extensions with variational distance D > 0. Consequently, covariant extensions assuming such a distance can be considered falsified already on the basis of the experiments refuting local extensions and signalling, and in this sense are logically inconsistent. This holds in particular for the Leggett-type models tested in [14].

8. Can the United Nonlocality&Locality description be considered complete?
A possible reason for the little attention paid to nonlocality at detection so far may be reluctance towards the (Copenhagen) "subjective" interpretation of the "collapse" as requiring the presence of a conscious observer (Schrödinger's cat).
I would like to stress that it is possible to have a view combining the subjective and the objective interpretations of measurement: on the one hand, no human observer has to be actually present in order that a registration takes place; on the other hand, one defines the collapse or reduction in relation to the capabilities of the human observer. In fact, for measurement to happen it is not at all necessary that a human observer (conscious or not) is watching the apparatuses. However, the very definition of measurement makes reference to human consciousness: an event is "measured", i.e. irreversibly registered, only if it is possible for a human observer to become aware of it [15].
In a sense we consider the "collapse" to be something as objective as "death", which physicians define as the irreversible breakdown of all the brain functions including the brainstem ones. For someone to die it is not necessary that he be watched by some conscious observer. However, the conditions defining "death" relate to the limit of the human capabilities to reverse a process of decay.
Even if measurement is basic to quantum mechanics, for the time being the theory, in any of its interpretations, does not define consistently which conditions determine when measurement happens (certainly, medicine does not do better in defining when "death" happens). This state of affairs (the "measurement problem") clearly shows a point where the unity of relativity and quantum theory as we know it today can and must be completed. And to do so, it may be that we have to understand better how consciousness and free will happen in the brain.

9. Conclusion

75 years after the EPR paper, the ongoing work on nonlocality is helping us to better understand the relationship between quantum theory and relativity. Initial misunderstandings and controversies hid a deep unity which is now appearing.
Relativity and quantum theory share the very same experimental basis, and derive from the same principles. They are two inseparable aspects of one and the same description of physical reality. Both seem to respond to the motivation of making a world characterized by the unity of local and nonlocal steering of detection outcomes. If nonlocality without locality bears the oddity of signaling, locality without nonlocality violates the conservation law of energy and bears the strange concept of "inaccessible local hidden variables".
The unity of nonlocality and local causality provides physics with a more consistent basis and makes it capable of tackling what is likely the greatest challenge in the history of science: understanding the brain.
"year": 2010,
"sha1": "88c7f0a28632501fce654d7273c774c31f627054",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "88c7f0a28632501fce654d7273c774c31f627054",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Choral Performance and Geometric Patterns in Epic Poetry and Iconographic Representations
The aim of this paper is to consider the relationship between some descriptions of choral performances in Greek archaic epic and the pictorial patterns and functions of artistic artifacts, mainly pottery, of the Geometric period.1 The relationship between Homeric and Hesiodic epic and Mycenaean or Geometric art has been a frequent subject of research.2 As a general methodological point, I wish to state from the beginning that I am not concerned here with the search for influences one way or another, trying, for example, to individuate specific mythic representations from the epic poems on some vases, as illustrations of certain passages from the poems which the artist might have had in mind. I view text and image as parallel means of expression, in this case of a cultural pattern fundamental for Greek culture, especially in the Archaic and Classical periods, that I would call 'chorality'. By this term I mean a symbolic construction which found various expressions in actual performances with different combinations of dance, music and song, eventually crystallizing into specific genres or subgenres of choral lyric. Beyond choral performance, however, chorality also functions as a cultural paradigm which informed different fields of the community's experience, such as agonistic or juridical procedures or, in the present case, other artistic discourses such as epic poetry or pictorial art. As Barbara Kowalzig puts it in her recent book on performances of myth and ritual, 'the chorus (…) supplies the fundamental communal aspect of religious ritual, and perhaps of many other aspects of Greek religion and history (…) without the chorus, neither community nor communal re-enactment could exist'.3 Obviously, the chorus by definition implies a communal aspect, in that it involves a plurality of people taking part in it, but also in that it requires a community of spectators watching it. The audience is a fundamental part of the performance; accordingly, it appears in all the epic descriptions of choral song and dance, and each time it is the visual dimension of the watching which is emphasized. But what is the nature and the object of this looking which establishes the essential bond between the chorus and its audience, the channel through which the performative action of song and dance operates on the onlookers? And how can this be compared to the iconographic record that has come down to us?
1 For a complete list of descriptions of choral performance in epic poetry see Richardson (2011); for a study of the iconographic representations of choruses in early Greek art see Buboltz (2002). The argument presented here relies partially on the assumption that the epic passages I am analysing were functional in the Geometric and Orientalizing periods (8th-7th c. bc), probably, but not necessarily, in the context of the poems more or less in the form in which they have come down to us. In any case, the correspondences here observed between the epic texts and the iconographic motifs and patterns from the Geometric period suggest a relevant connection, which even a low chronology for the Homeric poems would not rule out, given the traditional nature of oral poetry. 2 Cf. Snodgrass (1998) among others.
Let us consider in the first place the description of the shield of Achilles in the Iliad.4 As for its shape and layout, we may safely assume it is a round shield, with concentric circles made up of iconographic bands. There are three choral performances represented on it, all of which combine song and dance: a wedding procession, a vintage song, and, in the final, most detailed scene, a mixed chorus of boys and girls performing before the whole community. The first two scenes present processional choruses, and their descriptions are embedded in the overall description of their respective occasions, the wedding and the vintage. The last chorus, however, constitutes a scene in itself, surely occupying an independent band of the shield, the last one before the outermost ring (ἄντυγα) with the representation of Okeanos, probably in the form of a snake. This layout is suggested by the introduction of the word marker ἐν δὲ … ποίκιλλε (590, 'therein he inlaid …') at the beginning of the scene, with ἐν δ' ἐτίθει (607, 'therein he set …') introducing the next, and final, ring. These markers are repeated six times (with some further internal repetitions in the third and fourth rings) in the course of the description and they delimit the iconographical bands that make up the shield:
1. ἐν μὲν … ἔτευξε ('he fashioned …'): earth-heaven-sea / sun-moon / constellations: Pleiades-Hyades-Orion-Bear.
2. ἐν δὲ … ποίησε ('he created …'): City at Peace [chorus a: wedding-song; judicial scene] / City at War.
We find in these markers different verbs which allude to the making of the shield and the scenes placed upon it by its maker: τεύχω, ποιέω, τίθημι. But ποικίλλω, used only for the choral scene, introduces an additional meaning: the concept of ποίκιλμα/ποικιλία. In other epic passages, weapons, chariots, jewels, and textiles are qualified as ποικίλα.5 Archaeology has indeed provided outstanding examples of weapons and chariots with figurative decoration (e.g., the magnificent series of votive shields at Olympia). As for the relevance of textiles and jewels in this context, it is interesting to observe that in the Homeric poems, while they share with weapons and chariots the epithet ποικίλος, it is only in describing those objects that explicit mention is made of the representations (ποικίλματα) wrought or woven on them.6 Furthermore, by using the imperfect tense ποίκιλλε, which suggests that the representations on the shield are being wrought by Hephaestus before our very eyes, the poet calls attention to the process of creation rather than the finished product, and in the poems this happens mainly with weaving (e.g. Helen weaving a tapestry with the deeds of heroes in 3.125-128).7 Thus, the use of ποίκιλλε would have evoked in the audience textiles and weaving as much as weapons and forging, and significantly both kinds of products are found in the description of the chorus that is being introduced by that verb, namely, in the shining daggers and the beautiful robes of the dancing boys and girls (595-598). The shield's ποικιλία is also that of the chorus represented in this particular section, manifest in the beauty of the objects displayed by the dancers as well as in their ability to trace figures on the ground with their movements. At the same time, the term ποικίλλω carries also a connotation of attraction, seduction, even deceit, equally relevant to the presentation here of the chorus as a variegated creation designed to make a visual impact on those watching it. In this sense, the chorus, too, is conceptualized as an ἄγαλμα; it is itself a precious artefact, just like the shield on which it is represented, the beautiful robes of the dancers, and the garlands and daggers they carry. The chorus is an ἁρμονία, also in the physical sense of an ensemble made up of several pieces which are artfully assembled through the interlocking hands of the dancers (594: ὀρχεῦντ' ἀλλήλων ἐπὶ καρπῷ χεῖρας ἔχοντες, 'dancing, and holding hands at the wrist').8 This constructed character of the chorus is brought out from the very beginning of the passage through a simile (591-594). This simile is typically choral rather than epic, as it provides not a parallel scene out of everyday life, but a mythical paradigm, by comparing this figured χορός to that built for Ariadne by Daedalus. It is normally observed, and rightly so, that the allusion here is primarily to the χορός as a space, the dancing-floor, an architectural space where the dance takes place.9 However, the passage is better understood in all its richness if we do not separate the two senses of the word χορός, as it is precisely through the choral performance that the space is defined as a χορός. This is paralleled, on the larger scale, in the fact that the description of the shield is not the description of the final product, but rather of the process of its making in Hephaestus' forge.
5 Il. 3.327 (weapons), 4.226, 5.239 (chariots), 6.289 (peploi), etc. 6 Cf. the peplos offered to Athena by Hecuba: αὐτὴ δ' ἐς θάλαμον κατεβήσετο κηώεντα, / ἔνθ' ἔσάν οἱ πέπλοι παμποίκιλα ἔργα γυναικῶν (…) τῶν ἕν' ἀειραμένη Ἑκάβη φέρε δῶρον Ἀθήνῃ, / ὃς κάλλιστος ἔην ποικίλμασιν ἠδὲ μέγιστος, / ἀστὴρ δ' ὣς ἀπέλαμπεν· ἔκειτο δὲ νείατος ἄλλων (Il. 6.288-295) 'She descended into the fragrant store-chamber. There lay the elaborately wrought robes, the work of Sidonian women (…) Hekabe lifted out one and took it as gift to Athene, that which was the loveliest in design and the largest, and shone like a star. It lay beneath the others.' (tr. Lattimore). For the correspondences between the description of the robes of the dancers on the shield and similar passages in the Homeric poems, cf. Taplin (1980) 9-11. Apart from textiles, figurative representations are also mentioned and described on another piece of feminine attire, Aphrodite's girdle (ἱμάς), given to Hera to seduce Zeus: Ἦ, καὶ ἀπὸ στήθεσσιν ἐλύσατο κεστὸν ἱμάντα / ποικίλον, ἔνθα δέ οἱ θελκτήρια πάντα τέτυκτο (Il. 14.214-215) 'She spoke, and from her breasts unbound the elaborate, pattern-pierced zone, and on it are figured all beguilements …' 7 Cf. Il. 22.441. The parallel with Helen's weaving evokes a supplementary meaning: the analogy to the composition of the poem, which is also relevant in the case of the shield. Cf. below, n. 12.
But this comparison contains another ambiguity, also intrinsic to the choral performance, as it refers at the same time to the making of the shield and to the dance itself, for both of which the Cretan χορός functions as a mythical model. The first aspect, the chorus as a δαίδαλον, a technical, constructed reality, is brought out by the first part of the comparison, Δαίδαλος ἤσκησε ('Daedalus fashioned'), which recalls in a personified form the terms δαιδάλλω and δαίδαλα πολλά, used at the very beginning of the ekphrasis of the shield to describe, respectively, the fabrication of the object and the representations wrought on its surface by Hephaestus (479-482: ποίει δὲ πρώτιστα σάκος μέγα τε στιβαρόν τε / πάντοσε δαιδάλλων (…) αὐτὰρ ἐν αὐτῷ / ποίει δαίδαλα πολλὰ ἰδυίῃσι πραπίδεσσιν, 'First of all he forged a shield that was huge and heavy, / elaborating it about, (…) and upon it / he elaborated many things in his skill and craftsmanship').10 But the second part of the comparison, through the allusion to Ariadne as the maiden who is to dance in Daedalus' chorus, refers to the χορός as performance, and to its function as the mythical archetype re-enacted by the boys and girls represented on the shield, who of course are themselves the paradigm for any choral performance in the city (note the chiastic correspondence in the relative position of name and epithet between καλλιπλοκάμῳ Ἀριάδνῃ, 'Ariadne of the lovely tresses', and παρθένοι ἀλφεσίβοιαι, 'young girls, sought for their beauty with gifts of oxen', in the same metrical place at the end of successive lines, 592-593). So, on the one hand, the construction of the χορός of Ariadne by Daedalus is the model imitated by Hephaestus fabricating his shield, which in turn constitutes the model for the Homeric singer composing his poem.11 On the other hand, the beautiful Ariadne dancing in her χορός is herself the model for the boys and girls dancing in the chorus on the shield of Achilles, which, given its status as an ideal representation of the polis at peace, itself constitutes the paradigm for every choral performance in the real world. In this way, this pattern defines, in a kind of mise en abîme, a multiple mimetic relationship, both inwards and outwards from the chorus.
Thus, in the first of the two comparisons used to describe the chorus on the shield of Achilles, the mimetic nature of the choral performance comes out (note ἴκελον). The chorus is a mimetic reality in the active, performative sense that it endlessly bridges the divide between the inside world of the representation and the outside world of the public watching it, projecting the one onto the other and making them interchangeable in a mirror-like way. We may find an example of this pattern being explicitly worked out in a true choral lyric text, at the end of Pindar's Pythian 9.12 There, the chorus that greets Telesicrates as he returns home victorious from Delphi re-enacts the victory of his homonymous ancestor in a wedding contest set up by a Libyan king, who in turn was imitating the competition designed by Danaos to marry his daughters (who are explicitly referred to as a χορός), bringing out the mimetic relationship between the Panhellenic mythical event, the local ancestral past and the present choral performance.13 This makes the chorus not only a synchronic chain of performers but also a diachronic chain of successive re-enactments imitating one another, a symbolic model that, following this mimetic logic, projects into and fashions the future chain of reperformances. In the same way as the heroic myth (Danaos) and its imitation by Telesicrates' ancestor have functioned as paradigms for Telesicrates' victory, the description of this victory and the allusions to the occasion of the first performance of the poem will function as symbolic paradigms for future reperformances and their eventual occasions (which may not necessarily be epinician, just as Telesicrates' Pythian victory is not a marriage).
10 On the values attached to the δαίδαλα in Greek thought cf. Frontisi-Ducroux (2000²), Morris (1995). 11 The multiple parallels and correspondences between the shield and the whole Iliad are brought out in different ways by Taplin (1980) and Nagy (2003). 12 On Pythian 9 see Carey (1981) 65-103, Carson (1982). For a choral reading of the Danaides' myth in this ode cf. Myers (2007).
A further example of this mimetic mise en abîme of the choral performance is provided in the Homeric Hymn to Apollo by the description of the dances of the Ionians gathered at Delos.14 By setting up the ἀγών, which includes song and dance, the Ionians give pleasure (σε τέρπουσι) to Apollo, but as they gather to watch these performances, which of course include hymns to Apollo and all the gods (where the gods may be shown taking part in the dance themselves, as they are later in this particular hymn, ll. 189-206), the Ionians are in turn being watched, as if they had become gods themselves, by anyone (in primis the public of the Homeric Hymn, through the voice of the blind man from Chios) who looks at them (εἰσορόων), sees their beauty (ἴδοιτο χάριν) and, like Apollo himself, takes pleasure in this contemplation (τέρψαιτο θυμόν). Thus, the choral gaze acts both ways, by giving and taking pleasure (τέρψις) through the creation and contemplation of beauty (χάρις); and this action works not only in a dual, reciprocal relationship, but in a chain of successive links that is also a chain of successive performances, as implied by the different levels involved: the gods, the Deliades, the Ionians and, through the appropriation of the choral model by the epic singer which is at work here (as in the Iliadic passage), the successive audiences of the present poem.15
But let us return to the chorus on the shield of Achilles. If we now proceed from the mythical model (Daedalus' and Ariadne's chorus) to the description of the performance itself and focus our attention on its perception by the spectator, which is precisely the position in which the describing voice of the poet puts himself and his audience, we can observe two relevant aspects: (i) as we have just seen in the Homeric hymn, the beauty of the chorus impresses itself upon the community watching it through the desire it awakes in them (ἱμερόεντα χορόν), a force of attraction which in turn provokes a general state of pleasure (τερπόμενοι); and (ii) this beauty is perceived on two levels: on the one hand, through the contemplation of the physical beauty of the dancers, of their robes, and of the objects they carry (garlands and daggers); on the other hand, through the contemplation of what we might call the syntax of the chorus, which manifests itself in two ways: (a) in the interlocking hands forming a chain (594: ὀρχεῦντ' ἀλλήλων ἐπὶ καρπῷ χεῖρας ἔχοντες), which elsewhere can be described as a σειρά or a ὅρμος, and in some cases even doubled by an actual rope; and (b) in the patterns their movements trace on the ground. These are basically geometric, abstract patterns, namely: (i) the circle, compared here to a potter's wheel (and κύκλος will become almost a by-word for chorus, particularly, though perhaps not exclusively, dithyrambic choruses); (ii) the straight lines (στίχες), which through their intersection form a grid or a web (602: θρέξασκον ἐπὶ στίχας ἀλλήλοισι, 'would they run in rows toward each other'); or (iii) the sinuous or rotating lines traced by the tumblers evolving in the middle of the space, whether circular or quadrangular, which has just been defined by the chorus in the previous lines (605-606: κυβιστητῆρε … ἐδίνευον κατὰ μέσσους, 'two acrobats … revolving among them').
14 'A man might think they were the unaging immortals if he came along then when the Ionians are all together: he would take in the beauty of the whole scene, and be delighted at the spectacle of the men and the fair-girt women, the swift ships and the people's piles of belongings' (tr. M.L. West). 15 It may be noted, of course, that, this being an epic poem, these performances can only be choral in a metaphorical sense, unlike in the previous example from Pindar. Chorality functions here as a symbolic paradigm, transferring the articulating and self-repeating dynamics of choral mimesis to other media, such as epic poetry, and thereby conferring on them the authority and efficiency of actual ritual performance, cf. Carruesco (2010). This observation, in turn, could help us understand the symbolic mechanism at work in the reperformance of choral lyric in non-choral (e.g. sympotic) contexts.
To these two levels of the visual projection of the choral performance upon the spectators, namely the beauty of the dancers and the patterns they define with their movements, we must still add a further one, which is also conceptualized as a visual aspect of the performance: the song itself, and especially the images, symbolic or narrative, which it evokes in the public. The song is here alluded to by the term μολπή, which is introduced by the tumblers (μολπῆς ἐξάρχοντες), and which, depending on whether or not we accept as genuine (in whatever sense of the word) the problematic lines 604-605, is to be assigned either to the ἀοιδός, singing the μολπή and playing the φόρμιγξ, or else, implicitly, to the chorus itself, unless, as Revermann proposes, we postulate a lacuna here, a musical instrument being perhaps necessary in this context.16 As for the images the choral song and performance can bring before the eyes of the public, they can be found in the other, non-choral scenes of the shield, at two levels: on the one hand, in the description of the very occasions for the choral performance in the life of the community (the wedding, the war and its outcome, be it the triumph song or the mourning for the dead, and the main events of the agricultural year); on the other hand, in some symbolic motifs very frequent in choral song as self-referential images, particularly the astral and the animal imagery. A cursory glance at Alcman's first partheneion (pmgf 1) is enough to provide us with representative examples of both, with the choregoi being compared to racing horses and the Pleiades presented as a rival chorus.17 Thus, the two aspects of the visual perception of the choral performance we have defined, the beauty of the dancers and the movements of the chorus, can also function at a second level, that of metaphoric imagery. On the shield, this is brought out in a whole series of textual parallels that mimetically link the three choral scenes to other passages of the poem, particularly those within the shield itself, which can be summarized by the diagram above (table 1).
16 Revermann (1998). But the absence of an ἀοιδός in this chorus would perhaps not be out of place in a text in which the epic singer is appropriating the choral model, if we take into account other such cases, such as the proem to the Theogony, in which the chorus of the Muses, without a solo singer, confers on Hesiod and on all epic ἀοιδοί their voice; or the meeting of the girls at Delos and the blind man from Chios in the Homeric Hymn to Apollo. 17 The passage concerning the Pleiades (ll. 60-63) has been the subject of much debate: the Pleiades can be a rival chorus of girls (with perhaps the implication of the speaking chorus presenting themselves as the Hyades), or the constellation itself (Priestley [2007] 190-193); but the name could also simply mean 'the doves', and allude to the two choregoi (Puelma [1995]). In any case, the astral reference here is assured by the comparison to the σήριον ἄστρον, whether it refers to this chorus of Pleiades/doves or to the robe (φᾶρος) which the girls are offering to the goddess, as Priestley has recently argued (I am grateful to the anonymous referee for this reference). As for the Pleiades, I take the name to refer both to the rival chorus and the constellation (similarly Segal [1998]). Cf. Call. fr. 693 Pf., where the Pleiades are presented as the first to have set up a chorus of parthenoi, thus embodying the mythic paradigm for any chorus of young girls: πρῶτον δ' αὗται χορείαν καὶ παννυχίδα συνεστήσαντο παρθενεύουσαι. For the importance of astral references in Alcman's work cf. Ferrari (2008).
On the shield, the astral band (483-489) immediately precedes the ὑμέναιος, where astral images are a regular feature (Hesperos, the stars, the moon, the sun), while the animal band, which is composed of two scenes, one narrative (573-586), the other emblematic (587-589), is framed, significantly, by the second and third choral scenes, both of which contain terms that allude to the link between dancers and animals (572: σκαίροντες ἕποντο, a verb usually applied to animals; 593: παρθένοι ἀλφεσίβοιαι, an adjective stressing the equivalence of the maids and the cattle).
These links between the choral and the non-choral scenes on the shield apply not only to the beauty of the dancers, but also to the second visual level mentioned above, that is, the geometric patterns defined by their movements. Since this is at the end of the ekphrasis, the audience of the poem could not fail to notice in the description of these patterns some verbal echoes from previous scenes, which are also relevant here. The verb δινέω has already appeared twice: in the first choral scene, in which it is applied to the dancing boys in the wedding procession (494: κοῦροι δ' ὀρχηστῆρες ἐδίνεον, 'the young men followed the circles of the dance'); and later on, in the ploughing scene, in which it describes the change of direction of the team when the ploughman arrives at the end of one furrow and turns to go down the next (ζεύγεα δινεύοντες).18 The pattern described here is the meander, which, when applied to writing, is known precisely as βουστροφηδόν, and indeed δινεύω is in this passage associated with στρέφω, used twice in quick succession, in the aorist participle στρέψαντες (544) and in the frequentative form στρέψασκον (546). Στρέφω, in its turn, had appeared in the first, astral scene of the ekphrasis (488: στρέφεται),19 applied to the movement of the Bear, as a maiden turning round always in the same place, out of fear of an assault by Orion, an erotic scenario that is a characteristic motif in chorus descriptions and representations, namely, the abduction from the chorus.20 As for her endlessly circular movement, the mention of her alternative name, the Chariot, calls forth the image of the wheel, which in turn anticipates the potter's wheel to which the circular dance of the last scene will be compared. Furthermore, this circular motif echoes the previous image of the stars forming a heavenly garland (485: ἐστεφάνωται), at the centre of which we must picture this revolving maiden, who never gets to bathe in the Ocean (another choral motif: the girls bathing in the spring or the river before going to the χορός, cf. Hes. Th. 5-6). The garlands will also reappear in the final choral scene, worn or carried by the dancing girls, just as the mention of Okeanos in this innermost band anticipates his final appearance as the outermost ring of the shield. The pertinence of this pattern of concentric circles (one of the most common motifs in the contemporary iconographic repertoire) in relation to the perception of the choral performance is confirmed by the description of a similar scene in the longer Homeric Hymn to Aphrodite (119-120: πολλαὶ δὲ νύμφαι καὶ παρθένοι ἀλφεσίβοιαι / παίζομεν, ἀμφὶ δ' ὅμιλος ἀπείριτος ἐστεφάνωτο, 'There were many of us dancing, brides and marriageable girls, and a vast crowd ringed us about'). Here, the outer circle of spectators watching the chorus becomes itself a garland (ἐστεφάνωτο) through this contemplation, a clear example of the articulating power of choral performance to project the images it creates upon the surrounding space, physical and social space alike, and to fashion it into the articulate order those images evoke. In this sense we may interpret the placing of the first choral scene after the description of the heavenly bodies as projecting this image of cosmic order upon the following scene, the judicial scene in the agora, to which it is inextricably linked through the opposition κοῦροι/γυναίκες-ἄνδρες (493, 495, 497). The whole population of the city is thus distributed into two halves, with the women with their children and the young people taking part in the wedding scene as audience and actors, respectively, while the male adults (including the elders, acting as judges) participate, again both as witnesses and protagonists, in the judicial procedure, so that both episodes are presented as two symmetrical parts of a whole representation of the δῆμος or λαός forming the city.21 In the judicial scene, we find again the spatial order of the inner sacred circle where the elders sit, surrounded by the outer circle of the community watching and taking part in the proceedings. The polarity of the circular disposition here, with two semicircles of supporters for each of the contenders, reflects the tension (νεῖκος) of the occasion, but at the same time the two talents, which are to be the prize for the victor, foretell, at the outcome of the agonistic procedure, the resolution of this polarization into a renewed unity for the community. But this whole process has already been prefigured in the choral celebration of the wedding scene, which overcomes the sexual tension previously alluded to in the Bear's fear of abduction and rape by Orion, following the pattern rape-marriage so common in Greek myth. This constitutes also a well-known ritual pattern inherent to the initiatory nature of many choruses in Archaic Greece: the symbolic representation in song and dance of tension, rivalry, and conflict, to be finally solved or averted through choral performance itself (e.g. through the choral sequence that accompanied the maturation of young girls into wives and mothers, i.e. partheneion-hymenaios/epithalamios).
18 Il. 18.542-547: πολλοὶ δ' ἀροτῆρες ἐν αὐτῇ / ζεύγεα δινεύοντες ἐλάστρεον ἔνθα καὶ ἔνθα. / οἳ δ' ὁπότε στρέψαντες ἱκοίατο τέλσον ἀρούρης, / τοῖσι δ' ἔπειτ' ἐν χερσὶ δέπας μελιηδέος οἴνου / δόσκεν ἀνὴρ ἐπιών· τοὶ δὲ στρέψασκον ἀν' ὄγμους, / ἱέμενοι νειοῖο βαθείης τέλσον ἱκέσθαι 'with many ploughmen upon it / who wheeled their teams at the turn and drove them in either direction. / And as these making their turn would reach the end-strip of the field, / a man would come up to them at this point and hand them a flagon / of honey-sweet wine, and they would turn again to the furrows / in their haste to come again to the end-strip of the deep field'. 19 Il. 18.485-489: ἐν δὲ τὰ τείρεα πάντα, τά τ' οὐρανὸς ἐστεφάνωται, / Πληϊάδας θ' Ὑάδας τε τό τε σθένος Ὠρίωνος / Ἄρκτόν θ', ἣν καὶ Ἄμαξαν ἐπίκλησιν καλέουσιν, / ἥ τ' αὐτοῦ στρέφεται καί τ' Ὠρίωνα δοκεύει, / οἴη δ' ἄμμορός ἐστι λοετρῶν Ὠκεανοῖο 'and on it all the constellations that festoon the heavens, / the Pleiades and the Hyades and the strength of Orion / and the Bear, whom men give also the name of the Wagon, / who turns about in a fixed place and looks at Orion / and she alone is never plunged in the wash of the Ocean'. 20 Cf. Il. 16.179-186; HHAphr. 117-121. The motif has been analyzed most recently by S. Langdon, in her study of Geometric iconography: Langdon (2008) 197-233.
Thus, the disposition of the first episodes of the description of the shield suggests that a choral ritual pattern projects onto and fashions a political procedure, perhaps reflecting or creating a historical, extra-textual reality, if we consider this text to be contemporary with the complex processes that, for the sake of simplification, are usually named 'the rise of the polis'.22 Let us now leave the circular pattern of the dance and consider the straight line. In the first two choral scenes this constitutes the very structure of the performance, as they are processional dances defining important spatial axes, the first inside the city, linking the οἶκοι brought together by the wedding (ἠγίνεον ἀνὰ ἄστυ, 'they were leading the brides along the city', opposed to the static scene in the agora that follows: λαοὶ δ' εἰν ἀγορῇ ἔσαν ἀθρόοι, 'the people were assembled in the market place'); the second from the extraurban space of the vineyard (itself defined by a fence, ἕρκος) back to the city. Elsewhere, we find two paradigmatic examples of these processional dances and of the foundational power attributed to their movement: (i) at the beginning of the Theogony, the chorus of the Muses marching (στεῖχον, Th. 10) from Mount Helicon to Olympus while they sing the divine order of the cosmos, which is the poem itself to which this choral scene is the prelude; and (ii), at the end of the Homeric Hymn to Apollo, the processional paean of the Cretans turned Delphians as they follow the god from the shore at Chrysa to the site of Delphi on Mount Parnassos.23 In the description of the shield of Achilles, the στίχες traced by the final chorus recall the previous scene, in which the shepherds march alongside the cattle while the dogs follow them, an appropriate image for the processional chorus, led by the choregoi, as it has been described in the previous scene, the vintage song (579 = 572: ἕποντο).24 On the other hand, in the final chorus, being as it is a static dance, the στίχες wind around or intersect each other, defining a space (probably quadrangular, as opposed to the previous wheel) which we can picture as a grid, a web, or a maze. This last form would be especially appropriate for a dance that has as its model Ariadne's chorus in Crete, and we can perhaps suppose here, as the scholiasts point out, an implicit allusion to the labyrinth, as it was traced at Delos by the dancers of the γέρανος.
22 Cf. Nagy (2003). 23 This movement, with which the hymn closes, completes the articulation of geographical space centred on Delphi that began with the god leaving Olympus. Thus, the maritime axis Pylos-Chrysa-Delphi corresponds to the inland axis Olympus-Telphoussa-Delphi previously traced by the god alone. It is interesting to note that the specifically choral part of all these movements is the last and final one, from Chrysa to Delphi, which ritually marks the actual foundation of the sanctuary. For spatial articulation through choral movements in the Homeric Hymn to Apollo cf. Reig and Carruesco (2012). An iconographic counterpart to this choral scene is the Delian chorus led by Theseus from the ship to the Horn Altar at the centre of the sanctuary, which occupies the uppermost band of one side of the François Vase. 24 Il. 18.578-579: χρύσειοι δὲ νομῆες ἅμ' ἐστιχόωντο βόεσσι / τέσσαρες, ἐννέα δέ σφι κύνες πόδας ἀργοὶ ἕποντο 'the herdsmen were of gold who went along with the cattle, / four of them, and nine dogs shifting their feet followed them'.
As for the web image, we can mention the dance of the Phaeacian boys accompanying Demodocos' song of the adulterous union of Ares and Aphrodite, caught under the view of the rest of the gods in a web fabricated by Hephaestus. I would argue that the swift, sparkling movements of the feet of the dancers which are the object of Odysseus' admiring gaze (Od. 8.265: μαρμαρυγὰς θηεῖτο ποδῶν, θαύμαζε δὲ θυμῷ, '[Odysseus] gazed at the flashing of their feet and marvelled in spirit') are to be related to the image of the web imprisoning the two divine lovers in Demodocos' song.25 The description of the web makes it clear that it defines a circular space (Od. 8.278: ἀμφὶ δ' ἄρ' ἑρμῖσιν χέε δέσματα κύκλῳ ἁπάντῃ, 'and threw the netting right round the bedposts'), like that in which the performance takes place, the Phaeacians' agora, described as a χορός for the occasion (Od. 8.260, 264; for the ideal circular form of the agora, cf. Il. 18.504). Bearing in mind the description of the final chorus on the shield of Achilles, we may perhaps visualize the pattern of this dance as the winding or intersecting στίχες evoking the unbreakable bonds of the web (ἄφυκτοι δεσμοί), which, like the shield of Achilles, is in and of itself a δαιδαλέος artifact, made by Hephaestus himself. Being at the same time a circle, as we have seen, this web can provide us with a clue to the understanding of the alternating forms, κύκλος and στίχες, of the dance in the last choral scene on the shield.
On the other hand, the web's bonds (δέσματα), invisible even to the gods, constitute an abstract underlying pattern to the sexual bond (μῖξις, φιλότης, δεσμός) of Ares and Aphrodite, paradigmatic κοῦρος and κόρη in their beauty, contemplated by the rest of the gods summoned there by Hephaestus himself, an aspect that is strongly emphasized by the abundance of visual terms present in this passage. Since this whole scene is a performance in the Phaeacians' agora, with a dance of youths accompanying the song, we can observe a mirroring motif in this disposition, with the Phaeacians and their guest Odysseus looking with admiration at the dazzling movements of the boys' feet just as the gods are watching with pleasure beautiful Aphrodite caught in the web with Ares, and, by implication, just as we admire the whole scene as narrated by the singer of the Odyssey. We have here, then, another example of the pattern of mise en abîme characteristic of the choral performance, adopting the ideal form of concentric lines ceaselessly expanding outwards, reaching out from the world represented by the song and the dance to the real world of the people watching it.
As for the actual effect of this contemplation on the viewers, we can single out three reactions among the gods watching the surprised lovers: a) a general reaction among the gods remarking on how the transgressor, however beautiful or strong, gets caught in the end, a moral reaffirmation of social values, albeit expressed in a humorous way; b) the playful dialogue between Apollo and Hermes, two handsome κοῦροι themselves, with Hermes gazing at Aphrodite's naked beauty and wishing he were Ares embracing the goddess,26 a wish, full of desire, which corresponds to the reciprocal mimesis of choral performance, achieved through the desire (ἵμερος) that the contemplation of the beauty of both the dancers and the dance arouses in the spectator; c) Poseidon's reaction, which, in contrast to the preceding gnomic and erotic remarks, takes us to the judicial aspect of the scene, in which the outrage (χόλος) and the potential conflict (νεῖκος) the adultery has provoked are resolved by the agreement to pay compensation.
It is difficult not to be reminded here of the sequence, on the shield of Achilles, of the wedding-scene, with the singing of the hymenaios (a context where the first two reactions to Ares' and Aphrodite's predicament just mentioned would not be out of place), followed by the litigation scene in the agora, where the νεῖκος can be avoided by the acceptance of a compensation (ἄποινα) by the injured party, a procedure held under the active contemplation of the community. In his study of this scene, Nagy has isolated the same pattern of expanding concentric circles that I have found in Demodocos' song.27 But I would further develop his analysis by arguing that also in the Iliad passage the source of this pattern and its performative function derives ultimately from a choral matrix. If we follow the logic of expanding concentric circles, each one spilling over into the next, as described by Nagy, we must not forget that the sequence has begun at the centre of the shield, the choral movements of the astral bodies, thence spilling over into the double scene of the wedding song and the litigation in the agora. Since at the other end of the chain, as Nagy rightly observes, we find the epic poet and his public (or rather an endless succession of epic performances), the importance of this choral matrix for the representation of epic as a genre, and particularly for its function of creating a collective, articulate identity for its audience, becomes evident.28 On the Phaeacian occasion we find a similar social function to that described in the litigation scene on the shield, as Demodocos' performance, like the dance of the βητάρμονες that follows, solves the νεῖκος that had arisen between the guest and Euryalos, and integrates the guest into the community by the gift of ξένια, consisting here of richly woven robes and a talent of gold. As for the crowd watching the dance, this contemplation defines, collectively, the communal identity of the Phaeacians, who excel in dancing, in the highly agonistic context of the whole episode; individually, it provokes a pleasure that ranges from the image of youth and dexterity projected upon each spectator, who identifies with the dancers to the point of ideally exchanging places with them, to the satisfaction of the parents looking at their sons creating a better image of them and their οἶκος for all to see and admire, enhancing thereby their social status and preparing an advantageous marriage.29 Thus, Demodocos' song and its accompanying dance in the Odyssey provide a useful parallel, at several levels, to the description of the shield of Achilles in the Iliad, and particularly to the meaning and function of its choral scenes.
26 It is interesting to note here the exact inversion of this wish (again in the positive moral sense) in Alcman's partheneion, with the warning against a mortal aspiring to lie with golden Aphrodite (pmgf 1.17), or the narratization of this choral motif in Anchises' story, as told in the Homeric Hymn to Aphrodite, where the hero agrees to lie with the goddess believing her to be a maiden abducted from the chorus (117-118). 27 Nagy (2003). 28 On chorality as a pattern underlying epic poetry and providing it with the power to represent and generate order both inside and outside the poem cf. Carruesco (2010). 29 This feeling has previously been described in Odysseus' praise of Nausicaa's beauty (Od. 6.154-159), and finds its divine paradigm in Zeus and Leto watching their sons dancing in the χορός.
Let us now return to the description of the shield. It is in the context of these abstract patterns created by the dancers and enjoyed by the viewers that we find the second comparison of the passage, which, like Daedalus' χορός, also regards the spatial dimension of the performance, namely the round form of the dance compared to the potter's wheel (600-601: ὡς ὅτε τις τροχὸν ἄρμενον ἐν παλάμῃσιν / ἑζόμενος κεραμεὺς πειρήσεται, αἴ κε θέῃσιν, 'as when a potter crouching makes trial of his wheel, holding / it close in his hands, to see if it will run smooth'). Through the epithet ἄρμενον, the wheel, like the shield and the chorus, is itself presented as a complex artifact made up of closely fitted pieces. But the wheel is at the same time a tool for the production of other artifacts: the vases which are the potter's works. Thus, as in the case of Daedalus, here too the comparison bears as much on the fashioning of the shield as on the choral performance being described (as brought out by the use of the agonistic terms πειράομαι, 'to test', and θέω, 'to run'). But by introducing the potter's work as a valid counterpart to the metalwork of Hephaestus and putting both in a symmetrical relationship to the chorus and the patterns it defines, as read by the watching crowd, the poet invites his audience (and us) to draw a parallel between the visual aspects of this representation and the reading of the iconographical language of contemporary artifacts. Since the shield is a piece of metalwork, iconographic comparisons have frequently been drawn between it and similar objects, notably the two series of Cretan shields and Cypro-Phoenician bowls, which indeed offer strikingly close parallels.30 However, taking a cue from the comparison of the patterns of the dance to the wheel of the potter, we can now try to read the iconography on vases from the Geometric period from the perspective opened up by the previous analysis of the scenes on the shield.31 In the extratextual world, the specific relationship between the potter and the chorus lies in the need for the former to supply vases that are to be used on the occasions where choral performances take place, such as weddings, funerals, even banquets, or that are destined to become votive offerings or prizes awarded to the best dancers in agonistic festivals. This is precisely the case for the famous Dipylon oinochoe that bears what may be the oldest inscription (ceg 432, c. 740 bc) we have in the Greek alphabet: hος νῦν ὀρχεστο͂ν πάντον ἀταλότατα παίζει, / το͂ τόδε κλ[.]μιν[, 'whoever of all dancers now plays most delicately, to him this …' (cf. Il. 18.567: παρθενικαὶ δὲ καὶ ἠΐθεοι ἀταλὰ φρονέοντες, 'young girls and young men of delicate spirit', my translation) (figs 4.1a and b). It is significant to remark that the inscription alludes to the dance in the context of a self-referential statement (τόδε) that links the vase and its function to a specific (νῦν) performance, though this means any specific performance taking place in the here and now of repeated ritual. In a sense, the temporality of the dance (νῦν) materializes into the physical presence of the vase (τόδε), and the correspondence between the adverb and the pronoun of proximity emphasizes that the object is somehow the equivalent or the substitute of the dance.
30 Edwards (1991) 203-206. 31 On the iconography of Geometric pottery, cf. Coldstream (1968), Himmelmann-Wildschutz (1968), Schweitzer (1971), Ahlberg (1971), Carter (1972), Whitley (1991), Rystedt and Wells (eds) (2006), Langdon (2008).
fig. 4.1a Attic oinochoe, c. 740 bc, from Dipylon © Hellenic Ministry of Culture and Sports-Archaeological Receipts Fund
Equally noteworthy is the display of the inscription around the vase, which, through the form and direction of the letters and by echoing the movement of the geometric bands below, seems to be trying to imitate the movement of the dance to which it alludes. A later, more explicit example of this choral layout of the letters on a vase can be seen on a Corinthian aryballos depicting a dancer and a flute-player (fig. 4.2), on which the movement of the dancer is suggested through the sinuous disposition of the letters of the inscription, which again alludes to the dance itself. It could be objected, of course, that the primitive, tentative character of such an early example of writing as that on the Dipylon oinochoe (the inscription even seems to have been left unfinished) would be at odds with the attachment of a conscious symbolic meaning to its spatial layout. On the contrary, I would argue that precisely because this is a pioneering effort, it would have been all too natural for the writer to have borne in mind a visual, not just textual, parallel, one familiar to him from the 'reading' of the geometric patterns he was accustomed to, namely a choral reading, as I will try to argue in what follows, particularly as it is naturally evoked here by the content of the inscription. The name βουστροφηδόν, applied to a particular (and particularly early) layout of an inscription, bears witness to a similar understanding of the visual aspect of writing through a metaphoric image, one which, as we have just seen, also provides a model for the reading of dance movements (δινεύω, στρέφω). We could thus argue that the choral paradigm offered a mental frame that could make intelligible in the first place the possibility of visualizing speech in space and fixing it in time, in the same way as the abstract patterns the dancers make visible with their movement and even draw on a soft ground (cf. Od. 8.260: λείηναν δὲ χορόν, 'smoothed a dancing space'). The last observation takes us back to our main argument in this paper, the relationship between epic descriptions of choral performance and Geometric iconography.
The high frequency of representations of choral performances on Geometric pottery, which sets a trend that will continue in later periods, testifies to the importance accorded to the chorus in this cultural context, a point I hardly need to develop here.32 But the point I would like to make is that the multiple levels of the visual apprehension of the performance by its audience, as we have seen in the epic texts, may have a close parallel in the reading of the iconography of the Geometric period (1000-700 bc), especially the Late Geometric (second half of the eighth century), during which the figured image coexisted with the geometric decoration.
We have seen that the spectators of the dance took pleasure in the simultaneous contemplation of (i) the physical beauty of the dancers and their robes and ornaments, and (ii) the abstract patterns traced by the dancers, and that both levels could also be contemplated through a limited range of fixed symbolic images (which, at least in the case of choral lyric, are often evoked in the song as self-referential metaphoric allusions): the astral bodies, some specific animals and their movements, the form or decoration of a precious object (e.g. a necklace), a mythical place, such as the labyrinth, or, in Demodocos' song, Hephaestus' web imprisoning Ares and Aphrodite. If we now apply this perception to a particular case, such as a Late Geometric krater from Argos (fig. 4.3), we can observe a similar relationship between the figured panels, which depict a female chorus, and the central one, occupying a privileged iconographic position, which can be read as the representation of a dance pattern. Similarly, in the bands of birds framing the dancing women a parallel can be drawn between both images, based on the bird imagery that is common also in textual descriptions of choruses, such as in the texts of choral lyric. These bands have their counterpart in the zigzag lines framing the central panel, which in turn recall the zigzag line of the dancers' arms, forming a ὅρμος or a σειρά. The rest of the space is filled with similar strings of linear motifs that become circular through their continuous display around the vase, and this is precisely the point of the comparison of the shield to the potter's wheel.33 In the Argive krater, other figurative meanings could conceivably be attached to the lines in the central panel, such as a watery surface, like a water meadow or a pond, which would then suggest an identification of the dancers as a chorus of nymphs. Nevertheless, this possibility does not exclude the interpretation that I have proposed, since, as we have seen, the geometric patterns of the dance were open to representational imaged readings, which were, nevertheless, neither compulsory nor necessarily restricted to one fixed meaning. I would argue that the decoration of the vase, with its different visual levels encouraging an equally multilayered reading, could be looked at and read in a similar way as we are told by the epic text the choral performance was, appropriating and transposing its efficacy and portability to another medium and to changing social contexts, like marriage, ritual performance, the banquet and, ultimately, the grave.
fig. 4.3 Argive krater (Late Geometric), from grave T45 in Argos © Hellenic Ministry of Culture and Sports-Archaeological Receipts Fund; drawing: Paloma Aliende (Catalan Institute of Classical Archaeology)
But let us now consider some of the methodological and cultural implications of this interpretation and test its viability (and thence its validity) for the reading of a larger corpus of vases. The sources of the iconographical motifs of Geometric art can be fairly easily traced to Bronze Age Greece in some cases, to the Near East in others. What is specific to Greek Geometric art, as its very name implies, is the extraordinary development of the geometric patterns, which tend to occupy all the available space, and the rigour with which they are displayed over the surface of the vase with an exceptional sense of spatial articulation. It is not coincidental, I think, that this is precisely the fundamental feature of choral performance as a very powerful cultural tool to articulate the physical as well as the social space, particularly at the beginning of the polis, as numerous studies have pointed out.34 As for the origin of the motifs, the importance of textiles and weaving patterns has often been pointed out,35 and it is surely significant that this is also an essential symbolic, mythic, even etymological (cf. ὕμνος) referent for choral song and dance.36 The attribution of a meaning to this all-important geometric decoration in Dark Age Greece, which recurs on a number of objects besides vases, has frequently occupied scholars. Thus, Himmelmann-Wildschutz has interpreted the geometric patterns, such as the ubiquitous Kreisornamente, as vegetal motifs, reading, for instance, the meander as a stylized rendering of the wreaths crowning sympotic vessels.37 Similarly, Ahlberg, in her important study of the iconography of prothesis and ekphora scenes, sees every motif as representational (cf. fig. 4.4).38 On the opposite extreme, Whitley has argued for a purely social, non-iconographic interpretation, a non-representational code whose meaning lies in the conferral of status on the owners of the object.39
While I am in total agreement with him on the necessity for a social approach, rather than a purely iconographic one, from an analysis of the texts I would not exclude at least the possibility of a level of representation in the contemporary reading of these motifs, in the same way that the patterns of the dance could sometimes be read as imitating some element of reality, or simply as abstract images of kosmos or harmonia, beautiful in their own structural complexity. It is not, however, a naturalistic representation, albeit stylized, but rather a ritualized one. Since by and large we recognize the importance of ritual in figured scenes, when they exist, I do not think it can be denied when the object's decoration is purely geometric, since, as Kowalzig has recently argued, the chorus provides the main model for ritual as performative action in Archaic Greece.40 If we look briefly at the repertory of geometric decoration, we find a striking correspondence between it and the patterns we have found in the epic passages just analysed, especially in the description of the shield of Achilles. The disposition of the patterns can be either linear, frieze-like, or arranged in closed panels, in the manner of a metope or an emblem, a polarity that has its parallel in the distinction between processional (such as the two first dances on the shield of Achilles) and stationary choruses (such as the third one). In the first type, we find straight lines, sinuous lines, zig-zags, meanders or frieze-like repetition of the same motifs, often all the way around the vase. The main motif is the meander, which in its simplest expression traces a βουστροφηδόν or sinuous pattern (like a river or the movements of the ploughman), while in its most complex forms it gives the impression of a maze or a watery surface (fig. 4.5). Whatever its form, however, the meander suggests a movement based on the abstract principles of δινεύειν and στρέφειν, which are also choreographic concepts. Sometimes in the form of a key or a knot, rather than a single continuous line,41 the meander can also refer to the interlocking hands of the chorus, the bond (δεσμός) that is a defining feature of its figured iconographical representation. Also very common are the circular motifs, often in the form of concentric circles, which sometimes encircle another element in the centre, such as a dot, a cross, or a figured motif, such as an animal. Sometimes these circles are linked by tangential lines (fig. 4.8: note the bands under the grazing animals), giving concurrently both the impression of movement, like the meander, and of a combination of pieces, like a necklace or, indeed, a dancing chorus. In other cases these circles adopt a quasi-representational form, like a flower or, in a characteristic motif, an object resembling at the same time a wheel and an astral body, like the sun or the star-crowned heaven of the innermost band of the shield (fig. 4.6). Stars are also frequent, as is the swastika, possibly also an astral motif. As for the frequent grid patterns, made up of straight or diagonal lines, chequered or filled with dots, we have already encountered them in the intersecting στίχες of the dance, which could be read, for instance, as a web or a military or ritual formation (fig. 4.7).
34 Cf., e.g., de Polignac's analysis of the importance of processional choruses in defining the main organizing axes of the territory of the polis, especially those linking the extraurban sanctuaries to the ἄστυ: de Polignac (1995) passim. Cf. also Calame (2001), Langdon (2008). 35 Schweitzer (1971) 30: 'A series of phenomena suggest that (the Geometric style) developed alongside a lost textile art and this may even have been the origin of the Geometric art before 900 bc. (…) Surface ornaments such as the checkerboard, saw-tooth and lozenge patterns seem to be developed directly from weaving techniques.' 36 Though the precise etymology of ὕμνος is uncertain, it is usually linked to ὑφαίνω ('to weave') and/or ὑμήν ('membrane', with etymological relations to words meaning 'sewing seam'; cf. Chantraine, delg s.v.), which in turn could be related to ὑμέναιος. Whatever the true etymology, though, a Greek perception of the relationship is apparent in, e.g., Bacchyl. 5.10: ὕμνον ὑφάνας. 37 Himmelmann-Wildschutz (1962). 38 Ahlberg (1971); for example, according to her, 'the emblem zones denote a locality outside the house and the circular motifs may be curtains, or some kind of drapery or belonging to the architectural domain' (146). 39 Whitley (1991) 17-19. 40 Kowalzig (2007) 394-395. 41 The swastika can also be regarded as an isolated, non-linear version of this motif, with the possible reference to a stationary, as opposed to processional, chorus.
It is hardly necessary to point out here the choral character of these patterns. In this respect, we may compare all these geometric motifs and their dazzling display over the whole surface of the vase to the flashing movements of the dancers' feet (μαρμαρυγαὶ ποδῶν), which in the textual descriptions are said to attract the gaze of the spectator, fascinating him and provoking a sense of wonder (θαῦμα) and pleasure (τέρψις). I would like to emphasize, however, that these geometric motifs are not necessarily a stylized representation of the dancers, thus ultimately a figured iconographic language, but rather a visual rendering of the same abstract patterns that the dance traces on the ground for the brief duration of the ritual occasion, representing and fixing down the ritual action in images as the poet of the Iliad does in words. That these patterns could eventually be read as evoking some choral images within a fixed and codified repertoire is suggested by the texts and by those images where figures and geometric motifs echoing each other are juxtaposed, but this attribution of meaning was not compulsory and could vary according to the occasion and even perhaps the individual viewer. Thus, there is no point in trying to assign a single, specific meaning to each of the motifs, but, at the other extreme, neither should they be considered as having no iconographic meaning at all. When it comes to figured though still non-narrative motifs, the most frequent are animals, often repeated to form a continuous band (figs. 4.7 and 4.8). Here, also, the correspondence with the animal imagery of the chorus is significant: the bird, the horse, the cow, the deer, and the goat are the main species represented, frequently shown grazing, suckling, marching or even jumping. We may note here that the verb σκαίρω, used to describe the movement of the dancers in the vintage song on the shield of Achilles (Il. 18.572: μολπῇ τ' ἰυγμῷ τε ποσὶ σκαίροντες ἕποντο, 'with singing and whistling / and light dance-steps of their feet kept time to the music'), reappears elsewhere in Homer to denote the jumping of the young calves to their mothers after grazing in the meadow (Od. 10.412), a description that combines two very common iconographic motifs in the Geometric period: the grazing and the suckling animal.
It is interesting to note that when unequivocal figured motifs appear, whether in decorative layouts (such as friezes or panels) or in complex narrative scenes, there is often a perceptible effort to establish morphological parallels between the figured and the geometric motifs on the vase, as, in the Argive vase considered above (fig. 4.3), between the angular arms of the Argive dancers and the zigzag lines framing the panels, or, in a funerary scene on an Attic Late Geometric krater in the Metropolitan Museum (fig. 4.9), between the eyes of the mourners and the dotted circles filling the void space around them. The same parallel can be seen in a strange representation on an oinochoe in Boston, dated c. 735-720 (fig. 4.10), which has been interpreted, rightly I think, as an acrobatic dance (like the 'revolving acrobats', κυβιστητῆρες δινέοντες, in the epic texts).42 Here, the important feature of the open eye of the central inverted figure seems to correspond to the frame of dotted circles surrounding the scene.
It is very tempting to read these cases as representations of the crowd watching the ritual, be it the prothesis or the dance, but in accordance with our analysis it seems preferable to consider this reading as merely a possible one, for us as it probably was for the contemporary viewer. In any case, already in the Geometric phase, we can observe the importance of the eye as an iconographical motif, which is well known for later periods of vase painting (cf. the eye cups in Attic black- and red-figure pottery), and link it to the choral performance to which it primarily belongs (fig. 4.11). Indeed, references to the eyes and the gaze, both the dancers' and the spectators', are, not surprisingly, ubiquitous in the description of the chorus, in epic as in lyric texts: expressions like ὁφθαλμοῖσιν ἰδών are frequent in choral contexts (cf. e.g. Il. 16.182), and an excellent example from choral lyric is provided, in Alcman's first partheneion, by the apostrophe ἦ οὐχ ὁρῆις; (pmgf 1.50). But we can further observe here how the mirroring and projecting power of the choral gaze is expressed through the attribution to the eye of the beholder of some images appropriate to the chorus itself: thus, the terms δίνω and στρέφω can be applied to the movements of the eye, even combined in a single verb to express the moment of Patroklos' death (16.792: στρεφεδίνηθεν δέ οἰ ὄσσε, 'his eyes spun'; cf. Theseus' eyes in Bacchyl. 17.18: δίνασεν ὄμμα). Similarly, the epithet ἑλικοβλέφαρος, also denoting beauty, attributes to the eye and the gaze a spiral movement that we find often in the movements of the chorus. Most significantly, the two terms for the pupil are associated to female attractiveness leading to marriage or intercourse, a concept that in early Greek thought finds a ritualized expression in the chorus: (i) the word γλήνη, denoting brightness, reappears in a derived form in the expression τρίγληνα ἕρματα, three-eyed earrings, mentioned among other female attires, jewels or robes, in contexts of wooing and seduction,43 and (ii) the pupil can be simply called the κόρη, evoking the image of a maiden at the centre of a circular space, be it the chorus or the θάλαμος (as in Empedocles, vs 31 b 84). Similarly, the μαρμαρυγαί of the dancers' feet (Od. 8.265; HHApoll. 203) correspond to the μαρμαίροντα ὄμματα, the flashing eyes by which Aphrodite is recognized by Helen under her disguise as an old woman (Il. 3.397).44 Other images frequently used to describe the eye, based on its circular shape, include the astral bodies (sun, moon or stars) and the wheel. These verbal correspondences linking the eye and the chorus are also paralleled in the Geometric iconographic language, in which visual links are established, for example, between the eyes, the shields, the chariot wheels, and the garlands, in the figured scenes (figs. 4.11 and 4.12),45 as well as with the geometric dotted circles just mentioned, in the purely abstract fields, suggesting a reciprocal mirroring, both inside the iconographic space, between the different elements there, and outside, between the image and its viewer. Thus, the importance of the eye motif in choral contexts, in texts as well as in iconography, implies a mirroring effect, a dynamic relationship that opens up the field of representation to reach out to the beholders surrounding it, at the same time drawing them into it and projecting itself upon them.
43 Il. 14.182-183 (the earrings worn by Hera to seduce Zeus), Od. 18.298 (earrings among the gifts of wooing offered by the suitors to Penelope). In both cases, the link with the concept of χάρις is made explicit through the same formulaic line: τρίγληνα μορόεντα, χάρις δ' ἀπελάμπετο πολλή. In the first example, moreover, an association can be made with the previous episode in which Aphrodite sends Helen to lie with Paris (Il. 3.385-447), since Paris's words in seeing her (441-446) are very similar to those of Zeus seduced by the vision of Hera's beauty. In this case the choral connection is made explicit, though it is here referred to Paris, irresistibly attractive as though he was going to or coming from the chorus (393-394: χορὸν δὲ / ἔρχεσθ', ἠὲ χοροῖο νέον λήγοντα καθίζειν, 'he was going / rather to a dance, or rested and had been dancing lately'). The inversion of gender roles in that context is justified by the thematization of Paris' cowardice (cf. the insults of Priam to his surviving sons, linking them to the chorus in Il. 24.261), but the relevance of the choral association is none the less clearly established for both episodes (cf. following note).
With the appearance and generalization in the Late Geometric period of complex figured and narrative scenes, the geometric patterns tend to occupy the margins of the iconographic space, often adopting the function of frames, but also, occasionally, placed inside specific panels, though now in a marginal position to the central scenes. When the chorus appears in the representation, it tends to occupy this same position, often in or near the neck of the vase, as a framing or encircling motif of the whole iconographic space.46 This disposition became a recurrent iconographic feature, running from the Late Geometric period (e.g. a Boeotian pythoid jar [fig. 4.13]) through the Orientalizing (e.g. two Attic loutrophoroi by the Analatos Painter, one in the National Archaeological Museum at Athens, the other in the Louvre [fig. 4.14]) to the Black Figure (e.g. the representation of the arrival of Theseus and his companions to Delos on the François Vase). A kylix from Tarquinia (fig. 4.15) offers a very interesting example, with Heracles and Triton at the centre and a chorus of girls (which can also represent Nereids) surrounding them. The centre of the space is occupied by the clasped hands of Heracles imprisoning Triton in a wrestling hold, an ἄφυκτος δεσμός (like that of Hephaestus' web in the Odyssey) represented here as a labyrinthine meander which echoes the locked hands of the surrounding chorus. This becomes even more evident if compared to another Attic red-figure kylix with the battle of Thetis and Peleus (fig. 4.16). Here, the place of the chorus is occupied by a geometric band, a περιπλοκή exactly reproducing the central clasped hands motif, which in this mythical context is a nuptial as much as a wrestling motif.
It is important to note that this liminal disposition of the chorus is precisely that of the choral scenes in the shield of Achilles, with the first chorus opening the band representing the City at Peace and the City at War, the second chorus closing the band with the three seasonal agricultural works, and the last choral scene both occupying an independent space of its own and functioning as the closing band of the whole shield (with the exception of the non-narrative rim with the representation of the Ocean). We may also mention in this framing or liminal position the chorus of the Muses in the proems to the Catalogue of Ships [...] the spectators enter the world of the chorus, believing that they are the ones singing, and by this mirroring look they are conferred with the quasi-divine beauty of the κόραι as well as their collective identity as Ionians. But at the same time the chorus connects also the poet, the blind man from Chios, with his audience, the Ionians to whom the girls will confirm his excellence and that of his poems. I have argued elsewhere that the epic passages I have just mentioned show that the epic poet needs to appropriate the chorus' voice and its mimetic force so that he may be able to order the world through the catalogic and genealogic mode of the discourse.47 The three choruses that rhythmically scan the Homeric Hymn to Apollo, for instance, are to be closely related to the prominence of the geographical catalogues in this work, just as it is the invocation to the chorus of the Muses in Iliad 2, as opposed to the single thea at the beginning of the poem, that enables the poet to tackle the articulation of the Greek heroic world in the Catalogue of Ships, as Hesiod does in his works. Similarly, I would argue here that the potter too appropriates the active mimetic power of the choral performance, not only through the frequent representations of choruses, often in liminal spaces, but more generally through the extensive use of geometric patterns, first exclusively, and later in a framing relationship to the figured scenes. In this way, the chorus does not need to be actually represented to give the image its power of fascination, its ability to mirror the world surrounding it and project onto it its ideal images of order or heroic past (though it must also be stressed that, as in Demodocos' song, it can also do so in an inverted manner, as shown by the frequent representations of monsters, shipwrecks, or scenes of violence).48 The κόσμος that is the ornamentation is at the same time the κόσμος as an ordered and articulated world. In this respect, the iconographic language of Geometric pottery is the first episode in a long series of sophisticated experimentations with the power and limits of the image that is so characteristic of Greek art. The analysis of these iconographic strategies, recognized as deriving from the cultural model of chorality, can help us in turn understand better similar strategies in poetry, as they were 'looked at' and 'read' by the audience.
45 A perfect match between the eyes of the human figures (female mourners above and charioteers below) and the central point of the chariot wheel, where the axis is fixed, can be found in a Late Geometric neck-handled amphora from the Kerameikos (Schweitzer [1971] figs. 49-50). 46 We can probably include in this group the representations of mourners in procession, as they are to be considered as singing the threnos (cf., e.g., an Attic amphora reproduced in Coldstream [1968] pl. 11d). Also here belong the frequent representations of friezes of grazing or marching animals around the neck of the vase (cf., e.g., an Attic Late Geometric amphora in the Benaki Museum, with a frieze of grazing horses and waterbirds: Schweitzer (1971) pl. 30).
fig. 4.12 Dancing warriors. LG cup. Staatliche Museen, Berlin (inv. 6029). Picture: Wikimedia Commons (public domain). fig. 4.13 Boeotian LG pythoid jar, from Thebes. Archaeological Museum, Thebes BE469. © Hellenic Ministry of Culture and Sports / Archaeological Receipts Fund. fig. 4.16 Attic red-figure kylix. Staatliche Museen, Berlin (inv. F2279). Thetis and Peleus. bpk / Antikensammlung, Staatliche Museen zu Berlin / Johannes Laurentius. | 2019-06-19T13:16:59.354Z | 2016-05-12T00:00:00.000 | {
"year": 2016,
"sha1": "a58f8fe82e32a488dd5c6df2f6214271e7fdf32f",
"oa_license": "CCBYNC",
"oa_url": "https://brill.com/downloadpdf/book/edcoll/9789004314849/B9789004314849_005.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9465a619f8c59c973aeff02bec46f20b8272ef08",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Art"
]
} |
44099697 | pes2o/s2orc | v3-fos-license | Precision Agriculture Design Method Using a Distributed Computing Architecture on Internet of Things Context †
The Internet of Things (IoT) has opened productive ways to cultivate soil with the use of low-cost hardware (sensors/actuators) and communication (Internet) technologies. Remote equipment and crop monitoring, predictive analytics, weather forecasting for crops, and smart logistics and warehousing are some examples of these new opportunities. Nevertheless, farmers are agriculture experts but usually do not have experience with IoT applications. Users of IoT applications must participate in their design, which improves integration and adoption. In this work, different industrial agricultural facilities are analysed with farmers and growers to design new functionalities based on the deployment of IoT paradigms. A user-centred design model is used to obtain knowledge and experience in the process of introducing technology into agricultural applications. Internet of Things paradigms are used as resources to facilitate decision making. The IoT architecture, operating rules and smart processes are implemented using a distributed model based on the edge and fog computing paradigms. A communication architecture is proposed using these technologies. The aim is to help farmers develop smart systems in both current and new facilities. Different decision trees to automate the installation, designed by the farmer, can be easily deployed using the method proposed in this document.
purpose and, therefore, not adapted to the specific needs of each farmer. In this work a user-centred method is proposed to design intelligent and adapted services in which each farmer decides on his or her own installation using the edge and fog (distributed computing) paradigms on Internet of Things technologies (Figure 1). This method is designed around different use cases and tested in an automated greenhouse as an example of its utility. This paper is an expanded version of the work presented in [1]. It is organized as follows: Section 2 reviews precision agriculture scenarios, how to use user-centred design methodologies, IoT technologies and their deployment, and the capabilities and potential of the edge and fog computing paradigms in these scenarios. Different greenhouses are analysed and farmers are consulted. Section 3 proposes a method to deploy a distributed IoT architecture using edge and fog nodes that offer a set of new resources usable in any type of installation, which facilitates the involvement of the farmer. In Section 4, experiments including services on edge and fog nodes, connected by IoT communication protocols, are performed. Finally, Section 5 provides conclusions and future work.
Related Works
Although more complex definitions exist, the simple description of PA is a way to "apply the right treatment in the right place at the right time" [2]. Precision agriculture comprises a set of technologies that combine sensors, information systems, enhanced machinery and informed management to optimize production by accounting for variability and uncertainties within agricultural systems. It is a farming management concept based upon observing, measuring and responding to inter- and intra-field variability in crops [3]. Methods and technologies for deciding how, where and when to use sensors and machinery should involve all the main actors: that is, farmers and information and communication technicians. User-centred methods and IoT communication technologies applied to precision agriculture are reviewed in this section. Finally, different greenhouses are analysed and farmers are consulted.
User-Centred Design Models
In the PA context, user-centred design (UCD) describes a design process in which farmers influence how the design takes shape. There are several ways in which the user (agricultural specialist) can be involved in the process. The term covers a set of methods for creating models on which to design adapted solutions. The user-centred design process works against subjective assumptions about user behaviour: it requires proof that the design decisions are effective. If user-centred design is properly done, applications become an outcome of actively engaging users, so design decisions made by observing and listening to them are not based on personal preferences. User experience (UX) is one of the many focuses of UCD. It includes the user's entire experience with the product, including physical and emotional reactions. UCD is objective and often relies on data to support design decisions [4]. According to [5], user-centred design is a development method that guarantees that a product, software or web site will be easy to use. The International Usability Standard (ISO 13407) [6] specifies the principles that underlie user-centred design:
• Requirements gathering: understanding and specifying the context of use
• Requirements specification: specifying the user and organisational requirements
• Design: producing designs and prototypes
• Evaluation: carrying out user-based assessment of the site
The design is based upon an explicit understanding of users, tasks and environments. Users are involved throughout design and development. The process is iterative. The design is driven and refined by user-centred evaluation. The design addresses the whole user experience. The design team includes multidisciplinary skills and perspectives.
Internet of Things: Architectures and Protocols
IoT is developed using layered architectures capable of connecting a huge number of devices with each other and with the established services. The basic IoT model has a three-layer architecture consisting of the perception, network and application layers. IoT faces several challenges, especially in the field of privacy and security, so to overcome these issues new standard architectures need to focus on essential factors such as Quality of Service (QoS), data integrity, sustainability and confidentiality. The IAB (Internet Architecture Board) has published RFC 7452, Architectural Considerations in Smart Object Networking, which offers guidance to engineers designing Internet-connected smart objects. A Request for Comments (RFC), in the context of Internet governance, is a type of publication from the Internet Engineering Task Force (IETF) and the Internet Society (ISOC), the principal technical development and standards-setting bodies for the Internet. Table 1 illustrates different works that apply the layer model in IoT architectures (e.g., a five-layer model of business, application, service, object abstraction and objects [10]). IoT needs protocols adapted to the new requirements: traditional protocols are extended and new protocols are proposed, offering different options in different contexts. IoT now has a wide range of applications. A smart device can have a wired or wireless connection. As far as wireless IoT is concerned, many different wireless communication technologies and protocols can be used to connect the smart device, such as Internet Protocol Version 6 (IPv6) over Low-power Wireless Personal Area Networks (6LoWPAN), ZigBee, Bluetooth Low Energy (BLE), Z-Wave and Near Field Communication (NFC). These are short-range standard network protocols, while SigFox and Cellular are Low Power Wide Area Network (LPWAN) standard protocols. In [11] a review and comparison of different IoT communication protocols is presented; this comparison aims at providing guidelines so that researchers can select the right protocol for different applications. Table 2 illustrates different protocols used in the architecture layers. Choosing the most appropriate protocol depends on several factors, of which the most important are environmental conditions, network characteristics, the amount of data to be transferred, security levels and quality-of-service requests [12]. CoAP is primarily a one-to-one protocol for transferring state information between client and server, while MQTT is a many-to-many communication protocol for exchanging messages between multiple clients. CoAP runs over UDP, which means that communication overhead is significantly reduced. If constrained communication and battery consumption are not an issue, RESTful services can be easily implemented and interact with the Internet using the worldwide HTTP [13]. If the targeted final applications require massive updates of the same value, the MQTT protocol is more suitable. In this work different protocols (MQTT, HTTP, Bluetooth, WiFi, LTE, ...) can be used to develop the proposed architecture.
Internet of Things Technologies Applied to PA Scenarios
Advances in electronics, computing and telecommunications are allowing the development of new devices (sensors, actuators and computing nodes) with wireless communication capabilities that can be installed at any location and are smaller, more energy efficient, autonomous, more powerful and lower cost [14][15][16][17][18]. IoT work using user-driven service modelling is proposed in [19]. Low-cost IoT devices that need to gather and transmit sensor data and receive remote commands are shown in [20][21][22]. IoT uses the connection between devices to improve their efficiency and user experience, communication being one of the main elements of a proper IoT network. A review of the most common wired and wireless communication protocols, a discussion of their characteristics, advantages and disadvantages, and a comparison study to choose the best bidirectional sensor network composed of low-power devices is presented in [23]. These previous works show the degree of development of IoT technology, which has also been experienced in precision agriculture in recent years.
IoT technologies have been proposed for PA scenarios. In [24] this paradigm is analysed as a solution in precision farming. IoT smart farming applications include farm parameter tracking, monitoring, field observation and storage monitoring. The work Internet of Things Platform for Smart Farming [25] presents a platform based on IoT technologies that can automate the collection of environmental, soil, fertilisation and irrigation data; automatically correlate such data and filter out invalid data from the perspective of assessing crop performance; and compute crop forecasts and personalised crop recommendations for any particular farm. This platform (SmartFarmNet) can integrate virtually any IoT device, including commercially available sensors, cameras, weather stations, etc., and store their data in the cloud for performance analysis and recommendations. An evaluation of the SmartFarmNet platform and the experiences and lessons learnt in developing this system conclude the paper. SmartFarmNet is the first and currently largest system in the world (in terms of the number of sensors attached, crops assessed, and users it supports) that provides crop performance analysis and recommendations.
In [9] a greenhouse with hydroponic crop production was designed, developed and tested using Ubiquitous Sensor Network monitoring and control under the Internet of Things paradigm. The experimental results showed that Internet technologies and smart object communication patterns can be combined to encourage the development of precision agriculture. They demonstrated added benefits (cost, energy, smart development, acceptance by agricultural specialists) when a project is launched. Other related work using ZigBee technology is shown in [26], where artificial intelligence and decision support approaches were developed: technology for real-time monitoring of citrus soil moisture and nutrients, together with research on a decision support system integrating fertilization and irrigation. The results showed that the system could help the grower to fertilize or irrigate scientifically, improve the precision of citrus production operations, and reduce both the cost and the pollution caused by chemical fertilizer. A review of the state-of-the-art of Big Data applications in smart farming is performed in [27]. Malche et al. [28] proposed a prototype IoT system for water level monitoring which can be implemented in future smart villages in India. Manufacturers in the agricultural sector highlight the importance of IoT in [29][30][31][32]. PA is effectively a suite of methods, approaches and instrumentation that farmers should examine in detail to decide which is the most suitable for their business.
Internet of Things, Cloud and Machine Learning Evolution: Edge and Fog Computing Paradigms
The Internet of Things (IoT) aims to bring every object (e.g. smart cameras, environmental sensors, control appliances, machine learning analysis) online, hence generating massive amounts of data that can overwhelm storage systems and data analytics applications. Cloud computing offers services at the infrastructure level that can scale to IoT storage and processing requirements. However, applications such as sensor monitoring, control and analysis response require low latency; the delay caused by transferring data to the cloud and back to the application can seriously impact their performance. To overcome this limitation, the fog and edge computing paradigms have been proposed, in which cloud services are extended to the edge of the network to decrease latency and network congestion [33]. Both fog computing and edge computing involve pushing intelligence and processing capabilities down closer to where the data originates: pumps, motors, sensors, relays, etc. The key difference between the two architectures is exactly where that intelligence and computing power is placed:
• Fog computing pushes intelligence down to the local area network level, processing data in a fog node or IoT gateway
• Edge computing pushes the intelligence, processing power and communication capabilities of an edge gateway or appliance directly into devices like programmable automation controllers (PACs)
With IoT implementation now becoming more widespread, devices will generate a lot of data at the end of the network and many applications will be deployed at the edge to process the information. Cisco Systems predicts that an estimated 50 billion devices will connect to the Internet by 2020 [34][35][36]. Some of the applications they run might require very short response times, some might involve private data, and some might produce huge quantities of data. Cloud computing alone cannot support these IoT applications; the edge and fog computing paradigms, on the other hand, can do so and will promote many new IoT applications.
The work done in [37] concludes that wireless sensor and actuator networks based on edge computing are experiencing fast development and opportunities in the post-cloud era and are used in more and more applications. In [38] a fog computing architecture based on radio access networks is proposed for smart-city services.
Automated Technologies in Greenhouses
Different greenhouses in the south-east of Spain have been visited to analyse the types of installation and to consult expert users. The greenhouse with the highest level of automation featured a full set of sensors and actuators; however, not all the sensors could be related to one another. Self-assembled greenhouses contain two large subsystems that are not interoperable, each installing different types of control and technologies:
• Irrigation and nutrition
• Air conditioning and ventilation
In these facilities, an ambient temperature sensor of the air conditioning system is not related to an irrigation water temperature sensor of the irrigation system. Figure 2 shows different automated greenhouses with their main subsystems listed.
User Centred and Computing Method Model: Distributed Computing Architecture Based on Edge and Fog Nodes
Current agricultural facilities are divided into subsystems (irrigation, light, climate, soil, crop and energy) that are not interconnected. Industrial programmable logic controllers and specialised sensors provide basic automation services in each subsystem. The Internet and electronic devices (smartphones) provide new functionalities based on information access, control and monitoring. Human interfaces on smartphones connected to web servers are examples of new services developed over the past years. Agricultural technicians and farmers have knowledge that can be converted into expert rules for device control. These rules are programmed and implemented on current programmable devices; however, they are static rules, which means that they do not evolve when new conditions occur, nor do they adapt to the singularities of each installation. The farmer has to decide how to set the rules: what pH the irrigation water must have, how much water should be programmed into the irrigation process, etc. Also, each rule only affects a single subsystem (irrigation, climate); there is no interoperability.
Considering this context, a new facility design and development method is proposed in this work. The aim is for the farmer to participate in the automated activities and for the subsystems to become interoperable. A method that implements automatic rules and automates decision making by considering the behaviour of the installation itself is also proposed. The phases of the proposed model are shown in Figure 3.
• Analysis: two kinds of users are identified in this phase (agriculture expert user and ICT technician). Expert users in agriculture are interviewed to define the main processes to control. All these issues are discussed with the ICT expert in a participatory design process. The results of this first approach are the things required to design services and control. In this phase a user-centred methodology captures the farmer's requirements.
• Design: the model is based on an architecture with three levels: edge, fog and cloud services. In this phase an adapted architecture using these levels is designed. The adapted architecture is shown in Figure 4.
• Integration and data analysis: installation and integration subsystems are developed in this phase. Data analysis is proposed to design machine learning services based on expert rules defined with the farmer.
• Start-up, measurement and feedback: tests and feedback are launched. The first expert rules are integrated under farmer supervision. New rules are designed through feedback processes. Automatic and adapted rules are developed using artificial intelligence systems with machine learning platforms.
User-Centred Analysis and Design
Two cases are treated:
• An agricultural installation with some automated facilities already installed
• A new agricultural installation
The method is the same for both. Expert users in agriculture are interviewed to define the new processes to control. In this first approach, the things (objects) required, their relationships and the potential services are detected. Once objects and services have been detected, they must be related to the necessary communication and control technologies (IoT protocols). Human interfaces are adjusted. Expert rules and intelligent services are analysed (edge and fog computing proposal). Finally, the installation, maintenance and operation methods are designed. All this is designed jointly by the agricultural technician and the information technology expert.
The results of this first approach are the things required to design services and control. A first set of sensors, actuators, variables, processes and controllers is designed considering the production facility. This set of objects will be considered as things in the next stage.
In this description, a thing is formed by an object/entity and a context with associated data. Each thing has an n-tuple data structure (ID, time, date, location, relations, state) in which its ID, data, location, relations with other things and states are defined. Table 3 represents different things. Expert users design control rules using the things defined. These control rules are part of control processes (climate, soil, irrigation, crop, energy or image) that are distributed across different embedded systems connected to the network (intranet/Internet). Things are a virtual representation of all available resources that can be deployed in the different subsystems of the installation.
At this level all objects/things are recognized by designers. Basic control algorithms of all subsystems are designed.
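By way of illustration, the n-tuple above maps naturally onto a small data structure. The following Python sketch is not part of the original design documents; the field names follow the n-tuple, time and date are merged into one timestamp, and the topic-style IDs are assumptions for the example:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Thing:
    """Virtual representation of an installation resource (sensor, actuator or process)."""
    thing_id: str                                        # unique ID, e.g. "GH1/soil/humidity-01"
    timestamp: datetime                                  # time and date of the last observation
    location: str                                        # location ID, e.g. greenhouse "GH1"
    relations: List[str] = field(default_factory=list)   # IDs of related things
    state: str = "unknown"                               # last known state or reading

# Example: a soil humidity sensor related to an irrigation valve
soil_sensor = Thing(
    thing_id="GH1/soil/humidity-01",
    timestamp=datetime.now(),
    location="GH1",
    relations=["GH1/irrigation/valve-01"],
    state="38.5 %RH",
)
print(soil_sensor)
```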
Integration: Architecture Development
In the previous phase the objects and their relationship with the basic algorithms have been designed. In this phase an architecture adapted to the available facility is developed. The requirements are:
1. Interconnection and data access for all subsystem data
2. Facilities and resources to implement expert rules
3. Configuration, operation and modification processes
IoT and AI paradigms provide resources to propose an innovative architecture that can be used in new smart precision agriculture services. Edge computing on control devices and fog computing nodes installed on the local network provide powerful technologies with which to implement configuration, operation and improvement processes.
IoT protocols provide resources to capture and communicate all subsystem data. Each subsystem is composed of objects/things (sensors/actuators) that can be connected and processed using nodes on sensor networks with IoT protocols. IoT protocols are designed to meet the communication scenarios and requirements established for PA; they are optimised for control and two-way open communication channels. In [39][40][41] the Message Queuing Telemetry Transport (MQTT) protocol is proposed as the communication paradigm between sensors, actuators, communication nodes, devices and subsystems. Some of the features that make it especially suitable for this project are:
• MQTT is a publish-subscribe messaging protocol developed for resource-constrained devices [42], a model already in use by enterprises worldwide, and can work with legacy systems.
• All messages have a topic path composed of words separated by slashes. A common form is /place/device-type/device-id/measurement-type/status. Subscribers may use wildcards to subscribe to all measurements coming from a specific class of device (see the sketch after this list).
• The bandwidth requirements are extremely low, and the nature of the protocol makes it very energy-efficient.
• The programming interface is very simple, and the client memory footprint is small, making it especially suitable for embedded devices.
• Three Quality of Service (QoS) levels provide reliable operation [43].
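A minimal sketch of this publish-subscribe pattern, using the open-source Eclipse paho-mqtt client library; the broker address and topic names are illustrative, not values taken from the deployment:

```python
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"  # fog node running the MQTT broker (illustrative address)

def on_message(client, userdata, msg):
    # msg.topic follows /place/device-type/device-id/measurement-type/status
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)

# Wildcard subscription: all humidity measurements from any soil sensor in GH1
client.subscribe("GH1/soil-sensor/+/humidity/status", qos=1)

# A sensor node would publish its readings to its own topic
client.publish("GH1/soil-sensor/s01/humidity/status", "38.5", qos=1)

client.loop_forever()  # blocking network loop; use loop_start() for non-blocking use
```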
Ubiquitous networks allow an n-to-m node communication model: any node is able to query and be queried by other nodes. In addition, any node may play the role of a base station (sink node) capable of transmitting its information to remote processing places through a gateway device. USN local nodes can use and process local data; with a gateway these nodes have global accessibility and offer extended services in an IoT scenario. Local and global access to the same node (sensor/device/actuator) offers different possibilities and benefits: whereas local data processing is necessary for basic process control (security, system start-stop, etc.), global processing (analytics) can be used for pattern detection and information generation. In this sense, the proposed platform combines both technologies: different USNs over a local network area (intranet) connected to cloud IoT services (Internet). A computing layer in the local area, called edge computing, serves as the interface between control processes and cloud services. This layer is able to process data before communicating it to the cloud.
Data Analysis: Edge and Fog Computing Configuration
The development of edge and fog computing can be understood in three phases:
• Connection: numerous heterogeneous, real-time connections between terminals and devices will serve edge computing, as will automatic network deployment and operation. Additionally, the security, reliability and interoperability of connections should be guaranteed. An application of this phase is remote automatic reading of soil parameters and ambient conditions.
• Data treatment on edge computing devices: in this phase, data analysis and automatic services develop new capabilities that are implemented on the new edge nodes. Applications of this phase include data filtering, predictive calculation of climatic data, classification services and event detection.
• Services on fog computing nodes: enabled by technologies such as AI and IoT communication protocols, fog computing nodes carry out smart analysis and computing, implement dynamic, real-time self-optimization, and execute policy adjustments. Applications of this phase include prediction of water consumption, smart detection and unattended production.
Figure 5 shows the architecture implemented using edge and fog nodes. When automated subsystems are already installed it is necessary to interleave embedded devices (edge nodes) between controllers and sensors/actuators. These devices maintain the initial services and allow a supervised learning process to be initiated. New algorithms are tested and approved on the edge and fog nodes. Figure 6 shows the different services proposed on each node.
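As an example of the data-treatment phase, an edge node can be as simple as a process that smooths raw readings and flags events before anything is forwarded to the fog node. The sketch below (the smoothing factor and alarm thresholds are assumptions, not values from the paper) shows an exponential moving-average filter with a basic event detector:

```python
class EdgeFilter:
    """Smooths raw sensor readings and detects threshold events on an edge node."""

    def __init__(self, alpha=0.2, low=30.0, high=80.0):
        self.alpha = alpha               # smoothing factor of the exponential moving average
        self.low, self.high = low, high  # alarm thresholds (assumed, e.g. % soil humidity)
        self.ema = None

    def update(self, raw_value):
        # Exponential moving average: filters out sensor noise locally
        if self.ema is None:
            self.ema = raw_value
        else:
            self.ema = self.alpha * raw_value + (1 - self.alpha) * self.ema
        return self.ema

    def event(self):
        # Only out-of-range events need to be pushed upstream to the fog node
        if self.ema is None:
            return None
        if self.ema < self.low:
            return "soil-humidity-low"
        if self.ema > self.high:
            return "soil-humidity-high"
        return None

f = EdgeFilter()
for reading in [35.2, 34.8, 29.9, 28.5, 27.0]:
    smoothed = f.update(reading)
    if f.event():
        print(f"publish event: {f.event()} (smoothed={smoothed:.1f})")
```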
Test and Feedback Development
In machine learning systems the output is not fixed: it changes over time as the solution learns more and as the model on which the machine learning system is built evolves as it is fed more data. This forces the testing professional to think differently and adopt test strategies that are very different from traditional testing techniques. To test machine learning systems, the following are essential:
1. Obtaining data sets: a data set with the main variables captured and stored to analyse and design the model. In the irrigation process the data set contains the irrigation programming used (time and flow), ambient conditions (humidity, temperature) and soil conditions (humidity, temperature, pH and conductivity) captured by sensors. All these data are monitored and stored.
2. Training data sets: a data set used for training the model; a subset of the previous data set. In the irrigation process the training data set contains the irrigation programming automated by the model (time and flow decision) considering ambient conditions (soil, ambient and crop). These data are usually prepared by collecting data in a semi-automated way. The results of this process are validated with agronomists.
3. Testing data sets: a data set used to measure the model quality.
4. Validation test suites on real scenarios: taking the irrigation example, test scenarios include categorizing the water needs of a kind of crop considering climatic conditions and its growth phase. Automated irrigation decisions by the model are analysed in this phase.
5. Building validation suites: it is necessary to understand the algorithm. The model's algorithm analyses the data provided, looks for specific patterns, and uses the results of this analysis to develop optimal parameters for creating the model. The model is refined as the number of iterations and the richness of the data increase.
6. Communicating test results in statistical terms: models based on machine learning algorithms produce approximations, not exact results, and the quality of the results must be analysed in the same context. The testing community will need to determine the level of confidence within a certain range.
7. Model evolution: support for developing new AI services or modifying the algorithms implemented. Supervised and automatic changes are the processes that maintain the operating models.
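As a concrete illustration of steps 1-3, the captured irrigation records can be split into training and testing sets before any model is fitted. The sketch below assumes the monitored variables have already been exported to a CSV file; the file name and column names are illustrative, not taken from the deployment:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Assumed export of the monitored variables (step 1: obtaining data sets)
data = pd.read_csv("irrigation_log.csv")  # illustrative file name

features = ["air_temp", "air_humidity", "soil_temp", "soil_humidity", "soil_ph", "soil_ec"]
target = "irrigation_minutes"  # duration decided by the agronomist

# Steps 2-3: hold out a test set to measure model quality on unseen records
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data[target], test_size=0.2, random_state=42
)
print(len(X_train), "training rows,", len(X_test), "test rows")
```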
Comparison with Industrial Facilities: Novel Elements Proposed
Currently, industrial facilities that use PA technologies are based on the integration of Internet and web services with automation and control using industrial technology. Proprietary systems are designed for monitoring large production plants. The related work analysed shows that agricultural control systems for production, irrigation or climate propose different monitoring and control technologies based on wireless sensor networks and industrial control. Monitoring systems analyse the crop environment and improve decision making by analysing statistics and applying reactive algorithms. This work proposes two main novel elements: the integration of the edge and fog layers to optimize the architecture levels, and the involvement of the farmer in the design of new improvements using the data analysis obtained with the newly developed architecture.
Experimental Work
Different agricultural facilities have been analysed to introduce the proposed method. Three kinds of installation summarize the different types:
• Automated installations whose subsystems are not interoperable
• Partial automation without any interconnection and with non-interoperable systems
• Manual control
In none of them are AI-based services installed yet. In this context, the proposed method, using edge computing on basic controllers and fog computing on gateway nodes, can provide common services for the three types of facilities cited. With this configuration, subsystem interoperability and AI support are achieved. Control signals of the controllers already installed in automated installations become inputs to edge nodes, and a fog node acts as an interface for all facility nodes. In all greenhouses, irrigation and internal environment control are basic processes. Agronomist users know how to program reactive controls and how to configure automated devices. Optimization of resources (water, energy) is a potential service that agronomists currently perform through their experience; this knowledge can be transferred to intelligent systems that integrate it through techniques based on AI paradigms. Interconnection of subsystems is also one of the proposed improvements. A deployment for an automated installation is designed and implemented. This case shows how to proceed when installations are already automated, and also serves as a guide for other types of greenhouse installation.
Analysis
The farmer, together with the information technology technician, proposes a set of improvements:
• Monitoring and control interfaces on the Internet (control and communication services)
• Event and change communication service (communication services)
• Interconnection of the irrigation and air conditioning subsystems (interoperability services)
• Integration of automation to optimize water consumption (AI services)
The work carried out designs an installation that deploys the necessary hardware and software resources using the proposed method. Figure 7 shows the agricultural subsystems and the model deployed on distributed nodes. Intelligent irrigation control is installed in an experimental greenhouse based on hydroponic tomato cultivation (Table 4). Following the proposed method, the experimental phases are:
1. Things, communication and context design: objects (things) and their context are detected and related using IoT protocols and services (user-centred, architecture and IoT protocol design)
2. Hardware devices and software modules: edge and fog nodes with the services that will be implemented are proposed (integration and AI services)
3. Installing: how the model is tested and deployed (testing process)
Table 4. Crop growth process at the experimental station. Tomato plant growth stages.
Actual irrigation (total) = 5 L/m² (initial phase)
Average temperature = 20 °C
Energy used = 60 Wh/m²/day
Average water pH = 6.5
Solar irradiance = 4 kWh/m²/day (NASA HOMER web)
Average water EC = 2850 µS/cm
Example of a new service designed by the user: decision tree to reduce water consumption
Solutions and technological results:
• A reference model based on distributed IoT paradigms is developed
• New PA processes are automated
• Graphical interfaces provide simple and universal access
• Tools, facilities and resources adapted for the agronomist
• GUI interfaces used on the Internet
• New ways of data access and low-cost deployment
• Users design a decision tree to optimize water consumption
Design: Things, Communication and Context
The irrigation process, soil parameters, environmental conditions inside and outside the greenhouse and energy consumption define the objects and context. The sensors, actuators and processes and their relationships (context) are shown in Table 5. The context vector is (ID, time, date, GH1, relations, state) for each object/thing, where GH1 is the location ID of the greenhouse.
All objects are interoperable using the MQTT protocol. The publish-subscribe communication model implemented by this protocol allows all devices and things to be interconnected. The broker is installed on the fog node; publishers and subscribers are implemented on the different nodes.
Hardware Devices and Software Modules
Two edge nodes and one fog node are proposed to control the climate and irrigation processes. Objects (things) and processes are deployed on all nodes. Irrigation and climate control are installed on the edge nodes; AI services are implemented on the fog node. A process-control architecture is used in the first node type (edge), and a data-centred architecture is used in the second node type (fog) and in the cloud services implemented. In the edge node the flow of data comes from a set of variables (things and internal variables) which control the execution of the processes. Agronomists and expert users design the basic control algorithms. After the learning and training process, these algorithms will be adjusted and modified following the results of the expert system (machine learning). The aim is to optimize resources (water, energy) without losing productivity. Two main control processes are executed on the two edge nodes and one machine learning process is implemented on the fog node. Table 5 shows these processes and their relationships. Algorithms are implemented in Python and developed following open source criteria. Minimal hardware requirements for the embedded devices are shown in Table 6.
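To make the edge node's process-control architecture concrete, the sketch below combines a user-programmed schedule with a simple expert rule of the kind described (skip irrigation if the soil is already wet). All thresholds, topics, addresses and timings are illustrative assumptions, not values from the deployment:

```python
import time
import paho.mqtt.client as mqtt

SOIL_WET_THRESHOLD = 45.0  # % humidity above which irrigation is skipped (assumed)
latest_soil_humidity = None

def on_message(client, userdata, msg):
    global latest_soil_humidity
    latest_soil_humidity = float(msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10", 1883)  # fog-node broker (illustrative address)
client.subscribe("GH1/soil-sensor/s01/humidity/status")
client.loop_start()  # handle network traffic in a background thread

def scheduled_irrigation(duration_s):
    """Expert rule: honour the time schedule unless the soil is already wet."""
    if latest_soil_humidity is not None and latest_soil_humidity > SOIL_WET_THRESHOLD:
        client.publish("GH1/edge-node/e01/irrigation/event", "skipped: soil wet")
        return
    client.publish("GH1/actuator/valve-01/command/status", "open")
    time.sleep(duration_s)
    client.publish("GH1/actuator/valve-01/command/status", "close")

scheduled_irrigation(duration_s=300)  # one 5-minute watering slot from the schedule
```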
Installing and Testing
In facilities that are already automated, edge nodes are interleaved between the installed controllers and the actual sensors and actuators. Some new sensors are installed to complete the services designed (e.g. an energy meter). In current facilities the deployed edge nodes allow the user to:
• Work in the same way as before (initial learning stage, analysis and model selection)
• Change to a new control using new expert and automatic rules based on AI processes (supervised stage and training)
• Test and reconfigure expert rules (testing and maintenance)
Figure 8 shows an edge node for irrigation control and how it is deployed in the experimental greenhouse built in this work, which had no previous installation. The preferences of agronomists and farmers drive the design: interfaces, maintenance and optimization of control processes. In the irrigation process a time schedule with the selected flow rate is programmed by the user according to the period of crop growth. In the learning stage, the edge node captures the data and sends it to the fog node. The fog node processes daily water use, ambient and soil conditions, and crop type and growth; using these data the crop type is classified. Crop production results are added as data to be analysed together with the stored data. Production, water consumption, crop growth stage, time, date, soil parameters, current weather, forecast weather and ambient greenhouse conditions are captured as inputs to the machine learning platform. In future crop productions the irrigation schedule can be automated, first with human supervision and then autonomously. Biophysical variables (plant, soil, canal flow and weather conditions) measured during the growing seasons are used as inputs to build the models. Information about crop phenology (growth stages), soil moisture and weather variables is compiled. The analysis of irrigation decisions is important because it can help in the estimation of short-term irrigation demands. If the automated process decisions are known, canal operators can better manage water deliveries and avoid unexpected delays and operational conditions that increase canal losses. Information about these demands can also be helpful for the evaluation of expected future agricultural supplies. It can never be possible to know the exact reasons why a farmer decided to irrigate; all farmers are different and prefer their own decision processes. Data analysis with the farmer in his or her own installation infers automated farmer actions. These data are used to build the models, and the learnt frameworks will be used to predict irrigation decisions. The specific objectives planned with the farmer are:
1. Identify the main variables contributing to irrigation behaviour by training the models with relevant data
2. Group the irrigation decisions into distinct classes
3. Identify the decisions taken
4. Detect the patterns in farmer decisions
5. Infer future irrigation decisions using the information and modelling tools, and design decision tree algorithms to reduce water consumption
The fog computing node is shown in Figure 9. This paradigm extends cloud computing to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the fog are: (a) low latency and location awareness; (b) widespread geographical distribution; (c) mobility; (d) a very large number of nodes; (e) the predominant role of wireless access; (f) a strong presence of streaming and real-time applications; and (g) heterogeneity [44].
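As a minimal illustration of objective 5 above, such a decision tree can be prototyped on the fog node with scikit-learn. The sketch below trains a shallow tree on the kind of soil variables discussed in this section; the tiny data set, feature names and binary irrigate/skip labels are assumptions for illustration only:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Assumed columns: soil moisture (%), soil temperature (C), conductivity (uS/cm)
X = [
    [28.0, 19.5, 2700],   # dry, cool  -> irrigate
    [31.5, 21.0, 2850],   # dry, warm  -> irrigate
    [47.0, 20.0, 2800],   # wet        -> skip
    [52.3, 22.5, 2900],   # wet, warm  -> skip
]
y = [1, 1, 0, 0]          # 1 = irrigate, 0 = skip (labels from past farmer decisions)

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# The learnt rules can be reviewed with the agronomist before deployment
print(export_text(tree, feature_names=["soil_moisture", "soil_temp", "soil_ec"]))
print("decision for [30.0, 20.0, 2750]:", tree.predict([[30.0, 20.0, 2750]])[0])
```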
In this paper, the fog node is used to carry out the machine learning processes, store data and communicate with the cloud services (monitoring interfaces). Figures 10 and 11 show the different variables (soil moisture, soil temperature and conductivity means) that the farmer decided to use in irrigation control to design new rules to optimize production. Previously, irrigation was controlled only by a time schedule; now, sensor data are taken into account to decide whether watering and growth control can be optimized with decision trees. The cloud services designed provide monitoring data accessed through a Human Machine Interface (HMI). IoT platforms accept data pushed from any Internet-enabled device and allow users to get started quickly. Platforms with similar services show the state of commercial IoT technology: Azure [45], Ubidots [46] and Thingspeak [47] are some examples of companies that provide IoT services. These platforms are built with similar architectures and usually provide the same resources: Application Programming Interface (API) communication between clients and the IoT server.
All these platforms provide dashboard designs to monitor data using pre-built HMI formats. Using the API services, processes in the fog node are implemented to send new data to each dashboard. The API documentation specifies the structure of the data exchanged between the devices and the Ubidots and Mobile-Alerts clouds, along with code examples and libraries to speed up the project. Figure 12 shows a dashboard (HMI interface) designed by users on the Ubidots cloud platform. The system can use standard protocols in the different layers and platforms that implement these protocols.
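As an example of this fog-to-cloud path, a fog-node process can forward a set of variables to a dashboard platform over its REST API. The endpoint URL and token below are placeholders (each platform documents its own URL scheme and authentication), so this shows only the general shape of the call, not a verbatim platform API:

```python
import requests

API_URL = "https://example-iot-platform.com/api/v1/devices/gh1-fog"  # placeholder endpoint
API_TOKEN = "YOUR-TOKEN-HERE"                                        # placeholder credential

payload = {"soil_moisture": 38.5, "soil_temperature": 20.1, "soil_ec": 2850}

# Typical pattern: one authenticated JSON POST per dashboard update
response = requests.post(
    API_URL,
    json=payload,
    headers={"X-Auth-Token": API_TOKEN},
    timeout=10,
)
response.raise_for_status()
print("dashboard updated:", response.status_code)
```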
Conclusions and Future Work
In this work the state of the art of PA and IoT technologies in agricultural scenarios has been analysed. PA presents difficulties for farmers to implement: cultural perception, lack of local technical expertise, infrastructure constraints, knowledge and technical gaps, and high start-up costs. Farmers must be involved in the design and integration of these technologies in their facilities, and methods are needed to facilitate such integration. This work proposes a new method to integrate the farmer into the development of new solutions using low-cost sensing technologies and innovative communication paradigms. An architecture based on two new levels of communication and processing nodes (edge and fog nodes) forms the technological core of the proposed method. Each level performs a set of interconnected functionalities. The proposed infrastructure can be installed either in already automated installations or in the design of new facilities. In already automated installations, the method introduces new possibilities for the development of intelligent and interconnected control. An experimental work has been carried out in a greenhouse, in which communication nodes were installed and a new service based on the decision tree paradigm was designed by an expert user. Facilities that use the proposed model make the climate control and irrigation subsystems interoperable and allow the farmer to design new integrated control rules. The new distributed communication model allows the farmer to analyse changes and improvements. This experimental work initiates a new working methodology for the farmer, who can use these new technologies more easily. Future control rules and services using a machine learning platform and AI paradigms will allow the results to be optimized and improved.
Conflicts of Interest:
The authors declare no conflict of interest. | 2018-06-07T13:50:02.280Z | 2018-05-28T00:00:00.000 | {
"year": 2018,
"sha1": "99fd3bc5077f1257679bcbed8b9286e7bc7781df",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/18/6/1731/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "99fd3bc5077f1257679bcbed8b9286e7bc7781df",
"s2fieldsofstudy": [
"Computer Science",
"Agricultural and Food Sciences",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
231595368 | pes2o/s2orc | v3-fos-license | A Self-Help App for Syrian Refugees With Posttraumatic Stress (Sanadak): Randomized Controlled Trial
Background Syrian refugees residing in Germany often develop posttraumatic stress as a result of the Syrian civil war, their escape, and postmigration stressors. At the same time, there is a lack of adequate treatment options. The smartphone-based app Sanadak was developed to provide cognitive behavioral therapy–based self-help in the Arabic language for Syrian refugees with posttraumatic stress. Objective The aim of this study was to evaluate the effectiveness and cost-effectiveness of the app. Methods In a randomized controlled trial, eligible individuals were randomly allocated to the intervention group (IG; app use) or control group (CG; psychoeducational reading material). Data were collected during structured face-to-face interviews at 3 assessments (preintervention/baseline, postintervention/after 4 weeks, follow-up/after 4 months). Using adjusted mixed-effects linear regression models, changes in posttraumatic stress and secondary outcomes were investigated as intention-to-treat (ITT) and per-protocol (PP) analysis. Cost-effectiveness was evaluated based on adjusted mean total costs, quality-adjusted life years (QALYs), and cost-effectiveness acceptability curves using the net benefit approach. Results Of 170 screened individuals (aged 18 to 65 years), 133 were eligible and randomized to the IG (n=65) and CG (n=68). Although there was a pre-post reduction in posttraumatic stress, ITT showed no significant differences between the IG and CG after 4 weeks (Posttraumatic Diagnostic Scale for DSM-5, Diff –0.90, 95% CI –0.24 to 0.47; P=.52) and after 4 months (Diff –0.39, 95% CI –3.24 to 2.46; P=.79). The same was true for PP. Regarding secondary outcomes, ITT indicated a treatment effect for self-stigma: after 4 weeks (Self-Stigma of Mental Illness Scale/SSMIS–stereotype agreement: d=0.86, 95% CI 0.46 to 1.25; stereotype application: d=0.60, 95% CI 0.22 to 0.99) and after 4 months (d=0.52, 95% CI 0.12 to 0.92; d=0.50, 95% CI 0.10 to 0.90), the IG showed significantly lower values in self-stigma than the CG. ITT showed no significant group differences in total costs and QALYs. The probability of cost-effectiveness was 81% for a willingness-to-pay of €0 per additional QALY but decreased with increasing willingness-to-pay. Conclusions Sanadak was not more effective in reducing mild to moderate posttraumatic stress in Syrian refugees than the control condition nor was it likely to be cost-effective. Therefore, Sanadak is not suitable as a standalone treatment. However, as the app usability was very good, no harms detected, and stigma significantly reduced, Sanadak has potential as a bridging aid within a stepped and collaborative care approach. Trial Registration German Clinical Trials Register DRKS00013782; https://www.drks.de/drks_web/navigate.do?navigationId=trial.HTML&TRIAL_ID=DRKS00013782 International Registered Report Identifier (IRRID) RR2-10.1186/s12888-019-2110-y
CONSORT-EHEALTH (V 1.6.1) - Submission/Publication Form. The CONSORT-EHEALTH checklist is intended for authors of randomized trials evaluating web-based and Internet-based applications/interventions, including mobile interventions, electronic games (incl. multiplayer games), social media, certain telehealth applications, and other interactive and/or networked electronic applications. Some of the items (e.g. all subitems under item 5 - description of the intervention) may also be applicable for other study designs.
The goal of the CONSORT-EHEALTH checklist and guideline is to be (a) a guide for reporting for authors of RCTs and (b) a basis for appraisal of an ehealth trial (in terms of validity). CONSORT-EHEALTH items/subitems are MANDATORY reporting items for studies published in the Journal of Medical Internet Research and other journals / scientific societies endorsing the checklist.
As the CONSORT-EHEALTH checklist is still considered in a formative stage, we would ask that you also RATE ON A SCALE OF 1-5 how important/useful you feel each item is FOR THE PURPOSE OF THE CHECKLIST and reporting guideline (optional).
Mandatory reporting items are marked with a red *. In the textboxes, either copy & paste the relevant sections from your manuscript into this form (please include any quotes from your manuscript in QUOTATION MARKS), or answer directly by providing additional information not in the manuscript, or elaborate on why the item was not relevant for this study.
Overall, was the app/intervention effective? *
yes: all primary outcomes were significantly better in intervention group vs control
partly: SOME primary outcomes were significantly better in intervention group vs control
no statistically significant difference between control and intervention
potentially harmful: control was significantly better than intervention in one or more outcomes
inconclusive: more research is needed
Other:
Approx. Percentage of Users (starters) still using the app as recommended after 3 months *
1a-i) Identify the mode of delivery in the title
Identify the mode of delivery. Preferably use "web-based" and/or "mobile" and/or "electronic game" in the title. Avoid ambiguous terms like "online", "virtual", "interactive". Use "Internet-based" only if the intervention includes non-web-based Internet components (e.g. email), use "computer-based" or "electronic" only if offline products are used. Use "virtual" only in the context of "virtual reality" (3-D worlds). Use "online" only in the context of "online support groups". Complement or substitute product names with broader terms for the class of products (such as "mobile" or "smart phone" instead of "iphone"), especially if the application runs on different platforms.
Does your paper address subitem 1a-i? * Copy and paste relevant sections from manuscript title (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "A Self-help App for Syrian Refugees with Posttraumatic Stress: Results of the "Sanadak" Randomized Controlled Trial"
1a-ii) Non-web-based components or important co-interventions in title
Mention non-web-based components or important co-interventions in title, if any (e.g., "with telephone support").
Does your paper address subitem 1a-ii?
Copy and paste relevant sections from manuscript title (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
Does your paper address subitem 1b-i? * Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "In a randomized controlled trial, eligible individuals were randomly allocated to the intervention group (IG; app usage) or control group (CG; psychoeducational reading material). Data were collected during structured face-to-face interviews at three assessments."
1b-ii) Level of human involvement in the METHODS section of the ABSTRACT
Clarify the level of human involvement in the abstract, e.g., use phrases like "fully automated" vs. "therapist/nurse/care provider/physician-assisted" (mention number and expertise of providers involved, if any). (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it.)
Does your paper address subitem 1b-ii?
Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "The smartphone-based app "Sanadak" was developed to provide cognitive behavioral therapy-based self-help in Arabic language for Syrian refugees with posttraumatic stress."
Does your paper address subitem 1b-iii?
Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Data were collected during structured face-to-face interviews at three assessments (pre-intervention/baseline, post-intervention/after 4 weeks, follow-up/after 4 months)."
1b-iv) RESULTS section in abstract must contain use data
Report number of participants enrolled/assessed in each group, the use/uptake of the intervention (e.g., attrition/adherence metrics, use over time, number of logins etc.), in addition to primary/secondary outcomes. (Note: Only report in the abstract what the main paper is reporting. If this information is missing from the main body of text, consider adding it.)
Does your paper address subitem 1b-v?
Copy and paste relevant sections from the manuscript abstract (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Sanadak" was not more effective in reducing mild to moderate posttraumatic stress in Syrian refugees than the control condition, nor was it likely to be cost-effective. Therefore, "Sanadak" is not suitable as a standalone treatment."
2a-i) Describe the problem and the type of system/solution that is the object of the study
Describe the problem and the type of system/solution that is the object of the study: intended as a stand-alone intervention vs. incorporated in a broader health care program? Intended for a particular patient population? Goals of the intervention, e.g., being more cost-effective than other interventions, replacing or complementing other solutions? (Note: Details about the intervention are provided in "Methods" under 5.)
Does your paper address subitem 2a-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Studies have shown that Syrian refugees were typically exposed to potentially traumatizing events, increasing vulnerability to posttraumatic stress and comorbid mental health outcomes [2]. The most frequently reported disorders associated with war and escape are posttraumatic stress disorder (PTSD) and major depression, often accompanied by somatization [3]."
2a-ii) Scientific background, rationale: What is known about the (type of) system
Scientific background, rationale: What is known about the (type of) system that is the object of the study (be sure to discuss the use of similar systems for other conditions/diagnoses, if appropriate), motivation for the study, i.e. what are the reasons for and what is the context for this specific study, from which stakeholder viewpoint is the study performed, potential impact of findings [2]. Briefly justify the choice of the comparator.
Does your paper address CONSORT subitem 3a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "After screening for eligibility, study participants were randomly allocated (1:1) to the intervention group (IG/app) and control group (CG/psychoeducational brochure)... three face-to-face interviews were scheduled with the study participants: baseline (T0/pre), immediately after the intervention (T1/post, 4 weeks after baseline), and 4 months after baseline (T2/follow-up)."
Does your paper address CONSORT subitem 3b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
3b-i) Bug fixes, Downtimes, Content Changes
Bug fixes, Downtimes, Content Changes: ehealth systems are often dynamic systems. A description of changes to methods therefore also includes important changes made to the intervention or comparator during the trial (e.g., major bug fixes or changes in the functionality or content) (5-iii) and other "unexpected events" that may have influenced study design, such as staff changes, system failures/downtimes, etc. [2].
Does your paper address subitem 3b-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
Does your paper address CONSORT subitem 4a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Inclusion criteria comprised: Syrian refugee residing in Germany, aged 18-65 years, experience of at least one traumatic event [...]."
4a-i) Computer / Internet literacy
Computer / Internet literacy is often an implicit "de facto" eligibility criterion; this should be explicitly clarified.
Does your paper address subitem 4a-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
4a-ii) Open vs. closed, web-based vs. face-to-face assessments
Open vs. closed, web-based vs. face-to-face assessments: Mention how participants were recruited (online vs. offline), e.g., from an open access website or from a clinic, and clarify if this was a purely web-based trial, or whether there were face-to-face components (as part of the intervention or for assessment), i.e., to what degree did the study team get to know the participant. In online-only trials, clarify if participants were quasi-anonymous and whether having multiple identities was possible or whether technical or logistical measures (e.g., cookies, email confirmation, phone calls) were used to detect/prevent these.
Does your paper address subitem 4a-ii? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "By using a multi-strategic approach to recruit Syrian refugees residing in the urban area of Leipzig, Halle/Saale and Dresden in Germany, potential study participants were attracted [11]."
4a-iii) Information giving during recruitment
Information given during recruitment. Specify how participants were briefed for recruitment and in the informed consent procedures (e.g., publish the informed consent documentation as appendix, see also item X26), as this information may have an effect on user self-selection, user expectation and may also bias results.
Does your paper address subitem 4a-iii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Participation was only allowed after written informed consent."
Does your paper address CONSORT subitem 4b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "By using a multi-strategic approach to recruit Syrian refugees residing in the urban area of Leipzig, Halle/Saale and Dresden in Germany..."
4b-i) Report if outcomes were (self-)assessed through online questionnaires
Clearly report if outcomes were (self-)assessed through online questionnaires (as common in web-based trials) or otherwise.
Does your paper address subitem 4b-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
Does your paper address subitem 4b-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
5-i) Mention names, credential, affiliations of the developers, sponsors, and owners
Mention names, credentials, and affiliations of the developers, sponsors, and owners [6] (if authors/evaluators are owners or developers of the software, this needs to be declared in a "Conflict of interest" section or mentioned elsewhere in the manuscript).
Does your paper address subitem 5-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "The app was developed by frühlingsproduktionen, a creator of e-health interventions based in Berlin, Germany, on behalf of the study PI."
5-ii) Describe the history/development process
Describe the history/development process of the application and previous formative evaluations (e.g., focus groups, usability testing), as these will have an impact on adoption/use rates and help with interpreting results.
Does your paper address subitem 5-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "During the development of the app, typical themes and needs of refugees as well as cultural specifics were incorporated. Therefore, focus groups were carried out to assess relevant aspects."
5-iii) Revisions and updating
Revisions and updating. Clearly mention the date and/or version number of the application/intervention (and comparator, if applicable) evaluated, or describe whether the intervention underwent major changes during the evaluation process, or whether the development and/or content was "frozen" during the trial.
Describe dynamic components such as news feeds or changing content which may have an impact on the replicability of the intervention (for unexpected events see item 3b).
Does your paper address subitem 5-iii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Focus groups were carried out to assess relevant aspects (e.g. concepts of disease and disease management)."
5-iv) Quality assurance methods
Provide information on quality assurance methods to ensure accuracy and quality of information provided [1], if applicable.
Does your paper address subitem 5-iv?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "SDV was performed by commissioned external statisticians."
5-v) Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used
Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used. Replicability (i.e., other researchers should in principle be able to replicate the study) is a hallmark of scientific reporting.
Does your paper address subitem 5-v?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable. The app will be made available for free upon project completion.
5-vi) Digital preservation
Digital preservation: Provide the URL of the application, but as the intervention is likely to change or disappear over the course of the years; also make sure the intervention is archived (Internet Archive, webcitation.org, and/or publishing the source code or screenshots/videos alongside the article). As pages behind login screens cannot be archived, consider creating demo pages which are accessible without login.
Does your paper address subitem 5-vi?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
5-vii) Access
Access: Describe how participants accessed the application, in what setting/context, if they had to pay (or were paid) or not, and whether they had to be a member of a specific group. If known, describe how participants obtained "access to the platform and Internet" [1]. To ensure access for editors/reviewers/readers, consider providing a "backdoor" login account or demo mode for reviewers/readers to explore the application (also important for archiving purposes, see vi).
Does your paper address subitem 5-vii? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Users did not have to pay. They received access to the app via a unique de-identified login code.
Does your paper address subitem 5-viii? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study This paper reports on the trial results. A detailed description of aspects under 5-viii is given in the study protocol, which is referenced in the trial paper.
5-ix) Describe use parameters
Describe use parameters (e.g., intended "doses" and optimal timing for use). Clarify what instructions or recommendations were given to the user, e.g., regarding timing, frequency, heaviness of use, if any, or was the intervention used ad libitum.
Does your paper address subitem 5-ix?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Participants in the IG had the possibility to use the self-help app via person-specific log-in data for four weeks on demand."
5-x) Clarify the level of human involvement
Clarify the level of human involvement (care providers or health professionals, also technical assistance) in the e-intervention or as co-intervention (detail number and expertise of professionals involved, if any, as well as "type of assistance offered, the timing and frequency of the support, how it is initiated, and the medium by which the assistance is delivered"). It may be necessary to distinguish between the level of human involvement required for the trial, and the level of human involvement required for a routine application outside of a RCT setting (discuss under item 21 - generalizability).
Does your paper address subitem 5-x?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study None.
Does your paper address subitem 5-xi? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
5-xii) Describe any co-interventions (incl. training/support)
Describe any co-interventions (incl. training/support): Clearly state any interventions that are provided in addition to the targeted eHealth intervention, as ehealth interventions may not be designed as stand-alone interventions. This includes training sessions and support [1]. It may be necessary to distinguish between the level of training required for the trial, and the level of training for a routine application outside of a RCT setting (discuss under item 21 - generalizability).
Does your paper address subitem 5-xii? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
6a-ii) Describe whether and how "use" (including intensity of use/dosage) was defined/measured/monitored
Describe whether and how "use" (including intensity of use/dosage) was defined/measured/monitored (logins, logfile analysis, etc.). Use/adoption metrics are important process outcomes that should be reported in any ehealth trial.
Does your paper address subitem 6a-ii?
Copy and paste relevant sections from manuscript text Like this "Furthermore, de-identified metadata of the app usage stored in the app's log files were collected."
6a-iii) Describe whether, how, and when qualitative feedback from participants was obtained
Describe whether, how, and when qualitative feedback from participants was obtained (e.g., through emails, feedback forms, interviews, focus groups).
Does your paper address subitem 6a-iii?
6b) Any changes to trial outcomes after the trial commenced, with reasons
Does your paper address CONSORT subitem 6b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
7a-i) Describe whether and how expected attrition was taken into account when calculating the sample size
Describe whether and how expected attrition was taken into account when calculating the sample size.
Does your paper address subitem 7a-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "a dropout rate of approximately 20% may be expected, suggesting a sufficient baseline sample size of n = 128 participants."
7b) When applicable, explanation of any interim analyses and stopping guidelines
Does your paper address CONSORT subitem 7b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable. Participants were able to terminate the study at any time without negative consequences. No severe harms were expected.
8a) Method used to generate the random allocation sequence. NPT: When applicable, how care providers were allocated to each trial group
8b) Type of randomisation; details of any restriction (such as blocking and block size)
9) Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned
Does your paper address CONSORT subitem 8a? *
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "1:1 ratio utilizing randomized permuted blocks of six, stratified by age and sex; randomization block lists with a respective computer program ("blockrand" package written for R)."
Does your paper address CONSORT subitem 8b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "An external, independent statistician generated the randomization block lists with a respective computer program ("blockrand"-package written for R)."
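For illustration, the allocation scheme described above (permuted blocks of six, stratified by age and sex) can be sketched in a few lines of Python; the trial itself used the "blockrand" package in R, and the stratum labels, block counts, and seeds below are hypothetical:

```python
import random

def permuted_block_list(n_blocks, block_size=6, arms=("IG", "CG"), seed=None):
    """Build an allocation list from randomized permuted blocks.

    Each block holds block_size assignments balanced across arms
    (here 3 x IG and 3 x CG per block of six), shuffled independently,
    so the IG:CG ratio is exactly 1:1 after every completed block.
    """
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocations = []
    for _ in range(n_blocks):
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)
        allocations.extend(block)
    return allocations

# One independent allocation list per stratum (stratification by age and sex).
strata = ["male_18-35", "male_36-65", "female_18-35", "female_36-65"]
lists = {s: permuted_block_list(n_blocks=10, seed=i) for i, s in enumerate(strata)}
print(lists["male_18-35"][:6])  # first permuted block for this stratum
```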
Does your paper address CONSORT subitem 9? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "The study coordinator (SR), responsible for individual group allocation, remained blind to the randomization lists' strata identity."
10) Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions
Does your paper address CONSORT subitem 10? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "An external, independent statistician generated the randomization block lists with a respective computer program"
11a) If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how. NPT: Whether or not those administering co-interventions were blinded to group assignment
11a-i) Specify who was blinded, and who wasn't
Specify who was blinded, and who wasn't. Usually, in web-based trials it is not possible to blind the participants [1, 3] (this should be clearly acknowledged), but it may be possible to blind outcome assessors, those doing data analysis or those administering co-interventions (if any).
11b) If relevant, description of the similarity of interventions
(This item is usually not relevant for ehealth trials as it refers to the similarity of a placebo or sham intervention to an active medication/intervention.)
Does your paper address subitem 11a-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this: "Moreover, the data analyst [...] was blind to group assignment."
11a-ii) Discuss e.g., whether participants knew which intervention was the "intervention of interest" and which one was the "comparator"
Informed consent procedures (4a-ii) can create biases and certain expectations; discuss e.g., whether participants knew which intervention was the "intervention of interest" and which one was the "comparator".
Does your paper address subitem 11a-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
Does your paper address CONSORT subitem 11b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study It is mentioned that the psychoeducational information provided for the IG and CG is identical.
Does your paper address CONSORT subitem 12a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Primary analysis of trial data was intention-to-treat/ITT ... In order to evaluate treatment effect, multilevel mixed-effects linear regression models were used."
12a-i) Imputation techniques to deal with attrition / missing values
Imputation techniques to deal with attrition / missing values: Not all participants will use the intervention/comparator as intended and attrition is typically high in ehealth trials. Specify how participants who did not use the application or dropped out from the trial were treated in the statistical analysis (a complete case analysis is strongly discouraged, and simple imputation techniques such as LOCF may also be problematic [4]).
Does your paper address subitem 12a-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "multiple-imputed missing baseline data using the algorithm of chained equations implemented in Stata with all sociodemographic variables and baseline assessments of outcome variables as predictors." Your response is too large. Try shortening some answers. Does your paper address CONSORT subitem 12b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "For sensitivity analysis, we performed a per-protocol/PP analysis by excluding participants who had not used the intervention, as indicated by deidenti ed login data."
Does your paper address subitem X26-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "The trial was approved by the Ethics committee of the Medical Faculty of the University of Leipzig, Germany (ID: 111-17-ek)."
Does your paper address subitem X26-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Participation was only allowed after written informed consent."
X26-iii) Safety and security procedures
Safety and security procedures, incl. privacy considerations, and any steps taken to reduce the likelihood of harm or to detect it (e.g., education and training, availability of a hotline)
Does your paper address subitem X26-iii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this: "In fact, specific arrangements regarding data protection (e.g. no real names for program login) were pre-specified in a data protection concept."
RESULTS
13a) For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome. NPT: The number of care providers or centers performing the intervention in each group and the number of patients treated by each care provider in each center
13b) For each group, losses and exclusions after randomisation, together with reasons
Does your paper address CONSORT subitem 13a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Table 1 shows baseline characteristics of the 133 participants."
Does your paper address CONSORT subitem 13b? (NOTE: Preferably, this is shown in a CONSORT flow diagram) * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Shown in the CONSORT flow diagram (Figure 1).
14a) Dates defining the periods of recruitment and follow-up
13b-i) Attrition diagram
Strongly recommended: An attrition diagram (e.g., proportion of participants still logging in or using the intervention/comparator in each group plotted over time, similar to a survival curve) or other figures or tables demonstrating usage/dose/engagement.
Does your paper address subitem 13b-i?
Copy and paste relevant sections from the manuscript or cite the figure number if applicable (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Shown in the CONSORT flow diagram (Figure 1).
Does your paper address CONSORT subitem 14a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this: "[...] baseline (T0/pre), immediately after the intervention (T1/post, 4 weeks after baseline), and 4 months after baseline (T2/follow-up)." "Participants were recruited between 10/2018 and 12/2019, follow-up was completed in 04/2020."
14a-i) Indicate if critical "secular events" fell into the study period
Indicate if critical "secular events" fell into the study period, e.g., significant changes in Internet resources available or "changes in computer hardware or Internet delivery resources".
Does your paper address subitem 14a-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
Does your paper address CONSORT subitem 14b? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable.
16) For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups
Does your paper address CONSORT subitem 15? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Provided in Table 1.
15-i) Report demographics associated with digital divide issues
In ehealth trials it is particularly important to report demographics associated with digital divide issues, such as age, education, gender, social-economic status, computer/Internet/ehealth literacy of the participants, if known.
Does your paper address subitem 15-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable. Almost all Syrian refugees residing in Germany own a smartphone as a key means to keep in touch with family and friends in Syria.
16-i) Report multiple "denominators" and provide definitions
Report multiple "denominators" and provide definitions: Report N's (and effect sizes) "across a range of study participation [and use] thresholds" [1], e.g., N exposed, N consented, N used more than x times, N used more than y weeks, N participants "used" the intervention/comparator at specific pre-defined time points of interest (in absolute and relative numbers per group). Always clearly define "use" of the intervention.
Does your paper address subitem 16-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Provided in Figure 1/CONSORT flow diagram, as well as in Tables 2-3.
16-ii) Primary analysis should be intent-to-treat
Primary analysis should be intent-to-treat, secondary analyses could include comparing only "users", with the appropriate caveats that this is no longer a randomized sample (see 18-i).
Does your paper address subitem 16-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Table 3 shows results of the analysis of the ITT sample for all outcomes."
17a) For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval)
17b) For binary outcomes, presentation of both absolute and relative effect sizes is recommended
Does your paper address CONSORT subitem 17a? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study See Tables 2-4.
17a-i) Presentation of process outcomes such as metrics of use and intensity of use
In addition to primary/secondary (clinical) outcomes, the presentation of process outcomes such as metrics of use and intensity of use (dose, exposure) and their operational definitions is critical. This does not only refer to metrics of attrition (13-b) (often a binary variable), but also to more continuous exposure metrics such as "average session length". These must be accompanied by a technical description of how a metric like a "session" is defined (e.g., timeout after idle time) [1] (report under item 6a).
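As a generic illustration of the "session via idle timeout" convention mentioned above (not code from the trial), app log timestamps can be grouped into sessions like this:

```python
from datetime import datetime, timedelta

def sessionize(timestamps, idle_timeout=timedelta(minutes=30)):
    """Group event timestamps into sessions: a new session starts whenever
    the gap to the previous event exceeds idle_timeout."""
    sessions, current = [], []
    for t in sorted(timestamps):
        if current and (t - current[-1]) > idle_timeout:
            sessions.append(current)
            current = []
        current.append(t)
    if current:
        sessions.append(current)
    return sessions

events = [datetime(2019, 1, 1, 9, 0), datetime(2019, 1, 1, 9, 10),
          datetime(2019, 1, 1, 14, 0)]  # hypothetical app log entries
sessions = sessionize(events)
lengths = [s[-1] - s[0] for s in sessions]  # per-session durations
print(len(sessions), lengths)  # 2 sessions; the single-event one has zero length
```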
Does your paper address subitem 17a-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
18-i) Subgroup analysis of comparing only users
A subgroup analysis of comparing only users is not uncommon in ehealth trials, but if done, it must be stressed that this is a self-selected sample and no longer an unbiased sample from a randomized trial (see 16-iii).
Does your paper address subitem 18-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this: "None of the subgroup analyses in regard to age groups, gender, education, app use frequency and posttraumatic stress symptom severity indicated any significant effect apart from reduced self-stigma."
Does your paper address CONSORT subitem 19? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "Across the intervention and follow-up period, one AE occurred with relation to the trial participation..."
19-i) Include privacy breaches, technical problems
Include privacy breaches, technical problems. This does not only include physical "harm" to participants, but also incidents such as perceived or real privacy breaches [1], technical problems, and other unexpected/unintended incidents. "Unintended effects" also includes unintended positive effects [2].
Does your paper address subitem 19-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Not applicable/not relevant for this study.
22-i) Restate study questions and summarize the answers suggested by the data, starting with primary outcomes and process outcomes (use)
Restate study questions and summarize the answers suggested by the data, starting with primary outcomes and process outcomes (use).
Does your paper address subitem 22-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study See the first paragraph of the discussion.
22-ii) Highlight unanswered new questions, suggest future research
Highlight unanswered new questions, suggest future research.
Does your paper address subitem 22-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study
20-i) Typical limitations in ehealth trials
Typical limitations in ehealth trials: Participants in ehealth trials are rarely blinded. Ehealth trials often look at a multiplicity of outcomes, increasing risk for a Type I error. Discuss biases due to non-use of the intervention/usability issues, biases through informed consent procedures, unexpected events.
Does your paper address subitem 20-i? * Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "There are also limitations to consider. First, the multi-strategic recruitment that heavily relied on snowball sampling techniques may limit generalizability [...]"
21-i) Generalizability to other populations
Generalizability to other populations: In particular, discuss generalizability to a general Internet population, outside of a RCT setting, and to a general patient population, including applicability of the study results for other organizations.
23) Registration number and name of trial registry
Does your paper address subitem 21-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "First, the multi-strategic recruitment that heavily relied on snowball sampling techniques may limit generalizability of our findings to other traumatized populations."
21-ii) Discuss if there were elements in the RCT that would be different in a routine application setting
Discuss if there were elements in the RCT that would be different in a routine application setting (e.g., prompts/reminders, more human involvement, training sessions or other co-interventions) and what impact the omission of these elements could have on use, adoption, or outcomes if the intervention is applied outside of a RCT setting.
Does your paper address subitem 21-ii?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study Like this "This requires to think about strategies on how to increase engagement with the app, potentially using push notification reminders (not implemented in "Sanadak" due to data protection measures),"
X27-i) Relation of the study team towards the system being evaluated
In addition to the usual declaration of interests (financial or otherwise), also state the relation of the study team towards the system being evaluated, i.e., state if the authors/evaluators are distinct from or identical with the developers/sponsors of the intervention.
Does your paper address subitem X27-i?
Copy and paste relevant sections from the manuscript (include quotes in quotation marks "like this" to indicate direct quotes from your manuscript), or elaborate on this item by providing additional information not in the ms, or briefly explain why the item is not applicable/relevant for your study None.
As a result of using this checklist, did you make changes in your manuscript? *
What were the most important changes you made as a result of using this checklist?
Add app version information, app developer, hypothesis.
"year": 2021,
"sha1": "2f6d0df22392fac388066cb45787744ef0ed5157",
"oa_license": "CCBY",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7935251",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f3a6f9425320c60ca389b05efc37d7a55c07fcc",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Some criteria for regular and Gorenstein local rings via syzygy modules
Let $R$ be a Cohen-Macaulay local ring. We prove that the $n$th syzygy module of a maximal Cohen-Macaulay $R$-module cannot have a semidualizing direct summand (e.g., $R$ itself) for any $n \geqslant 1$. In particular, it follows that $R$ is Gorenstein if and only if some syzygy of a canonical module of $R$ has a non-zero free direct summand. We also give a number of necessary and sufficient conditions for a Cohen-Macaulay local ring of minimal multiplicity to be regular or Gorenstein. These criteria are based on vanishing of certain Exts or Tors involving syzygy modules of the residue field.
Introduction
Let $(R, \mathfrak{m}, k)$ be a commutative Noetherian local ring, and let $M$ be a finitely generated $R$-module. For $n \geqslant 0$, let $\Omega^R_n(M)$ be the $n$th syzygy module in a minimal free resolution of $M$. We abbreviate Cohen-Macaulay (resp. maximal Cohen-Macaulay) to CM (resp. MCM). In his study of Hochster's Canonical Element Conjecture from the point of view of syzygies, Dutta established several important properties of syzygies. In [Gol84], Golod introduced the notion of a semidualizing module under the name of suitable module. An $R$-module $M$ is said to be semidualizing if the following hold:
(i) The natural homomorphism $R \to \operatorname{Hom}_R(M, M)$ is an isomorphism.
(ii) $\operatorname{Ext}^i_R(M, M) = 0$ for all $i \geqslant 1$.
For example, $R$ itself is a semidualizing $R$-module. If $R$ is a CM local ring with a canonical module $\omega$, then $\omega$ is a semidualizing $R$-module; see, e.g., [BH98, 3.3.10]. In this article, we examine whether syzygies of an arbitrary module over a CM local ring can have semidualizing direct summands. Our result is motivated by the following theorem of Martsinkovsky [Mar96, Proposition 7]:
Theorem 1.2 (Martsinkovsky). If $R$ is non-regular, then no direct sum of the syzygy modules of $k$ maps onto a non-zero $R$-module of finite projective dimension.
In [Avr96, Corollary 9], Avramov generalized this result by showing that each non-zero homomorphic image of a finite direct sum of syzygy modules of $k$ has maximal projective complexity and curvature. In this direction, we prove the following:
Theorem 3.3. Let $R$ be a CM local ring, and let $M$ be an MCM $R$-module. Let $L$ be a non-zero homomorphic image of a finite direct sum of modules of the form $\Omega^R_n(M)$, $n \geqslant 1$. Then $L$ cannot be a semidualizing $R$-module. In particular, $L$ can be neither free nor an MCM $R$-module of finite injective dimension.
Another motivation for Theorem 3.3 is to obtain a characterization of Gorenstein local rings via free summands of certain syzygy modules. In [Dut89, Corollary 1.3], Dutta gave the following characterization of regular local rings.
Theorem 1.3 (Dutta). The local ring $R$ is regular if and only if $\Omega^R_n(k)$ has a non-zero free direct summand for some $n \geqslant 0$.
Inspired by this theorem, in [Sna10, Theorem 2.4], Snapp gave a new characterization of CM local rings by proving that $R$ is CM if and only if, for some $n \geqslant 0$, $\Omega^R_n(R/(\mathbf{x}))$ has a non-zero free direct summand for some system of parameters $\mathbf{x}$ that forms part of a minimal set of generators for $\mathfrak{m}$. Let $\omega$ be a canonical module of $R$. It is well known that $R$ is Gorenstein if and only if $\operatorname{projdim}_R(\omega)$ is finite, which is equivalent to saying that some syzygy module of $\omega$ is free. Therefore, in view of the above characterizations, one may pose the following question: if some syzygy module of $\omega$ has a non-zero free direct summand, then what can be said about the ring? As a consequence of Theorem 3.3, we show that $R$ is Gorenstein if and only if $\Omega^R_n(\omega)$ has a non-zero free direct summand for some $n \geqslant 0$; see Corollary 3.6. Let $(R, \mathfrak{m}, k)$ be a $d$-dimensional CM local ring. The multiplicity of $R$, i.e., the normalized leading coefficient of the Hilbert-Samuel polynomial $P(n)$ ($=$ length of $R/\mathfrak{m}^{n+1}$ for all $n \gg 0$), is denoted by $e(R)$. In [Abh67, (1)], Abhyankar showed that $e(R) \geqslant \mu(\mathfrak{m}) - d + 1$, where $\mu(M)$ denotes the minimal number of generators of an $R$-module $M$. If equality holds, then $R$ is said to have minimal multiplicity, or maximal embedding dimension. It is well known that if $k$ is infinite, then $R$ has minimal multiplicity if and only if there exists an $R$-regular sequence $\mathbf{x}$ such that $\mathfrak{m}^2 = (\mathbf{x})\mathfrak{m}$; see, e.g., [BH98, 4.5.14(c)]. Investigations of these rings were started by Sally in [Sal77] and [Sal79, Theorem 1], where properties of the associated graded rings, Hilbert functions and Poincaré series were studied. Note that every regular local ring is CM of minimal multiplicity. But the converse is not true in general; consider, e.g., $R_1 = k[[X, Y]]/(X^2, XY, Y^2)$, where $X$ and $Y$ are indeterminates and $k$ is a field. Note that $R_1$ is not even Gorenstein. In this article, we provide a number of necessary and sufficient conditions for a CM local ring of minimal multiplicity to be regular or Gorenstein in terms of vanishing of certain Exts or Tors involving syzygy modules of the residue field.
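As a quick sanity check on this example (a sketch of the standard computation, writing $\mathfrak{m}_1$ for the maximal ideal of $R_1$):
\[
\ell(R_1) = 3 = e(R_1), \qquad \mu(\mathfrak{m}_1) = 2, \qquad \dim(R_1) = 0,
\]
so $e(R_1) = \mu(\mathfrak{m}_1) - \dim(R_1) + 1$, i.e., $R_1$ has minimal multiplicity; on the other hand, $\operatorname{Soc}(R_1) = \mathfrak{m}_1$ is two-dimensional over $k$, so $R_1$ has Cohen-Macaulay type $2 \neq 1$ and hence is not Gorenstein.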
It can be noticed that Theorem 1.3 of Dutta is a special case of Theorem 1.2 of Martsinkovsky. In [Tak06,Theorem 4.3], Takahashi generalized Dutta's result in another direction. He showed that R is regular if and only if Ω R n (k) has a semidualizing direct summand for some n ≥ 0. Motivated by these results, in [GGP, Corollaries 3.2 and 3.4], the author along with Gupta and Puthenpurakal proved that if a finite direct sum of syzygy modules of k maps onto a semidualizing R-module or onto a non-zero MCM R-module of finite injective dimension, then R is regular. Inspired by these results, we prove Theorem 5.1, which gives a criterion for Gorenstein local rings via syzygy modules of the residue field. In particular, as a consequence of Theorem 5.1, we obtain that if R is CM local of minimal multiplicity, and if a finite direct sum of syzygy modules of k maps onto a non-zero R-module L such that G-dim R (L) = 0 (Definition 5.2), then R is Gorenstein; see Corollary 5.3. This gives an affirmative answer to a question of Takahashi ([Tak06, Question 6.6]) for CM local rings of minimal multiplicity.
In the same spirit, we obtain a few necessary and sufficient conditions for a CM local ring of minimal multiplicity to be Gorenstein by using canonical modules.
Theorem 5.5. Along with the hypotheses as in Theorem 5.1, suppose also that R has a canonical module ω. Then the following statements are equivalent: (i) R is Gorenstein; (ii) Ext i R (ω, L) = 0 for some (d + 1) consecutive values of i ≥ 1; (iii) Tor R i (ω, L) = 0 for some (d + 1) consecutive values of i ≥ 1. The organization of this article is as follows. In Section 2, we give a few preliminaries which we use in the next sections. Syzygies of arbitrary modules over CM local rings are studied in Section 3, where Theorem 3.3 is proved. This yields a characterization of Gorenstein local rings via syzygies of canonical modules (see Corollary 3.6). Criteria for regular local rings and some examples are given in Section 4; while the same for Gorenstein local rings are shown in Section 5.
Preliminaries
Throughout this article, unless otherwise specified, all rings are assumed to be commutative Noetherian local rings, and all modules are assumed to be finitely generated. Moreover, (R, m, k) always denotes a local ring with the unique maximal ideal m and residue field k. For an R-module M , and n ≥ 0, we denote the nth syzygy module of M by Ω R n (M ), i.e., the image of the nth differential of an augmented minimal free resolution of M . The module Ω R n (M ) depends on the choice of a minimal free resolution of M , but is unique up to isomorphism.
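For orientation, this is just the definition written out: if F : · · · → F 2 → F 1 → F 0 → M → 0 is an augmented minimal free resolution of M with differentials ∂ n , then
\[
\Omega^R_0(M) = M, \qquad \Omega^R_n(M) = \operatorname{Image}(\partial_n) \subseteq \mathfrak m F_{n-1} \quad (n \ge 1),
\]
the containment in m F n−1 reflecting the minimality of the resolution.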
To prove our main results, one may assume without loss of generality that the residue field k is infinite from the following observation: 2.1. If the residue field k is finite, we use the standard trick to replace R by R ′ := R[X] mR[X] , the localization of the polynomial ring R[X] at the prime ideal mR[X]; then (R ′ , m ′ , k ′ ) is a local ring with m ′ = mR ′ and infinite residue field k ′ = k(X). The Hilbert functions of R and R ′ coincide. Recall that R is said to have minimal multiplicity if e(R) = µ(m) − dim(R) + 1. So R has minimal multiplicity if and only if R ′ has minimal multiplicity. We also get that R is regular if and only if R ′ is regular. Note that R → R ′ is a flat local extension. Hence R → R ′ is faithfully flat. Since R → R ′ is flat, for R-modules M and N , the formation of Ext and Tor modules commutes with the base change R → R ′ . Therefore we obtain that ω is a canonical module of R if and only if ω ′ := ω ⊗ R R ′ is a canonical module of R ′ . Moreover, it follows that R is Gorenstein if and only if R ′ is Gorenstein.
2.2.
Let M be an R-module. For every n ≥ 1, since Ω R n (M ) is a submodule of mF for some free module F , one can easily obtain the following relation between the socle of the ring and the annihilator of the syzygy modules: Soc(R) ⊆ ann R (Ω R n (M )) for all n ≥ 1. 2.3. Let (R, m, k) be a CM local ring. Let x ∈ R be an R-regular element. It is not always true that e(R) = e(R/(x)). So, if R has minimal multiplicity, then it is not necessarily true that R/(x) has minimal multiplicity. This statement holds true if x is an R-superficial element. Recall that an element x ∈ m is called R-superficial if there exists a positive integer c such that (m n+1 : x) ∩ m c = m n for all n ≥ c. It is well known that if k is infinite, then there exists an R-superficial element; see [Sal78,page 7]. If depth(R) ≥ 1, then for every R-superficial element x, it can be easily shown that x ∉ m 2 , which yields that µ(m/(x)) = µ(m) − 1. So we obtain the following (Lemma 2.4): if R is a CM local ring of minimal multiplicity with infinite residue field and dim(R) ≥ 1, then there exists an R-regular element x ∈ m \ m 2 such that R/(x) has minimal multiplicity. The following lemma concerns the behaviour of consecutive vanishing of Exts and Tors after going modulo a regular element. It might be known to experts.
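The inclusion recorded in 2.2 follows in one line: since Ω R n (M ) ⊆ mF for a free module F and Soc(R) m = 0,
\[
\operatorname{Soc}(R)\cdot \Omega^R_n(M) \;\subseteq\; \operatorname{Soc}(R)\,\mathfrak m F \;=\; 0 \qquad (n \ge 1).
\]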
Semidualizing summands of syzygy modules
In this section, we study the syzygy modules over CM local rings. We start by recalling a well-known fact about syzygy modules; see, e.g., [BH98, 1.1.5]: Lemma 3.1. Let x ∈ m be an element which is regular on both R and M . Then Ω R/(x) n (M/xM ) ∼ = Ω R n (M )/xΩ R n (M ) for all n ≥ 0. We first treat the depth zero case. Lemma 3.2. Let (R, m, k) be a local ring with depth(R) = 0, and let M be an R-module. Then a non-zero homomorphic image L of a finite direct sum of Ω R n (M ), n ≥ 1, cannot be a semidualizing R-module. Proof. Assume that f : ⊕ n∈Λ Ω R n (M ) jn −→ L is a surjective R-module homomorphism, where Λ is a finite collection of positive integers, and j n , n ∈ Λ, are positive integers. Then, in view of Section 2.2, we obtain that (3.2.1) Soc(R) ⊆ ann R (L). If possible, assume that L is a semidualizing R-module. Then, by the definition of semidualizing modules, we have Hom R (L, L) ∼ = R, which implies that ann R (L) = 0. So, by (3.2.1), we see that Soc(R) = 0, and hence depth(R) ≥ 1, which contradicts the hypothesis depth(R) = 0. Therefore L cannot be a semidualizing R-module.
Now we can achieve one of the main results of this article, Theorem 3.3, stated in the introduction. Proof of Theorem 3.3. Let f : ⊕ n∈Λ Ω R n (M ) jn −→ L be the given surjection, and let x = x 1 , . . . , x d be a maximal R-regular sequence, so that depth(R/(x)) = 0. Tensoring f with R/(x), we get that (3.3.1) f ⊗ R R/(x) is a surjective R/(x)-module homomorphism onto L/xL. Since x is R-regular, and M is MCM, we have that x is an M -regular sequence. Hence, by virtue of Lemma 3.1, inductively, it can be deduced that (3.3.2) Ω R/(x) n (M/xM ) ∼ = Ω R n (M )/xΩ R n (M ) for all n ≥ 1. Therefore, in view of (3.3.1) and (3.3.2), we obtain that L/xL is a non-zero (by Nakayama's lemma) homomorphic image of a finite direct sum of syzygy modules Ω R/(x) n (M/xM ), n ≥ 1. If possible, assume that L is a semidualizing R-module. Since x is an R-regular sequence, by virtue of [Gol84, page 68], we have that L/xL is a semidualizing R/(x)-module, which contradicts the fact depth(R/(x)) = 0, as we see in Lemma 3.2. Therefore L cannot be a semidualizing R-module.
Since R itself is a semidualizing R-module, we obtain that L cannot be free. For the last part, without loss of generality, we may assume that R is complete. Then R has a canonical module ω, say. It is well known that every MCM R-module of finite injective dimension can be written as a direct sum of copies of ω; see, e.g., [Eis95, Corollary 21.14]. Since the canonical module ω is a semidualizing R-module ([BH98, 3.3.10]), in view of the first part, we obtain that L cannot be an MCM R-module of finite injective dimension.
As an immediate corollary of Theorem 3.3, we obtain the following result.
Proof. Note that every non-zero syzygy module Ω R n (M ) (where n ≥ d) is an MCM R-module; see, e.g., [BH98, 1.3.7]. Since Ω R d (M ) is MCM, (i) and (ii) simply follow from Theorem 3.3. For (iii), note that any non-zero direct summand of Ω R n (M ) (where n ≥ d + 1) is also an MCM R-module. Therefore (iii) also follows from Theorem 3.3.
The following elementary example shows that there exists an R-module M such that for every 0 ≤ n ≤ d, Ω R n (M ) has a non-zero free direct summand.
Clearly, for every 0 ≤ n ≤ d, R is a direct summand of Ω R n (M ). As an application of Theorem 3.3, we obtain the following characterization of Gorenstein local rings via syzygies of canonical modules.
Corollary 3.6. Let R be a CM local ring with canonical module ω. Then the following statements are equivalent: (i) R is Gorenstein; (ii) Ω R n (ω) has a non-zero free direct summand for some n ≥ 0. Proof. If R is Gorenstein, then Ω R 0 (ω) = ω ∼ = R. For the other implication, suppose that Ω R n (ω) has a non-zero free direct summand for some n ≥ 0. Then, by virtue of Theorem 3.3, n must be equal to 0. Thus ω has a non-zero free direct summand, and hence R has finite injective dimension, i.e., R is Gorenstein.
Remark 3.7. It can be observed that in the proof of Corollary 3.6, we only use that ω is an MCM R-module of finite injective dimension. Hence, in Corollary 3.6, the canonical module ω of R can be replaced by an arbitrary MCM R-module of finite injective dimension.
Criteria for regular local rings
In this section, we obtain a few criteria for CM local rings of minimal multiplicity to be regular in terms of vanishing of certain Exts or Tors involving syzygy modules of the residue field. By the observations made in Section 2.1, we may assume that the residue field k is infinite. Theorem 4.1. Let (R, m, k) be a d-dimensional CM local ring of minimal multiplicity. Let M and N be non-zero MCM R-modules which are homomorphic images of finite direct sums of syzygy modules of k. Then the following statements are equivalent: (i) R is regular; (ii) Ext i R (M, N ) = 0 for some (d + 1) consecutive values of i ≥ 1; (iii) Tor R i (M, N ) = 0 for some (d + 1) consecutive values of i ≥ 1. We prove the implications (ii) ⇒ (i) and (iii) ⇒ (i) by using induction on d. Let us first consider the base case d = 0. In this case, R having minimal multiplicity is equivalent to saying that m 2 = 0. Therefore, in view of Section 2.2, we have that m ⊆ Soc(R) ⊆ ann R (Ω R n (k)) for all n ≥ 1. Thus m Ω R n (k) = 0 for all n ≥ 0. Since M and N are homomorphic images of finite direct sums of syzygy modules of k, we obtain that mM = 0 and mN = 0. Therefore M and N are non-zero k-vector spaces. So Ext i R (M, N ) = 0 for some i ≥ 1 yields that Ext i R (k, k) = 0 for some i ≥ 1, which gives that projdim R (k) is finite, and hence R is regular. For another implication, Tor R i (M, N ) = 0 for some i ≥ 1 yields that Tor R i (k, k) = 0 for some i ≥ 1, which also implies that projdim R (k) is finite, and hence R is regular.
We now give the inductive step. We may assume that d ≥ 1. Therefore, in view of Lemma 2.4, there exists an R-regular element x ∈ m \ m 2 such that R/(x) has minimal multiplicity. We set (−) := (−) ⊗ R R/(x). So R is a (d − 1)-dimensional CM local ring of minimal multiplicity. Since M and N are MCM R-modules, and x is R-regular, we get that x is regular on both M and N . Hence M and N are MCM R-modules. Let f : ⊕ n∈Λ ′ Ω R n (k) j ′ n −→ M and g : ⊕ n∈Λ ′′ Ω R n (k) j ′′ n −→ N be surjective R-module homomorphisms, where Λ ′ and Λ ′′ are finite collections of non-negative integers, and {j ′ n , j ′′ n } are positive integers. Tensoring f and g with R/(x), we obtain that M and N are non-zero MCM homomorphic images of finite direct sums of syzygy modules of k over R/(x); moreover, by Lemma 2.5, the Exts (resp. Tors) in (ii) (resp. (iii)) vanish over R/(x) for some d consecutive values of i ≥ 1. Therefore, by induction hypothesis, we obtain that R is regular, and hence R is regular as x ∈ m \ m 2 is an R-regular element. This proves the implications (ii) ⇒ (i) and (iii) ⇒ (i), and hence the theorem.
Remark 4.2. If (R, m, k) is a d-dimensional CM local ring, then every non-zero syzygy module Ω R n (k) (where n ≥ d) is MCM; see, e.g., [BH98, 1.3.7]. Hence any non-zero direct summand of Ω R n (k) (n ≥ d) is also MCM. Therefore, in Theorem 4.1, M and N can be taken as any non-zero direct summands of Ω R n (k), n ≥ d. The following example shows that the number of consecutive vanishing of Exts or Tors in Theorem 4.1 cannot be further reduced.
Example 4.3. (1) Let R := k[[X, Y ]]/(XY ), where k[[X, Y ]] is a formal power series ring in two indeterminates X and Y over a field k. We set m := (x, y), where x and y are the images of X and Y in R respectively. Clearly, (R, m, k) is a CM local ring of dimension 1. It can be easily seen that e(R) = 2 and µ(m) = 2. Therefore R has minimal multiplicity. Note that we have the following direct sum decomposition: Ω R 1 (k) = m = (x, y) = (x) ⊕ (y). We set M := (x) and N := (y). Since Ω R 1 (k) is MCM (by Remark 4.2), we obtain that M and N are MCM homomorphic images of Ω R 1 (k). Considering the minimal free resolution of M : · · · → R --x--> R --y--> R → M → 0, which is periodic, one checks that Ext i R (M, N ) and Tor R i (M, N ) each vanish only for alternating values of i, never for two consecutive values of i ≥ 1. According to Theorem 4.1, we need at least 2 consecutive vanishing of Exts or Tors to conclude that R is regular. In this case, R is not regular. Note that one can also compute Tor R i (M, N ) and Ext i R (M, N ) to conclude the fact. (2) Suppose R is a CM local ring of minimal multiplicity, which is not Gorenstein. (For example, R can be taken as k[X 1 , . . . , X n ]/(X 1 , . . . , X n ) 2 for some n ≥ 2, where X 1 , . . . , X n are indeterminates, and k is a field). Let ω be a canonical module of R. Clearly, ω is a non-free R-module. Setting M = N = ω, we have Ext i R (M, N ) = 0 for all i ≥ 1, but R is not regular.
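For the record, the vanishing pattern in (1) can be made explicit: since ann R (x) = (y) and ann R (y) = (x), we have M ∼ = R/(y) and N ∼ = R/(x) ∼ = k[[y]]. Tensoring the periodic resolution above with N turns multiplication by x into the zero map and multiplication by y into an injective map, whence
\[
\operatorname{Tor}^R_i(M,N) \cong \begin{cases} k, & i \ge 2 \text{ even},\\ 0, & i \text{ odd}, \end{cases} \qquad \operatorname{Ext}^i_R(M,N) \cong \begin{cases} k, & i \text{ odd},\\ 0, & i \ge 2 \text{ even}, \end{cases}
\]
so neither family vanishes for two consecutive values of i ≥ 1. We now close this section by presenting a natural question.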
Question 4.5. Can we drop the minimal multiplicity hypothesis in Theorem 4.1?
Though we have not been able to find a counterexample, we believe that if this hypothesis is omitted, then Theorem 4.1 does not hold true.
Criteria for Gorenstein local rings
In this section, we provide several criteria for Gorenstein local rings via syzygy modules of the residue field over CM local rings of minimal multiplicity. We start with the following theorem, which is analogous to the results by Ulrich. Theorem 5.1. Let (R, m, k) be a d-dimensional CM local ring of minimal multiplicity, and let L be a non-zero MCM R-module which is a homomorphic image of a finite direct sum of syzygy modules of k. If Ext i R (L, R) = 0 for some (d + 1) consecutive values of i ≥ 1, then R is Gorenstein. Proof. In view of Section 2.1, we may assume that the residue field k is infinite. We use induction on d. We first consider the base case d = 0. In this case, we have that L is a non-zero k-vector space as in the proof of Theorem 4.1. So Ext i R (L, R) = 0 for some i ≥ 1 yields that Ext i R (k, R) = 0 for some i ≥ 1, which implies that R is Gorenstein, see, e.g., [Mat86,Theorem 18.1].
We now give the inductive step. We may assume that d ≥ 1. So, in view of Lemma 2.4, there exists an R-regular element x ∈ m \ m 2 such that R/(x) has minimal multiplicity. We set (−) := (−)⊗ R R/(x). So R is a (d−1)-dimensional CM local ring of minimal multiplicity. As in the proof of Theorem 4.1, we get that L is an MCM homomorphic image of a finite direct sum of Ω R n (k), n ≥ 0. Furthermore, in view of Lemma 2.5, Ext i R (L, R) = 0 for some (d + 1) consecutive values of i ≥ 1 yields that the corresponding Exts over R/(x) vanish for some d (= dim(R/(x)) + 1) consecutive values of i ≥ 1. Therefore, by induction hypothesis, we obtain that R is Gorenstein, and hence R is Gorenstein as x is R-regular. This completes the proof of the theorem.
Let us recall the notion of G-dimension, due to Auslander and Bridger [AB69]. Definition 5.2. An R-module M is said to have G-dimension zero (written G-dim R (M ) = 0) if M is reflexive, and Ext i R (M, R) = 0 = Ext i R (Hom R (M, R), R) for all i ≥ 1. As a consequence of Theorem 5.1, we obtain the following result.
Corollary 5.3. Let (R, m, k) be a CM local ring of minimal multiplicity. If a finite direct sum of syzygy modules of k maps onto a non-zero R-module L such that G-dim R (L) = 0, then R is Gorenstein.
Proof. In view of the Auslander-Bridger Formula ([AB69, Theorem (4.13)(b)]), since R is CM and G-dim R (L) = 0, we obtain that L is MCM. Since G-dim R (L) = 0, we also have Ext i R (L, R) = 0 for all i ≥ 1. So the result follows from Theorem 5.1.
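Recall that the Auslander-Bridger formula invoked here states that, for a module L of finite G-dimension,
\[
\operatorname{G\text{-}dim}_R(L) + \operatorname{depth}_R(L) = \operatorname{depth}(R),
\]
so G-dim R (L) = 0 over a CM local ring forces depth R (L) = depth(R) = dim(R), i.e., L is MCM.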
Remark 5.4. In [Tak06, 6.6], after proving Theorem 6.5, Takahashi raised the following question which is still open: If Ω R n (k) has a non-zero direct summand of G-dimension 0 for some n > depth(R) + 2, then is R Gorenstein? Corollary 5.3 (in particular) provides an affirmative answer to this question for CM local rings of minimal multiplicity.
We now give a few criteria for Gorenstein local rings in terms of vanishing of certain Exts or Tors involving canonical modules; see Theorem 5.5 stated in the introduction. Proof of Theorem 5.5. Without loss of generality, we may assume that the residue field k is infinite. As before, to prove these implications ((ii) ⇒ (i) and (iii) ⇒ (i)), we use induction on d. We first assume that d = 0. In this case, we have that L is a non-zero k-vector space. So Ext i R (ω, L) = 0 for some i ≥ 1 yields that Ext i R (ω, k) = 0 for some i ≥ 1, which gives that projdim R (ω) is finite, and hence R is Gorenstein. For another implication, Tor R i (ω, L) = 0 for some i ≥ 1 yields that Tor R i (ω, k) = 0 for some i ≥ 1, which also implies that projdim R (ω) is finite, and hence R is Gorenstein.
For the inductive step, we assume that d ≥ 1. In view of Lemma 2.4, there exists an R-regular element x ∈ m \ m 2 such that R/(x) has minimal multiplicity. We set (−) := (−) ⊗ R R/(x). So R is a (d − 1)-dimensional CM local ring of minimal multiplicity. As before, we get that L is an MCM homomorphic image of a finite direct sum of Ω R n (k), n ≥ 0. Since ω is a canonical module of R, it is well known that ω/xω is a canonical module of R/(x). By virtue of Lemma 2.5, Ext i R (ω, L) = 0 (resp. Tor R i (ω, L) = 0) for some (d + 1) consecutive values of i ≥ 1 yields that the corresponding Exts (resp. Tors) over R/(x) vanish for some d (= dim(R/(x)) + 1) consecutive values of i ≥ 1. Therefore, by induction hypothesis, we obtain that R is Gorenstein, and hence R is Gorenstein. This proves the implications, and hence the theorem.
Remark 5.6. It can be noticed that in the proof of Theorem 5.5, it is only used that ω is an MCM R-module of finite injective dimension. Therefore, in Theorem 5.5, one can replace ω (canonical module of R) by an arbitrary MCM R-module of finite injective dimension.
We now analyze a few examples.
Example 5.7. In Theorems 5.1 and 5.5, the number of consecutive vanishing of Exts or Tors cannot be further reduced. Let (R, m, k) be an Artinian local ring such that m 2 = 0 and µ(m) ≥ 2. (For example, R can be taken as k[X 1 , . . . , X n ]/(X 1 , . . . , X n ) 2 for some n ≥ 2, where X 1 , . . . , X n are indeterminates). Clearly, the ring R has minimal multiplicity, and it is not Gorenstein. Let L be a non-zero homomorphic image of a finite direct sum of syzygy modules of k. Since R is Artinian, L is an MCM R-module. In this case, we need at least one (= dim(R) + 1) vanishing of Ext i R (L, R), Ext i R (ω, L) or Tor R i (ω, L) to conclude that R is Gorenstein. So the number of consecutive vanishing of Exts or Tors cannot be further reduced.
Example 5.8. If L is not a homomorphic image of a finite direct sum of syzygy modules of k, then Theorems 5.1 and 5.5 do not hold true. For example, taking L = R in Theorems 5.1 and 5.5, we have that Ext i R (L, R) = 0 = Tor R i (ω, L) for all i ≥ 1; while L = ω in Theorem 5.5 yields that Ext i R (ω, L) = 0 for all i ≥ 1. But there are CM local rings of minimal multiplicity which are not Gorenstein.
As before, we may now ask the following natural question: Question 5.9. Can we omit the hypothesis that R has minimal multiplicity in Theorems 5.1 and 5.5? | 2018-04-29T10:37:59.000Z | 2016-11-10T00:00:00.000 | {
"year": 2016,
"sha1": "9b45fd830e89e42dd9c8217e07121b1e2693bbc9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1611.03263",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9b45fd830e89e42dd9c8217e07121b1e2693bbc9",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
127295826 | pes2o/s2orc | v3-fos-license | BEAMDB and MOLD—Databases at the Serbian Virtual Observatory for Collisional and Radiative Processes
In this contribution we present a progress report on two atomic and molecular databases, BEAMDB and MolD, which are web services at the Serbian virtual observatory (SerVO) and nodes within the Virtual Atomic and Molecular Data Center (VAMDC). The Belgrade Electron/Atom (Molecule) DataBase (BEAMDB) provides collisional data for electron interactions with atoms and molecules. The Photodissociation (MolD) database contains photo-dissociation cross sections for individual rovibrational states of diatomic molecular ions and rate coefficients for the chemi-ionisation/recombination processes. We also present a progress report on the major upgrade of these databases and plans for the future. As an example of how the data from the BEAMDB may be used, a review of electron scattering from methane is described.
Introduction
Databases in atomic and molecular physics have become essential for developing models and simulations of complex physical and chemical processes and for the interpretation of data provided by observations and measurements, e.g., in laboratory plasma [1], and studying plasma chemistries and reactions in planetary atmospheres [2]. In the last decade large amounts of data have been collected for medical applications including stopping powers in different media and tissues as well as cross sections for atomic particles (photons, electrons, positrons, ions) interacting with biomolecules and their constituents in order to achieve an insight, at the molecular level, of the radiation damage and radiotherapy [3]. In order to solve the problem of analysis and mining of such large amounts of data, the creation of Virtual Observatories and Virtual Data Centres have been crucial ( [4] and refs. therein). In this contribution we present a progress report of two atomic and molecular databases, the Belgrade Electron/Atom (Molecule) DataBase (BEAMDB) and Photodissociation (MolD), which are web services at the Serbian virtual observatory (SerVO) [5] and nodes within the Virtual Atomic and Molecular Data Centre (VAMDC) [6].
This branch of science, often entitled 'Data management' or 'Data mining', is undergoing rapid expansion and development, such that nowadays it is not enough for these databases to satisfy the standards of Virtual centres, etc.; they also have to deal with new challenges such as the input of large amounts of data, i.e., Big Data. Thus, we can expect major investment and activity in this field in the next decade. Indeed, in September 2018, the European Strategy Forum on Research Infrastructures (ESFRI) presented its Strategy Report and Roadmap 2018. As a strategic instrument that identifies Research Infrastructures (RI) of strategic interest for Europe and the wider research community, the document presented Big data and e-infrastructure needs and highlighted the Virtual Observatory (VO) and The International VO Alliance (IVOA) as very important initiatives, with VAMDC-relevant data standards as well as state-of-the-art data analysis tools being highlighted as evidence of good practice.
BEAMDB and MolD Database Nodes
The Belgrade nodes of VAMDC are hosted by SerVO (see Figure 1a) and currently consist of two databases BEAMDB (servo.aob.rs/emol) and MolD (servo.aob.rs/mold). These databases have been developed using the standards developed and operated by the VAMDC project [6] (see Figure 1b). VAMDC and SerVO have been through several different stages of development. SerVO (http://servo.aob.rs/) is a project formally created in 2008 but it originated in 2000, when the first attempts to organize data and to create a kind of web service were made in the BELDATA project, the precursor of SerVO. VAMDC started on 1 July 2009 as a FP7-funded project and originally was to be developed with about 20 databases, but the portal now has more than 33 operational databases [7]. We are currently in a transition phase updating the software "platform" (Python update, Django, XSAMS evolution, new Query Store on VAMDC, etc.) as the consequence of the rapid development and expansion of our two databases. Some current technical characteristics and aspects of these databases will be briefly introduced here (for details see [4]). Access to the BEAMDB and MolD data is possible via Table Access Protocol (TAP), a Virtual Observatory standard for a web service, or via an AJAX (Asynchronous JavaScript and XML)-enabled web interface (http://servo.aob.rs/). Both queries return data in XSAMS (XML Schema for Atoms, Molecules and Solids) format. The XSAMS schema provides a framework for a structured presentation of atomic, molecular, and particle-solid interaction data in an XML file. The underlying application architecture is written in Django, a Python web framework, and represents a customization and extension of VAMDC's NodeSoftware [8,9].
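As an illustration of the TAP access route described above, the following minimal Python sketch queries a VAMDC-TAP synchronous endpoint and stores the returned XSAMS document. It is a sketch only: the base URL, the VSS2 restrictable name and the query string are illustrative assumptions and should be checked against the node documentation.

import requests

# Hypothetical TAP endpoint of a VAMDC node; consult the node
# documentation for the actual base URL.
TAP_SYNC = "http://servo.aob.rs/emol/tap/sync"

params = {
    "LANG": "VSS2",     # VAMDC SQL subset query language
    "FORMAT": "XSAMS",  # request the result as an XSAMS XML document
    # The restrictable name below is an assumption; node docs list the supported ones.
    "QUERY": "SELECT ALL WHERE MoleculeStoichiometricFormula = 'CH4'",
}

response = requests.get(TAP_SYNC, params=params, timeout=120)
response.raise_for_status()

# Persist the XSAMS payload for later parsing and visualization.
with open("ch4_beamdb.xsams", "wb") as fh:
    fh.write(response.content)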
BEAMDB-Belgrade Electron/Atom(Molecule) DataBase
The origins of this database date from the early ideas of developing an Information System in Atomic Collision Physics [10], and at first it provided only cross sections for electron interactions with neutral atoms and molecules [11]. However, the database has now been extended to cover electron spectra (energy loss and threshold) and ionic species [4].
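Once an XSAMS file has been retrieved (e.g., as in the sketch in the previous section), its content can be inspected with the Python standard library alone. The element name "Molecule" follows the XSAMS schema, but the exact structure of the returned document should be checked case by case; the file name below is the one used in the earlier sketch.

import xml.etree.ElementTree as ET

tree = ET.parse("ch4_beamdb.xsams")

def local_name(tag):
    """Strip the '{namespace}' prefix that ElementTree attaches to XSAMS tags."""
    return tag.rsplit("}", 1)[-1]

# List every molecular species described in the document together with
# the text content of its direct children.
for elem in tree.iter():
    if local_name(elem.tag) == "Molecule":
        print({local_name(child.tag): (child.text or "").strip() for child in elem})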
Maintaining databases on cross sections and other collisional data, such as different types of spectra, is important for several reasons. One is to provide a comprehensive set of data to both researchers and applied scientists or engineers who need such data to design and make better devices and products. On the other hand, we need basic data to be able to include them in sophisticated models and to understand more complex processes; one recent example is the use of electron cross section data for oxygen and water molecules in order to reveal the role of electron-induced processing in the coma of Comet 67P/Churyumov-Gerasimenko during the Rosetta mission [12]. Such an analysis clearly demonstrated the need for comprehensive datasets of electron-molecule collisions in a format that is readily accessible and understandable to the space community. Electron collision cross sections are also the subject of databases with a particular interest in plasma process data. An overview of such databases is given by Huo and Kim [13] and White et al. [14], with the emphasis on the role of electron collisional data in gases and surfaces in plasma processes. Another database specialized for modelling in low-temperature plasma is the LXCat database [15]. The compilation of electron scattering data from atoms and molecules has a rich history. After the discovery of the electron in 1897 by J. J. Thomson (see more about his route and how he conducted the experiments in [16]), a series of experiments on how electron beams behave when passing through a gas started to develop. The discovery of the electron was followed by intense research on its interactions with matter, such that several Nobel Prizes were awarded for such studies: in 1905 to Philipp E. A. von Lenard for his "work on cathode rays"; in 1906 to Joseph J. Thomson "in recognition of the great merits of his theoretical and experimental investigations on the conduction of electricity by gases"; and in 1925 to James Franck and Gustav Ludwig Hertz "for their discovery of the laws governing the impact of an electron upon an atom." Carl W. Ramsauer at Danzig Technische Hochschule and Sir John S. Townsend at Oxford University independently studied the scattering of electrons of low energy by atoms and discovered the occurrence of minima in the total cross section, an effect that was named after them. The Ramsauer-Townsend effect was also observed in electron scattering from the methane molecule at the end of the 1920s and beginning of the 1930s, when electron collision studies were pioneered at St. John's College and Trinity College, Cambridge. Dymond and Watson [17] made the first direct determination of the scattering curve for slow electrons by the helium atom. Arnot [18] performed scattering experiments in mercury vapour, while Bullard and Massey [19] performed experiments over a wider range of scattering angles in order to observe maxima and minima in scattering curves by argon atoms, demonstrating diffraction phenomena. In parallel, the theoretical description of elastic scattering emerged on the basis of quantum wave mechanics by Mott [20] and was later fully developed by Mott and Massey [21].
Theoretical calculations producing cross sections have advanced over the years as the use of quantum mechanics allowed new methods to be developed, such as the time-independent close-coupling and R-matrix approaches used to study low-energy collisions [22], and the relativistic convergent close-coupling method [23], the distorted-wave Born approach [24], the absorption potential [25] and the optical potential method [26] used to study high-energy collisions [27]. A special advantage of the time-independent close-coupling approach is that it can provide a guide to uncertainty estimates of the calculated values.
As an example of how BEAMDB may be used, we will discuss a review of electron scattering from methane. The methane molecule as well as other hydrocarbons have been identified as sources of infrared absorption in the atmospheres of giant planets. Novel measurements of IR spectra allow more precise determination of the methane content in these atmospheres [28]. It is considered a constituent of the atmosphere of Uranus, with an abundance of 2.3%, and of that of Neptune, with 1.5%. It plays a very important role in the photochemistry processes on Neptune [29]. It is also considered one of the major greenhouse gases in the Earth's atmosphere [30].
Elastic Electron Scattering by Methane Molecule-Early Experiments
The methane molecule is one of the molecules for which we have a relatively complete dataset for electron interactions. The electronic structure of methane is representative of most bio-molecules, with its valence molecular orbitals being delocalized over the entire nuclear frame. As discussed by Herzberg [31], at first glance it is not obvious which conformation of C and H atoms would be the most stable and to which point group it should be attributed: a regular tetrahedron (T d ), a non-regular tetrahedron (C 3v ), or square planar form (D 4h ). In the T d point group representation the highest occupied molecular orbitals (HOMO) of methane are of a 1 and t 2 symmetry, so the ground state of methane has the configuration 1(a 1 ) 2 2(a 1 ) 2 1(t 2x ) 2 1(t 2y ) 2 1(t 2z ) 2 [32]. The pairing of all electrons in the HOMO makes methane a closed-shell compound. The binding energy of the lowest MO 2(a 1 ) is −18.8 eV, the energy of the three 1t 2 orbitals is −10.6 eV, while the first LUMO 3(a 1 ) is at +1.99 eV and the second 2t 2 is at +3.90 eV [33].
The first measurements of electron differential elastic cross sections (DCS) for methane were performed in 1931 by Arnot [34] and by Bullard and Massey [35]. Arnot measured DCSs at higher impact energies of 30, 84, 205, 410 and 820 eV, while Bullard and Massey used lower incident electron energies of 4, 6, 10, 20 and 30 eV and covered an angular range from 20° to 120° in steps of 10°, except at an incident energy of 10 eV, where an additional point was measured at 125°. A close resemblance between the DCS of methane and that of an argon atom was recognized. From this observation, the authors opened up the possibility of considering scattering by heavy atoms in terms of successive electron shells [35]. In 1932 Mohr and Nicoll [36] also measured DCS for methane at incident energies of 30, 52 and 84 eV, covering the full accessible angular range up to 150°. Hughes and McMillen [37] investigated the interference effects between the electron waves scattered by individual atoms as indicated by the presence of maxima in the curves for the ratios of the DCSs for different hydrocarbon molecules. For methane they measured DCS at 11 incident energies in the range from 10 eV to 800 eV and in the angular range from 10° to 150°.
Elastic Electron Scattering by Methane Molecule-Modern Experiments and Calculations
Gianturco and Thompson [32] used a model of scattering by a rigid molecule with inclusion of exchange and polarisation (with an ad hoc short-range cutoff parameter) effects in an approximate way in order to calculate differential cross sections at low electron energies. For CH 4 they used the scattering states of symmetry A 1 , T 1 , T 2 and E; A 2 was found to be of little importance for low-energy scattering, with A 1 and T 2 being found to be the most important. The exchange effects were included in each of these states. The results were presented graphically for 9.5 eV incident electron energy and for three cutoffs, r o = 0.92, 0.88 and 0.84. The authors concluded that it was not possible to give a single r o that gives complete agreement with the wide variety of experimental data, which reflects the crudeness of their model, but a good feature of the model is the correct incorporation of the static potential, calculated from good molecular wavefunctions. For intermediate and higher electron energies, from 205 to 820 eV, Dhal et al. [38] calculated DCS using the first Born, the Eikonal and the two-potential approximations including the polarisation and exchange effects. In these calculations no account was taken of the absorption effects that would certainly improve the accuracy of the calculated data.
New experiments exploring vibrationally elastic cross sections were performed by Rohr [39] at energies below 10 eV, i.e., at 1, 2 and 5 eV, and from 10° to 120°, using a spectrometer that consists of an electron monochromator to produce a high-resolution electron beam and a rotatable electron analyser capable of resolving the vibrational modes in energy loss mode; both systems had 127° electrostatic selectors. Absolute DCS were obtained by a normalization to the integral cross sections obtained in transmission experiments by other authors. A comprehensive study of differential, integral and momentum transfer cross sections was reported by Tanaka et al. [40] for elastic e/CH 4 scattering. They performed measurements using a crossed electron beam, molecular beam apparatus with the relative flow technique allowing the elastic DCS of CH 4 to be derived by comparison with those of He. DCSs were measured at electron impact energies of 3, 5, 6, 7.5, 9, 10, 15, and 20 eV for scattering angles from 30° to 140°. The authors concluded that the angular distribution in the energy region of 3 to 7.5 eV is dominated by a d-wave scattering, as was theoretically predicted and also established experimentally at 5 eV. Later the same authors remeasured the elastic DCS with a new spectrometer for impact energies from 1.5 to 100 eV and scattering angles from 10° to 130°, and it was found that the previous values were systematically lower by about 30-35% [41].
Using the same set of data for normalization, Vušković and Trajmar [42] obtained DCSs for elastic scattering as well as inelastic cross sections by recording energy-loss spectra at incident energies of 20, 30 and 200 eV. The set of data obtained at 200 eV was normalized to the data by Dhal et al. [38]. All measured relative angular dependences were corrected for effective path length variation with scattering angle.
A group from University College London used an electron spectrometer, incorporating hemispherical electrostatic energy analysers, and a crossed beam of target molecules in order to measure elastic and vibrational excitation cross sections at low incident electron energies from 7.5 to 20 eV and scattering angles from 32° to 142° [43].
After the measurements by Rohr [39] at the University of Kaiserslautern, Sohn et al. [44] investigated threshold structures in the cross sections of low-energy electron scattering of methane. They also presented the measurements of angular dependences (DCS) at 0.6 and 1.0 eV from 35° to 105° scattering angles. Müller et al. [45] investigated the rotational excitation in vibrationally elastic e/CH 4 collisions and presented vibrationally elastic (rotationally summed) differential cross sections at the primary energies 5, 7.5 and 10 eV. They normalized their results to the previous measurements of Tanaka et al. [40]. Sohn et al. [46], with an improved (in comparison with [45]) crossed-beam spectrometer, measured DCSs at low energies in the range from 0.2 to 5.0 eV in the angular range between 15° and 138°. With the aid of a phase-shift analysis, integrated cross sections were calculated as well, but the absolute DCS scale was obtained by the relative flow technique, using He as a reference gas.
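A phase-shift analysis of the kind mentioned above reduces, once a set of partial-wave phase shifts δ_l is available, to standard partial-wave sums for the integral and momentum-transfer cross sections. The sketch below is illustrative only and takes the phase shifts as given input; with k in atomic units the results come out in units of a 0 2.

import numpy as np

def integral_cs(k, deltas):
    """Integral elastic cross section from partial-wave phase shifts.

    k: electron wavenumber; deltas: array of phase shifts delta_l in radians.
    """
    l = np.arange(len(deltas))
    return (4.0 * np.pi / k**2) * np.sum((2 * l + 1) * np.sin(deltas) ** 2)

def momentum_transfer_cs(k, deltas):
    """Momentum-transfer cross section from the same phase shifts."""
    l = np.arange(len(deltas) - 1)
    return (4.0 * np.pi / k**2) * np.sum((l + 1) * np.sin(deltas[:-1] - deltas[1:]) ** 2)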
Further DCS measurements at intermediate and high electron incident energies were made by Sakae et al. [47]. The angular range was 5°-135° and the electron energies were 75, 100, 150, 200, 300, 500 and 700 eV. Absolute DCS were determined by using He as the known reference DCS. Data were presented as rotationally and vibrationally summed elastic DCS, with the overall uncertainty being estimated at approximately 10%. Shyn and Cravens [48] measured differential vibrationally elastic scattering cross sections from 5 to 50 eV and from 12° to 156°. The beam of methane molecules was modulated at a frequency of 150 Hz so that the pure beam signal could be separated from the background using a phase-sensitive detector. The overall uncertainty of the data was about 14%, including the uncertainty of the He cross sections (filled into the chamber for normalization), from which relative curves were placed on an absolute scale.
Jain and Thompson [49] calculated cross sections for low electron-molecule scattering using a local exchange potential and polarisation potential introducing a first-order wavefunction. DCS were calculated at 3 and 5 eV using three different types of potentials, one parameter-free polarisation potential and two phenomenological potentials. Abusalbi et al. [50] calculated ab initio interaction potentials for e/CH 4 scattering at 10 eV impact energy. Jain [51] exploited a spherical optical complex potential model to investigate, over a wide energy range (0.1-500 eV), electron interactions with methane. The whole energy range was divided into three regions; (i) from 0.1 to 1.0 eV in which a Ramsauer-Townsend minimum is observed in the total cross section; (ii) between 2 and 20 eV where the scattering is dominated by a d-wave broad structure around 7-8 eV; and (iii) from 20 to 500 eV, where ionization and dissociation dominate over the elastic process. It was found that an absorption potential using the distorted charge density is more successful than one with polarized density and that the elastic cross sections are reduced significantly by including the imaginary part in the optical potential.
Gianturco et al. [52] used a parameter-free treatment of the interaction e/CH 4 and calculated cross sections for low energies. Functional forms of exchange and polarisation interactions were examined to find their importance over the whole range of collision energies. McNaughten et al. [53] reported rotationally elastic DCSs in the 0.1-20 eV energy region using the parameter-free model polarisation potential with electron exchange treated exactly and distortion effects included. Lengsfield [54] used the complex Kohn method with polarized trial functions at incident energies from 0.2 to 10 eV. The latter was the first ab initio study to accurately characterize low-energy electron-methane scattering.
Later Gianturco et al. [55] calculated vibrational elastic, rotationally summed cross sections with ab initio static-exchange interactions and using a symmetry-adapted, single-centre expansion (SCE) representation for the close-coupling (CC) equations. Elastic DCS were obtained at energies 10, 15, 20, 30 and 50 eV. Nishimura and Itikawa [56] calculated vibrationally elastic DCS at 10 to 50 eV impact electron energies using an ab initio electrostatic potential and treating exchange and polarization in approximate way. Nestmann et al. [57] employed the variational R-matrix theory based on the fixed-nuclei approximation in order to calculate DCSs at low energies, i.e., 0.2, 0.5, 0.7, 1.5, 2.5, 3.5 and 5.0 eV. The structures in the calculated DCSs are shifted to smaller angles compared with the experimental results due to the omission of nonadiabatic effects.
Mapstone and Newell [58] reported measurements of hydrocarbon molecules at incident energies from 3.2 to 15.4 eV using an electron spectrometer with hemispherical analysers in both the monochromator and the analyser. They first determined volume correction factors by using a phase-shift analysis for helium DCS as a reference gas and then normalized relative values to the data of Tanaka et al. [40]. Bundschu et al. [59] performed a combined experimental and theoretical study for low-energy electron interactions with the methane molecule. They determined absolute DCS at energies from 0.6 to 5.4 eV and within the angular range from 12° to 132.5°. Elastic differential cross sections were calculated using a body-fixed, SCE for the CC equations.
A group at Wayne State University, although primarily interested in positron scattering by methane [60], also measured electron elastic cross sections at 15, 20 and 200 eV. Their electron beam was produced as secondary electrons from the moderator, with an energy spread of several electronvolts. Maji et al. [61] measured elastic DCSs for a number of carbon-containing molecules in the high-energy region from 300 to 1300 eV by the crossed-beam technique. They wanted to test the validity of the independent atom model for polyatomic molecules. The measurements of DCS were carried out with an energy resolution of about 1 eV and by using the relative flow method at 30°, where the overall uncertainty was 15%. Basavaraju et al. [62] gave tabulated values for the DCSs measured in [61] and obtained scaled DCSs regarded as a universal function of a scaled momentum transfer for a number of molecular targets.
Iga et al. [63] performed a joint theoretical (for 1-500 eV) and experimental (100-500 eV) investigation on e/CH 4 elastic scattering. Within the complex optical potential method they used the Schwinger variational iterative procedure combined with the distorted-wave approximation to calculate the scattering amplitudes. Experimentally, they used the relative flow technique and neon as the reference gas. The overall experimental uncertainty in the obtained absolute DCSs was about 10.3%. Lee et al. [64], on the basis of previous calculations, tested an improved version of the quasifree scattering model (QFSM) potential proposed by Blanco and García [25].
Elastic Electron Scattering by Methane Molecule-The Twenty-First Century Results
Bettega et al. [65] reported elastic DCSs for a class of molecules (XH 4 ), among them methane, at incident electron energies between 3 and 10 eV using the Schwinger multichannel method with pseudopotentials. They demonstrated the importance of polarization effects in elastic collisions. Absolute differential elastic and vibrational excitation cross sections were measured by Allan [66], who exploited the improved resolution of the electron spectrometer in Fribourg to achieve the separation of all four vibrational modes within 0.4 eV from the elastic peak at impact energies from 0.1 to 1.5 eV. Varambhia et al. [67] and Tennyson [27] presented a sophisticated R-matrix approach to calculations of low-energy electron-alkane collisions. DCSs were obtained for rotationally summed elastic scattering and the graphs were presented at 3.0 and 5.0 eV incident energies. Brigg et al. [68] performed R-matrix calculations at energies between 0.02 and 15 eV using a series of different ab initio models for both the target and the full scattering system. Fedus and Karwasz [69] investigated the depth and position of the Ramsauer-Townsend minimum in methane by applying the MERT (Modified Effective Range Theory) approach. They presented the results at incident energies from 0.2 to 1.5 eV and compared them with other results. They were able to put forward a recommended set of integral and momentum transfer data for methane at energies from 10 −3 to 2.0 eV. Sun et al. [70] used a difference converging method (DCM) to predict accurate values of experimentally unknown DCSs. They presented vibrationally elastic cross sections at 5.0 eV.
Elastic Electron Scattering by Methane Molecule-Coverage in the BEAMDB
Elastic cross sections for electron scattering by molecular targets comprise the majority of data items within the BEAMDB. Molecular targets covered by the present database are: alanine, formamide, tetrahydrofuran, hydrogen sulfide, pyrimidine, N-methylformamide, water, furan, nitrous oxide, and newly added datasets for methane. Currently there are 17 datasets for elastic DCSs for methane, spanning from 1931 (Arnot [34] and Bullard and Massey [35]) to the most recent work by Iga et al. [63]. For example, in Figure 2 we present in 3D graphical form one of the rather complete sets of data by Boesten and Tanaka [41] that have been used by many researchers for comparison and/or normalization.
Figure 2. Elastic DCSs for methane measured by Boesten and Tanaka [41] in the range from 1.5 to 100 eV.
Photodissociation-The MolD Database
MolD, as a part of SerVO and VAMDC, is intensively used by astrophysicists for model atmosphere calculations of solar and near solar-type stars, atmospheric parameter determinations, etc., as well as for theoretical and laboratory plasma research [71][72][73][74][75][76]. Such data are also important for astrochemistry and especially for studies of early Universe chemistry (see e.g., Heathcote et al. [74]). MolD consists of several components, such as data collections and user interface tools (e.g., on-site AJAX-enabled queries and visualizations).
The database contains photodissociation cross sections for the individual rovibrational states of the diatomic molecular ions as well as corresponding data on molecular species and molecular state characterizations (rovibrational energy states, etc.). These cross sections can be summed and averaged (Figure 3) for further applications, e.g., obtaining rate coefficients (see Figure 4) for non-local thermal equilibrium models of early universe chemistry (Coppola et al. [77]), models of the solar atmosphere, or models of the atmospheres of white dwarfs (Wen & Han [78]), etc. The cross sections are obtained using a quantum mechanical method where the photodissociation process is treated as the result of radiative transitions between the ground and the first excited adiabatic electronic state of the molecular ion (see e.g., Ignjatović et al., 2014b [79]). The transitions are the outcome of the interaction of the electronic component of ion-atom systems with the electromagnetic field in the dipole approximation.
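Schematically, the summing and averaging mentioned above amount to a Boltzmann-weighted mean over the rovibrational levels; the form below is a standard one and is written here only for orientation, the precise weights being those used in [79]:
\[
\overline{\sigma}(\lambda, T) \;=\; \frac{\sum_{v,J} (2J+1)\, e^{-E_{v,J}/k_{\mathrm B}T}\, \sigma_{v,J}(\lambda)}{\sum_{v,J} (2J+1)\, e^{-E_{v,J}/k_{\mathrm B}T}}\, .
\]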
MolD offers on-site services that include calculation of the average thermal cross sections based on temperature for a specific molecule and wavelength. Besides acting as a VAMDC-compatible web service, accessible through the VAMDC portal and other tools implemented using VAMDC standards, MolD offers additional on-site utilities that enable the plotting of average thermal cross sections along the available wavelengths for a given temperature.
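A minimal numerical sketch of such a thermally averaged cross section is given below; the level energies E_vj, the statistical weights g_vj and the state-resolved cross sections sigma_vj are placeholder inputs that would be taken from the MolD datasets.

import numpy as np

def thermal_average(sigma_vj, E_vj, g_vj, T):
    """Boltzmann-weighted average of state-resolved photodissociation
    cross sections over rovibrational levels.

    sigma_vj : array (nstates, nwavelengths), cross sections per (v, J) state
    E_vj     : array (nstates,), level energies in joules
    g_vj     : array (nstates,), statistical weights, e.g. 2J + 1
    T        : temperature in kelvin
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    weights = g_vj * np.exp(-E_vj / (k_B * T))
    return (weights[:, None] * sigma_vj).sum(axis=0) / weights.sum()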
The MolD database was developed in three stages [80,81]. The first, completed at the end of 2014, was characterized by the construction of the service for all photodissociation data for hydrogen H 2 + and helium He 2 + molecular ions, together with the development of a web interface and some utility programs. In stage 2, completed at the end of 2016, averaged thermal photodissociation cross sections for H 2 + and He 2 + molecular ions were added to MolD, as well as new cross sections for processes that involve species like the diatomic molecular ions HX + , where X = Mg, Li, Na. During 2017, in stage 3, MolD implemented cross section data for processes involving MgH + , HeH + , LiH + , NaH + , H 2 + , He 2 + . In this third stage, the design of the web interface was also improved and utility programs that allow online data visualization of a wide range of data were developed. The third stage of the MolD development was completed at the beginning of 2018 and has been followed by work on a major upgrade of the MolD database, including inserting new photodissociation data (for Na 2 + , Li 2 + and LiNa + ). All of these data have possible applications in spectroscopy and low-temperature laboratory plasmas created in gas discharges, e.g., in microwave discharges at atmospheric pressure [82]. Processes that involve alkali metals are also important for the optical properties and modelling of weakly ionized layers of different stellar atmospheres, and rate coefficients are also needed as input parameters for models of the Io atmosphere [83]. The data may also be important in the investigation of metal-polluted white dwarfs and dusty white dwarfs, interstellar gas chemistry, etc. [84].
Node Maintenance
As VAMDC recently introduced the Query Store, a new paradigm for dataset citation (see Zwölf et al. [85]), an upgrade of the NodeSoftware was necessary at the Belgrade server. In order for the latest pull of the NodeSoftware repository (v12.7) from GitHub to work, Django was upgraded from version 1.4 to 1.11.2. The code is still running on Python 2.7, due to requirements of some other services running on the same server, but will be transferred to Python 3.x in the near future [86].
VAMDC has accepted the suggestion of the Research Data Alliance, a research community organization [87], to implement the concept of the Query Store. Now that the Query Store is enabled, each query is persisted as a unique resource (with an identifier) together with its pertinent citations, and can be recreated even if the data or schema at the host node change, as explained by Moreau et al. [88]. In this way, the connection between the citation and the dataset is straightforward, and it can be interconnected with existing scientific infrastructures via a Zenodo DOI request. This will increase the impact of data producers and will give more reliable citation of datasets.
Conclusions
This review presents the continuation of the work performed on database development at the Serbian Virtual Observatory. SerVO is now addressing the challenge of upgrading software and continuously improving data processing. The changes since the last publication in 2017 include data for new targets like methane, hydrogen sulfide and rare gas atoms. The upgrades and new standards for the databases include:
• Developing the VAMDC Portal as a major enabler of atomic and molecular data citation;
• Python and Django updates;
• Installing the Query Store on the VAMDC node, which could have a plan store for holding the execution plan information and a runtime stats store for carrying the execution statistics information;
• XSAMS evolution to deal with Big Data (resources to be accessed by diverse client platforms across the network; generating and transferring data over a network without requiring human-to-human or human-to-computer interaction; providing security and data quality; etc.).
In this paper, as examples of exploitation of the datasets in the database, we have reviewed the available results on elastic scattering of electrons by the methane molecule, for both experiments and theoretical treatments, covering wide ranges of incident energies and scattering angles. By comparing the collected datasets for methane we have been able to show that, although it has been extensively studied in the past, there is a need for new measurements in the intermediate electron impact energy range. Hydrogen sulfide data should be contrasted with data for the water molecule, and that will be one of our future goals. Data for rare gas atoms, especially helium and argon, may serve as reference gases with well-established cross sections in the relative flow method for placing unknown cross sections on an absolute scale.
We have also shown that MolD may be used to produce surface plots of the averaged cross sections and rates for photodissociation of HLi + , as well as averaged cross sections for photodissociation of the hydrogen molecular ion H 2 + as a function of λ and T. | 2019-01-22T08:09:25.052Z | 2019-01-14T00:00:00.000 | {
"year": 2019,
"sha1": "af059aa0cd69eaff46d350fd12927b5101b3e2a2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-2004/7/1/11/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2af58bad51db85289c96a73678bceb8da4c1d242",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
237469555 | pes2o/s2orc | v3-fos-license | Methane and Carbon Dioxide Hydrate Formation and Dissociation in Presence of a Pure Quartz Porous Framework Impregnated with CuSn12 Metallic Powder: An Experimental Report
Hydrate formation and dissociation processes were carried out in the presence of a pure quartz porous medium impregnated with a metallic powder made with a CuSn12 alloy. Experiments were firstly made in the absence of that powder; then, different concentrations were added to the porous medium: 4.23 wt.%, 18.01 wt.%, and 30.66 wt.%. Then, the hydrate dissociation values were compared with those present in the literature. The porous medium was found to act as an inhibitor in the presence of carbon dioxide, while it did not alter methane hydrate, whose formation proceeded similarly to the ideal trend. The addition of CuSn12 promoted the process significantly. In particular, in concentrations of up to 18.01 wt.%, CO2 hydrate formed at milder conditions until it moved below the ideal equilibrium curve. For methane, the addition of 30.66 wt.% of powder significantly reduced the pressure required to form hydrate, but in every case, dissociation values remained below the ideal equilibrium curve.
Introduction
Natural gas hydrate (NGH) is an ice-like solid compound composed of a crystalline structure based on water molecules, which contains natural gas molecules [1]. Hydrate compounds were discovered during the 18th century when, in 1778, Sir Joseph Priestley carried out cold experiments in his laboratory in Birmingham and discovered that, if the temperature was kept close to the ice point but in any case above it, a mixture containing water and vitriolic air (SO 2 ) was able to pass from the liquid to the solid phase. Since its discovery, the history of gas hydrate has passed through three distinct periods. During the first, hydrate compounds were simply considered a scientific curiosity; thus, scientific production remained limited. From 1934, owing to its capability of causing gas pipeline blockages, gas hydrate began to be studied with the aim of preventing its formation and avoiding the consequent issues for the natural gas industry. Finally, in the mid-1960s, scientific interest in gas hydrate rose due to the possibility of making it a new potential energy source. In this period, the first natural reservoirs were discovered and the first estimates of the amount of natural gas contained in those deposits were made. To date, these estimates still differ from one another, but in all cases the quantity of natural gas present in hydrate was found to be enough to produce more than twice the energy that can still be produced from all conventional energy sources put together [2,3]. NGH reservoirs are mainly sited in offshore sediments (≈97%) and in permafrost (≈3%); the most significant sites were found in the South China Sea. The present work deals with gas hydrate formation, with methane and carbon dioxide as guest molecules, in pure quartz sand and with the addition of a metallic CuSn12 powder. Tests were performed in a lab-scale reactor, and different powder concentrations were used. Experiments were carried out in order to establish whether a specific metallic powder is able to affect hydrate formation and dissociation conditions and, in a positive case, to define its effect.
Hydrate compounds were first formed and then dissociated in order to produce the respective equilibrium values, which were consequently compared with values present in the literature on CH 4 and CO 2 hydrate equilibrium. To the best of our knowledge, there is currently a substantial lack of equilibrium values in the literature in the presence of this compound, while such information could be extremely useful for better understanding the interaction mechanism of metallic powders with hydrate formation and for potential practical applications. In particular, the present paper aims to provide an initial investigation of the potential of realizing tanks and cylinders having an internal highly porous metallic lattice, via 3D printing technologies, in order to improve gas storage efficiency, thus increasing the energy density per unit of volume.
Experimental Apparatus
The lab-scale experimental apparatus used in this work to perform hydrate formation and dissociation has already been used in previous research, and a detailed description of it can be found elsewhere in the literature [21,24]. The reactor consists of a 316SS cylindrical chamber with an internal volume of 949 cm 3 (diameter 7.3 cm, height 22.1 cm). This material was chosen for its corrosion resistance, also in severe applications [25][26][27]. More detailed information about the geometry is provided in Table 1. The end surfaces are closed with two flanges, and an equal number of spirometallic gaskets (model DN8U PN 10/40 316-FG C8 OR) were inserted to avoid any gas leakage. Gas injection may occur from either the upper or the lower flange. Usually, when replacement tests are performed, methane is injected from the bottom, while carbon dioxide is injected from the top. Here, both gases (in their respective tests) were injected from the bottom in order to guarantee better diffusion of the gaseous molecules inside the sand pores.
Temperature was monitored with four Type K thermocouples, having class accuracy 1 and positioned at four different depths, as shown in Figure 1.
The thermocouples are named after their respective depths: T02, T07, T11, and T16. The temperature-monitoring system was established according to Wang [28][29][30]. Temperature was controlled and varied externally by inserting the reactor in a thermostatic bath directly connected to a chiller, model GC-LT. Figure 2 shows the 316SS reactor and the thermostatic bath, internally equipped with a double copper coil to allow heat exchange between the bath and the refrigerating fluid (glycol), together with a scheme of the completely assembled experimental apparatus. Pressure was measured with a digital manometer, model MAN-SD, having an accuracy of ±0.5%. All sensors were connected to LabVIEW software through a National Instruments data acquisition system used for monitoring and recording the data.
Materials
Ultra-High-Purity (UHP) methane and carbon dioxide, with purities of 99.997% and 99.999%, respectively, were used for gas hydrate formation, in addition to pure demineralized water and sand. This latter compound consists of pure quartz spheres with a diameter of 100 µm. The grain porosity is 34% and was measured with a porosimeter, model Thermo Scientific Pascal 140. More in depth, the reactor was filled with 744 cm 3 of sand and 236 cm 3 of water; the remaining space was kept free for gas injection. In addition to those compounds, a metallic CuSn12 powder was also used, whose specifications are described in the next section.
CuSn12 Powder
A copper-tin powder alloy produced by a gas-atomization process was used in this work. The alloy's nominal chemical composition (expressed in weight %) is 12% Sn and 88% Cu. The morphological characterization of the CuSn12 powder was carried out by means of a high-resolution field-emission scanning electron microscope (FE-SEM Zeiss LEO-1530). The powder particles appear generally spherical (Figure 3), with sizes in the range of 5 to 20 µm (the average diameter is approximately 11 µm). Moreover, ICP-MS analyses were carried out to detect the possible release of ions in water; the results proved the complete absence of any ion release.
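As a rough point of reference for how much surface such a fine powder exposes, the geometric specific surface area of monodisperse spheres can be estimated as SSA = 6/(ρd). The short sketch below applies this to the measured mean diameter; the alloy density used (~8.8 g/cm 3 , typical for CuSn12 bronze) is an assumed literature value, not a quantity measured in this work.

```python
# Back-of-envelope estimate: geometric specific surface area of (roughly
# spherical) CuSn12 particles from the mean SEM diameter. The density is an
# assumed literature value for CuSn12 bronze, not measured here.
rho_g_cm3 = 8.8        # assumed alloy density (g/cm^3)
d_um = 11.0            # mean particle diameter from SEM (micrometers)

# With rho in g/cm^3 and d in micrometers, 6/(rho*d) comes out in m^2/g.
ssa_m2_per_g = 6.0 / (rho_g_cm3 * d_um)
print(f"geometric SSA ≈ {ssa_m2_per_g:.3f} m^2/g")   # ≈ 0.062 m^2/g
```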
Previous experiments led us to assert an inhibiting action of this compound on CO 2 hydrates.
In the presence of CO 2 as a guest compound, the results found in the literature differ from those observed during our experiments: Cu and CuO have been defined as kinetic promoters. However, this discrepancy might be due to the presence of CH 4 in the initial mixture. In conclusion, Cu and CuO particles are known to act as kinetic promoters for methane hydrates, while the experimental evidence led us to affirm an opposite action on carbon dioxide hydrates.
The chemical composition, method of production, and current applications of CuSn12 powder explain why this compound was selected for the present research [33].
Methods
Experiments were carried out in the presence of a pure quartz porous sand. That porous medium was used to extend gas hydrate formation to the whole volume rather than only to the region at the gas-liquid interface. In addition, the sand allowed us to distribute the CuSn12 grains along the whole reactor; without it, the powder would have settled on the bottom. Two tests were carried out without the metallic powder described above, while in the others, three different concentrations were used in order to also investigate the relation between hydrate formation and powder concentration. In particular, the following quantities were used: 50, 250, and 500 g. Throughout the manuscript, those quantities are expressed as percentages of the whole amount of solid material present inside the reactor before hydrate formation, i.e., the sum of quartz sand and metallic powder. The three quantities indicated above correspond to 4.23 wt.%, 18.01 wt.%, and 30.66 wt.%. This choice was preferred because the chemical additive was in the solid state, rather than the liquid or gaseous state in which additives are usually applied.
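For clarity, the reported concentrations follow directly from wt.% = m_powder/(m_sand + m_powder). The snippet below reproduces them; note that the sand mass is not stated explicitly in the text, so the ~1132 g used here is back-calculated from the 50 g → 4.23 wt.% pair and is therefore an inferred value.

```python
# Reproducing the reported powder concentrations. The sand mass is not given
# in the paper; it is back-calculated here from 50 g -> 4.23 wt.% and is
# therefore an assumption.
sand_mass_g = 50 / 0.0423 - 50          # ≈ 1132 g of quartz sand (inferred)

for powder_g in (50, 250, 500):
    wt_pct = 100 * powder_g / (sand_mass_g + powder_g)
    print(f"{powder_g:>3} g CuSn12 -> {wt_pct:.2f} wt.% of total solid")
# ≈ 4.23, 18.09 and 30.64 wt.%, matching the rounded values in the text
```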
The metallic powder was first mixed with the silica sand until a satisfactory mixture was obtained; the reactor was then filled with it and, immediately after, with water. In Figure 4, the mixture of sand and powder is shown. The difference in density and granulometry made the achievement of a completely homogeneous phase difficult: the metallic powder spread throughout the medium, but its concentration was not completely uniform and, as can be seen in Figure 4 (at right), it reached higher concentrations in some confined portions.
The chiller was regulated in order to establish an internal temperature of about 1-3 °C inside the reactor. Gas injection occurred slowly in order to increase the pressure at a constant gradient. In tests carried out with methane, the initial pressure was fixed at 54-56 bar, while in tests involving carbon dioxide, it ranged from 38 to 40 bar. The presence of feasible conditions immediately led to hydrate production, whose formation was evidenced by a constant pressure decrease. Initially, the temperature increased significantly due to the heat released by water cage formation; it then decreased steadily until reaching thermal balance with the system. As soon as the pressure stabilized, the formation process was considered over. Following that, the chiller was switched off, and thermal energy was provided from outside to increase the temperature again and cause hydrate dissociation. The strategy adopted to increase the temperature during the dissociation phase was selected according to previous research, which proved that the experimental results produced in this way closely matched the equilibrium conditions widely described in the literature. In addition, further experiments were carried out in which the temperature increase was regulated externally and a lower temperature gradient was established. The comparison between the results obtained in this way and those presented in this article again proved the accuracy of the method adopted here. This latter phase was used to obtain equilibrium values for both types of guest compounds in the presence of CuSn12 powder.
Results
As previously explained, methane and carbon dioxide hydrate formation were first tested in the presence of pure silica sand and then in the presence of three different concentrations of CuSn12 powder: 4.23 wt.%, 18.01 wt.%, and 30.66 wt.%. Hydrate compounds were formed and then left free to dissociate in order to produce equilibrium values. A comparison between the results produced here and equilibrium values available elsewhere in the literature is provided.
This section has been divided into three parts: first, methane hydrate formation tests are described and represented in pressure-temperature diagrams; then, a similar section is dedicated to tests involving carbon dioxide. Finally, the results produced with these two compounds are compared in order to verify whether the use of this CuSn12 powder produced similar effects in both cases. Table 2 shows equilibrium values for methane hydrate both in the absence and in the presence of the CuSn12 powder previously described. According to the method established in the literature, those equilibrium values were taken during hydrate dissociation. In fact, the dissociation phase followed an almost linear trend and was less affected by external parameters, unlike hydrate formation, whose temperature trend suffered from the appearance of new hydrate nucleation sites, which caused local and delayed temperature peaks, as well as from the tendency of the whole system to reach thermal balance with the surrounding experimental apparatus. Figures 5-12 describe the pressure and temperature trends over time and the pressure-temperature trend observed during tests involving methane. For each concentration analyzed, one test is shown. In the first type of diagram, the values measured by all thermocouples are shown in order to understand where hydrate formation mainly occurred and to verify whether secondary hydrate nuclei appeared during the massive growth phase. In the diagrams where pressure is described as a function of temperature, the temperature was calculated as the average of the values measured by each thermocouple. The temperature trends over time revealed that hydrate formation mainly affected the lowest portion of the volume available for the process: the most significant variations in temperature, associated with the exothermic nature of such reactions, were observed with thermocouples T16 and T11. However, the remaining volume was also involved, and several secondary temperature peaks were observed with T07 and T02. In the first experiment shown, carried out without the CuSn12 powder, the first peak, i.e., the main temperature variation that occurs immediately after hydrate formation, was not observed as soon as the thermodynamic conditions made hydrate formation feasible; instead, it occurred with a certain delay. This aspect is commonly attributed to the stochastic behavior of the process. The corresponding diagram describing the pressure trend over time shows that no decrease occurred before that temperature variation. In the same experiment, a secondary temperature peak was observed by all thermocouples, which made the pressure drop more intense. In the remaining part of the process, only thermocouple T11 measured further temperature peaks, which were less intense than the previous one. In every case, a variation in the pressure reduction gradient was observed in correspondence with all those peaks.
In contrast, in the last test, where the CuSn12 powder was present inside the reactor at a concentration of 30.66 wt.%, secondary and delayed temperature peaks were observed in the regions associated with T02 and T07, i.e., near the top. As expected, those peaks were immediately followed by a more intense pressure drop than before.
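A minimal sketch of how such pressure-temperature diagrams can be assembled from the logged data is given below: for every timestamp, the four thermocouple readings are averaged and paired with the reactor pressure, and only the dissociation branch is retained for equilibrium estimation. The file name and column labels are hypothetical.

```python
# Sketch of P-T diagram assembly from a sensor log. File name, column labels
# and the "phase" flag are hypothetical, not the authors' actual data format.
import pandas as pd

log = pd.read_csv("run_30wt.csv")                 # hypothetical log file
thermocouples = ["T02", "T07", "T11", "T16"]

log["T_avg_C"] = log[thermocouples].mean(axis=1)  # average of the four probes
pt_curve = log[["T_avg_C", "P_bar"]]              # points of the P-T diagram

# Equilibrium values are read off the dissociation branch only, i.e., samples
# taken after the chiller was switched off and the reactor was warmed.
dissociation = pt_curve[log["phase"] == "dissociation"]
```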
Methane Hydrate Formation Tests
The detection of nucleation sites by all thermocouples proved that methane diffused well throughout the reactor and, similarly, that hydrate formation mainly occurred in the sand pores. The first observation justifies the choice of using a porous medium to provide an internal skeleton able to guarantee diffuse hydrate formation; otherwise, only the gas-liquid interface would have been affected by the reaction. The sand pores created many small gas-liquid interfaces, in particular because methane was injected from the bottom and thus had to pass through the entire medium before reaching the upper surface. Considering the time spent thoroughly mixing the sand with the CuSn12 powder, whose particle size is much smaller than that of the sand grains, and taking into account the results obtained in this work (described in the following lines), we believe that the powder adhered to the sand grains, thus influencing hydrate formation both inside and outside the sand pores.
Diagrams showing the relation between pressure and temperature provided useful information for understanding whether the powder intervened in the formation process and, in positive cases, what effects it had. The first of those diagrams refers to hydrate formation carried out in the absence of CuSn12 powder. Here, the continuous black line, related to hydrate dissociation, remained completely below the equilibrium curve. This means that the particular silica-based porous medium used during the experiments and previously described in Section 2 acted as a promoter for methane hydrate formation. More in depth, this promoting effect was observed to increase with temperature: in Figure 6, as in all other P-T diagrams, the two curves move away from each other at higher temperature values. That deviation was also found in the presence of powder; however, in this latter case, further considerations were required.
Diagrams describing tests made with CuSn12 powder concentrations of 4.23 and 18.01 wt.% led to results very similar to those obtained in its absence: hydrate dissociation values were slightly shifted toward lower pressures and higher temperatures, proving again that milder conditions were needed to form hydrate. However, no meaningful variation in the process conditions was found. The addition of powder at the concentrations reported above did not affect hydrate formation, neither inhibiting nor promoting it. The horizontal trend present in Figure 10 is not particularly meaningful: once the maximum pressure value was reached, the dissociation diagram could only assume that horizontal trend. The values reported in Table 2 provide additional confirmation, as all the values were similar to one another, and in all cases the observed promoting effect can be completely associated with the sand properties.
A completely different behavior was observed in the last experiment, where the CuSn12 powder was used at a concentration of 30.66 wt.%. For temperature values higher than 3.8-4.0 °C, the pressure and temperature conditions required to form hydrate became significantly milder than in previous experiments. Thus, at this last concentration, the powder acted as a promoter for methane hydrate formation. As in previous tests, the distance between the ideal equilibrium curve and the experimental values increased with temperature. Table 2 confirms the tendency just described.
Considering that the promoting effect was observed only in the presence of a certain concentration, it cannot be attributed to the chemical composition of the powder or to the geometrical characteristics of its grains. As can be seen in Figure 13, the powder acted as a cementing element for the sand grains, thus hindering methane escape. Figure 13. As soon as the test with 30.66 wt.% CuSn12 powder finished, the reactor was opened, and the medium was found to be much more compact than in the absence of powder.
In addition, that powder surely increased the heat transfer rate inside the porous medium, thus giving a further contribution to hydrate formation.
Carbon Dioxide Hydrate Formation Tests
As in the previous section, a table describes the hydrate equilibrium values measured in all tests (see Table 3); then, one experiment for each concentration of CuSn12 powder (0, 4.23, 18.01, and 30.66 wt.%) is described by showing the trend over time of its thermodynamic parameters, together with a corresponding pressure-temperature diagram. Those diagrams are visible in Figures 14-21. All experiments were carried out by fixing an initial internal pressure in the range of 37-40 bar, lower than the corresponding range established for the methane hydrate formation tests, due to the milder conditions required by systems involving carbon dioxide to form hydrate. Table 3. Equilibrium values for carbon dioxide hydrate in the presence of CuSn12 powder, analyzed at four different concentrations.
As for methane, carbon dioxide hydrate formation mainly affected the portion of the internal volume corresponding to thermocouples T11 and T16, whose readings remained higher than those of the other devices in all tests.
Secondary temperature peaks were observed in almost all experiments. Figure 14 shows the temperature and pressure trends over time measured in the absence of CuSn12 powder: thermocouple T11 recorded an initial peak more extended than those of the other devices. Then, T16 and T07 registered two secondary peaks, which occurred simultaneously. However, those events are independent of each other: thermocouple T11, which is located between T16 and T07, did not register any variation, proving that the thermocouples revealed two different nucleation sites occurring at the same time.
In contrast to the other tests, in the presence of 18.01 wt.% CuSn12 powder, the reaction started near the top of the reactor, in correspondence with T02 and T07; then, the process involved the whole reactor, and the other two thermocouples started measuring temperature values higher than the others. In the same test, T11 also showed a secondary peak, which was associated with the formation of a new hydrate nucleation site. A similar trend was finally observed in the last tests, where the powder concentration was about 30.66 wt.%: here, too, secondary peaks occurred, but they belonged to thermocouples T02 and T07, meaning that hydrate formation first involved the lowest portion of the reactor and only later the whole volume. A first comparison between methane and carbon dioxide hydrate formation, based on the pressure and temperature trends over time, did not reveal any substantial difference: hydrate formation occurred throughout the reactor, even if it mainly affected the regions corresponding to T11 and T16, and secondary nucleation sites formed in almost all experiments, regardless of the quantity of CuSn12 powder used.
On the contrary, important differences were noticed in the diagrams describing the pressure trend as a function of temperature. In contrast to what was observed for methane, the porous medium acted as an inhibitor of carbon dioxide hydrate formation. Figure 15 clearly shows that the values describing hydrate dissociation remained above the ideal equilibrium curve during the whole test.
For this guest compound, too, the formation conditions became milder with increasing temperature.
The same porous medium acted as a promoter for methane hydrate, while it inhibited carbon dioxide hydrate formation. The main reason for this difference can be found in the sand grains and, in particular, in their size.
Borchardt et al. [60] explained how hydrate growth in inner pores is completely different from that in bulk water: the lattice formed in micro- and mesopores is generally smaller, due to confinement effects. For methane, nano-hydrate structures were proved to grow in micropores, with a stoichiometry of about one methane molecule trapped for every two water molecules involved [61]. Both guest compounds studied in this work usually form sI hydrate; however, the CO 2 molecule is larger than that of CH 4 and may thus encounter more difficulty in forming hydrate inside inner pores.
The addition of a small amount of CuSn12 powder to the porous medium did not cause any significant change: the hydrate dissociation values remained above the ideal equilibrium curve. However, in this case the dissociation values were slightly shifted toward lower pressures compared to the absence of powder, indicating a slight promoting effect. This can easily be seen in Table 3, where the results measured during each test are shown and compared.
The introduction of larger quantities of powder into the reactor led to completely different results. Hydrate formation and dissociation values became significantly milder, eventually moving below the ideal equilibrium curve; in particular, the dissociation values remained completely below it. The relation with temperature verified in previous experiments was also observed in these cases, where it became even more pronounced. Few differences were found between hydrate formation in the presence of 18.01 wt.% and 30.66 wt.% CuSn12 powder: Table 3 shows again that the greater usage of powder, 30.66 wt.%, led to milder dissociation values.
These two concentrations corresponded to the introduction of 250 g and 500 g of powder, respectively, into the reactor. In the latter case, doubling the quantity of powder produced only a weak additional promoting effect. Consequently, further additions were not considered useful for improving the promoting effect and were not taken into account.
As for methane hydrate, the results obtained in this work were unexpected. Very few data about hydrate formation in the presence of metal alloys are currently available in the literature and, in all cases, copper is considered an inhibitor of the process [62,63].
Finally, based on previous research [64], a brief description of the formation process is proposed here. The process can be divided into two main phases: hydrate nucleation and massive growth. These two phases can easily be observed in the pressure-temperature diagrams, because they usually assume different trends; Figures 12, 15 and 19 show this well. During nucleation, pressure decreased slowly due to the simultaneous formation and dissociation of hydrate. The main pressure drop occurred during the growth phase, during which the pressure decreased drastically even though the temperature did not show equally relevant variations. This happens because the hydrate crystals inside the reactor reach the so-called "critical size": their spontaneous dissociation ceases, while their growth continues. This behavior can be explained with the Labile Cluster Theory [1,64], which describes the formation of hydrate nuclei during the first phase, i.e., hydrate nucleation.
According to this theory, liquid water molecules initially absorb the gaseous molecules and form primordial clusters, composed of a guest molecule and 20-24 water molecules, finally generating the first unstable 5 12 cages. These structures may continue their growth via collisions with other structures, or dissociate. When vertices are shared during a collision, small cubic sI units are formed; conversely, when faces are shared, small cubic sII cages are produced. Again, these structures may dissociate or continue to grow. This phase continues until the hydrate nuclei reach the critical size; then, massive growth occurs. In the figures mentioned above, the difference between nucleation and massive growth is clearly visible: nucleation is less predictable, because it depends on numerous variables and is consequently a stochastic process. Conversely, massive growth appears as a drastic decrease in pressure, which may occur as soon as the equilibrium conditions are reached (obviously, the induction time must be considered, because the transition phase, during which the system passes through the so-called metastable region, cannot be neglected). Such characteristics are particularly helpful in defining the extent of the inhibiting/promoting effect associated with the presence of CuSn12 particles.
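Because the transition from nucleation to massive growth shows up as a sudden steepening of the pressure decrease, it can in principle be located automatically in the logged data. The sketch below implements a crude change-point criterion on dP/dt; the threshold factor is an arbitrary placeholder, not a value used in this work.

```python
# Illustrative sketch (not from the paper): locating the onset of massive
# growth as the point where the pressure-decrease gradient suddenly steepens.
# The threshold factor is an arbitrary placeholder.
import numpy as np

def growth_onset(time_h: np.ndarray, pressure_bar: np.ndarray,
                 factor: float = 3.0) -> float:
    """Return the time at which |dP/dt| first exceeds `factor` times the
    median gradient seen so far (a crude change-point criterion)."""
    dpdt = np.gradient(pressure_bar, time_h)
    for i in range(5, len(dpdt)):
        baseline = np.median(np.abs(dpdt[:i]))
        if abs(dpdt[i]) > factor * max(baseline, 1e-6):
            return float(time_h[i])
    return float("nan")  # no clear massive-growth onset found
```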
Direct Comparison between Results Reached with Methane and Carbon Dioxide
In this concluding section, the dissociation values measured during all experiments, with either methane or carbon dioxide as the guest compound, are shown in a single diagram in order to provide a direct comparison among those values and, where possible, to suggest potential applications.
In that figure, two dotted lines indicate the ideal methane (upper) and carbon dioxide (lower) hydrate equilibrium curves. Warm colors are used for experiments involving methane, while cold colors describe tests made with carbon dioxide. Figure 22 shows all the results produced in this work, together with the ideal equilibrium curves for both compounds and the values already shown in previous diagrams.
As previously explained, in all the tests methane hydrate formation occurred at milder conditions than equilibrium, while a high dosage of CuSn12 powder was necessary to reach a similar result in the presence of carbon dioxide. In Figure 22, different indicators represent tests made with each powder concentration. The quartz-based porous medium drastically reduced the distance between the formation conditions of hydrate containing these two gaseous compounds. At temperatures lower than 4.0-4.5 °C, CO 2 hydrate required more severe conditions than CH 4 hydrate. The introduction of 4.23 wt.% CuSn12 powder did not cause significant variations, while at higher concentrations, the values associated with CO 2 hydrate formation dropped drastically below the ideal equilibrium curve. Similar variations in tests involving methane were found only with 30.66 wt.% powder and, as in the CO 2 -based tests, they became more and more evident as temperature increased.
Based on the results discussed in this work, the present metallic powder might be used for gas storage. The maximum concentration tested was about 30.66 wt.%; the remaining solid medium consisted of pure quartz sand and was used to guarantee a homogeneous diffusion of CuSn12 particles along the whole reactor. With appropriate treatments, a similar powder might be used to directly build a highly porous metallic lattice, thus allowing the use of higher concentrations and optimization of the promoting effect. In that sense, the powder described in these pages may contribute to improving the methane storage capacity of tanks and cylinders, thus increasing the energy density of their content.
Conclusions
The present work investigated methane and carbon dioxide hydrate formation in the presence of a porous pure quartz medium and a powder consisting of a CuSn12 metallic alloy. Experiments were carried out with the aim of establishing whether this latter material is able to intervene in hydrate formation and dissociation and, in positive cases, of defining the extent and modalities of its action. A small-scale reactor was used to produce hydrate, first in the absence of powder and then with powder added at concentrations of 4.23, 18.01, and 30.66 wt.%.
The first experimental evidence was provided by the porous medium, which acted as a promoter for methane hydrate, while it inhibited the process in the presence of carbon dioxide. That behavior was related to the geometrical properties of sand grains and their pores.
In both cases, the CuSn12 powder was found to promote hydrate formation. When methane was used as the guest compound, the promoting effect was observed only at a concentration of about 30.66 wt.%; however, it was significant and moved the hydrate dissociation values to milder thermodynamic conditions. The same occurred in the presence of carbon dioxide, with the only difference that a lower concentration, 18.01 wt.%, also promoted hydrate formation. In this latter case, while the sand properties initially fixed the hydrate dissociation conditions above the ideal equilibrium, the addition of powder brought those values below it.
Based on the results obtained in this work, it is expected that higher concentrations may lead to even milder dissociation conditions. Hence, there is the possibility, which clearly needs further research, of using such powder to build, with the aid of 3D printing technologies, highly porous lattices inside dedicated tanks, in order to improve gas storage efficiency and the density of energy stored per unit of volume. Further research will focus on investigating why this metallic powder acted as a promoter for hydrate formation. In particular, analyses of the water composition after the experiments will be performed to detect potential ion release into the water. Alongside the water, the solid medium will also be further investigated to determine the variation in its thermal conductivity due to the addition of the metallic powder. | 2021-09-11T06:17:04.304Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "718f681a8575035e164b33d29995699bfe0de21f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/14/17/5115/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3b1b877c0c1db9355f90e96a5f9887d56b39c372",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220043826 | pes2o/s2orc | v3-fos-license | Type IV Pili-Independent Photocurrent Production by the Cyanobacterium Synechocystis sp. PCC 6803
Biophotovoltaic devices utilize photosynthetic organisms such as the model cyanobacterium Synechocystis sp. PCC 6803 (Synechocystis) to generate current for power or hydrogen production from light. These devices have been improved by both architecture engineering and genetic engineering of the phototrophic organism. However, genetic approaches are limited by lack of understanding of cellular mechanisms of electron transfer from internal metabolism to the cell exterior. Type IV pili have been implicated in extracellular electron transfer (EET) in some species of heterotrophic bacteria. Furthermore, conductive cell surface filaments have been reported for cyanobacteria, including Synechocystis. However, it remains unclear whether these filaments are type IV pili and whether they are involved in EET. Herein, a mediatorless electrochemical setup is used to compare the electrogenic output of wild-type Synechocystis to that of a ΔpilD mutant that cannot produce type IV pili. No differences in photocurrent, i.e., current in response to illumination, are detectable. Furthermore, measurements of individual pili using conductive atomic force microscopy indicate these structures are not conductive. These results suggest that pili are not required for EET by Synechocystis, supporting a role for shuttling of electrons via soluble redox mediators or direct interactions between the cell surface and extracellular substrates.
INTRODUCTION
Electron transfer and redox reactions form the foundation for energy transduction in biological systems (Marcus and Sutin, 1985). Some microbes have the capacity to transfer electrons beyond their cell wall to extracellular acceptors (Hernandez and Newman, 2001), a function that may be important in microbial ecology (Lis et al., 2015;Polyviou et al., 2018) and has been exploited in bioelectronic applications. Although electron transfer between redox-active sites separated by less than 1.6 nm is well understood to occur via electron tunneling described by Marcus theory, little is known about the mechanisms of electron transfer over larger distances, i.e., nanometers to micrometers, observed in biological ecosystems (Gray and Winkler, 2005). Long-range electron transfer in various microbes may employ soluble redox mediators, conductive bacterial nanowires or pili (Reguera et al., 2005;Marsili et al., 2008;Brutinel and Gralnick, 2012;Kotloski and Gralnick, 2013;Yang et al., 2015;Ing et al., 2018;Heidary et al., 2020). Furthermore, an understanding of this activity forms the foundation for the development of microbial fuel cells and photobiological electrochemical systems, devices that employ microbes to generate electricity (Rabaey and Verstraete, 2005;Kracke et al., 2015).
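To illustrate why single-step tunneling cannot account for micrometer-scale transfer, the snippet below evaluates the standard exponential distance decay k = k0·exp(−βr). The prefactor and decay constant are typical literature magnitudes for protein electron transfer, not values taken from this paper; even at the ~1.6 nm limit cited above, the rate has already fallen by roughly eight orders of magnitude.

```python
# Illustrative only: exponential distance decay of non-adiabatic electron
# tunneling, k = k0 * exp(-beta * r). k0 and beta are typical literature
# magnitudes (beta ~ 1.0-1.4 per angstrom for proteins), not values from
# this paper.
import math

def tunneling_rate(r_angstrom: float, k0: float = 1e13,
                   beta: float = 1.1) -> float:
    """Rate (s^-1) at donor-acceptor edge-to-edge distance r (angstrom)."""
    return k0 * math.exp(-beta * r_angstrom)

for r in (5, 16, 30):   # 16 angstrom = 1.6 nm, the limit cited in the text
    print(f"r = {r:>2} angstrom -> k ≈ {tunneling_rate(r):.2e} s^-1")
```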
Two distinct mechanisms have been hypothesized to account for extracellular electron transfer (EET) in anaerobic, heterotrophic bacteria: utilization of soluble, diffusing redox shuttles like flavins to transfer electrons from the cellular interior to the extracellular surface (Watanabe et al., 2009;Glasser et al., 2017) and direct interaction between a redox-active component on the cell surface and the extracellular target (Shi et al., 2009). The latter has been proposed to proceed via redox proteins on the cell surface (e.g., multiheme cytochromes) or via extracellular appendages that have come to be known as bacterial nanowires (Gorby et al., 2006;El-Naggar et al., 2010). The composition of these nanowires is hypothesized to vary between different organisms; recent work by El-Naggar and coworkers has shown that the nanowires of Shewanella oneidensis MR-1 are extensions of EET-protein-containing outer membrane that appear to form from chains of vesicles (Pirbadian et al., 2014). On the other hand, Lovley and coworkers reported that the nanowires of electrogenic Geobacter sp. are conductive pili (Reguera et al., 2005;Holmes et al., 2016), whereas recent studies have shown that Geobacter sulfurreducens produces OmcS cytochrome filaments that are distinct from type IV pili (Tfp) (Filman et al., 2019;Wang et al., 2019). For a recent review of Geobacter protein nanowires see Lovley and Walker (2019). However, details about the types of charge carriers and the exact mechanisms of interfacial electron transport within conductive appendages remain unclear.
Biophotovoltaic devices (BPVs) interconvert light and electrical energy using a photosynthetic organism. The most common devices employ oxygenic phototrophs to harvest light energy and transfer electrons produced by water oxidation to extracellular acceptors, generating power or hydrogen (Zou et al., 2009; Pisciotta et al., 2010; McCormick et al., 2011, 2015; Bradley et al., 2012; Lea-Smith et al., 2015; Saper et al., 2018; Tschörtner et al., 2019). Cyanobacteria, green algae, and plants have been used to generate power in BPVs, with much work performed using the model freshwater cyanobacterial species Synechocystis sp. PCC 6803 (hereafter Synechocystis). Current production in BPVs containing Synechocystis is largely dependent on illumination, and previous studies employing chemical and genetic inhibition indicate that water splitting by Photosystem II (PSII) provides the majority of electrons (Pisciotta et al., 2011; Cereda et al., 2014). Improvements of BPVs based on advances in device architecture, electrode materials, proton exchange membranes, and the use of mediators and biofilms have been reported (Thorne et al., 2011; Bombelli et al., 2012, 2015; Call et al., 2017; Rowden et al., 2018; Wenzel et al., 2018; Wey et al., 2019), but improvements arising from engineering of the phototrophs themselves have been limited to the genetic removal of competing electron sinks (McCormick et al., 2013; Saar et al., 2018), constrained by a lack of understanding of how photosynthetic electrons are transferred from the photosynthetic apparatus to extracellular acceptors.
Synechocystis cannot produce pili in the absence of the leader peptidase/methylase encoded by the pilD gene (Bhaya et al., 2000). Herein, the rates of EET by a ΔpilD mutant are compared to those of the wild-type organism by measuring photocurrent production in our previously described mediatorless bioelectrochemical cell (Cereda et al., 2014). Photocurrent production by the wild-type and ΔpilD cells is not significantly different, suggesting that pili do not play a role in photocurrent generation or EET by Synechocystis, at least under the conditions investigated here. Additionally, conductivity measurements of wild-type Synechocystis pili using atomic force microscopy (AFM) found no evidence of conductivity in these structures. Our results support the hypothesis that redox mediator shuttling may be the major mechanism of photocurrent production by cyanobacteria (Saper et al., 2018; Wenzel et al., 2018).
Growth was monitored by measuring the optical density at 750 nm (OD750).
Deletion of pilD (slr1120)
For deletion of pilD, the central portion of the slr1120 open reading frame was replaced with a chloramphenicol acetyl transferase (cat) gene by allele exchange using a plasmid (pICJH4) constructed by Gibson assembly (Gibson et al., 2009) of three PCR products (two amplified from Synechocystis genomic DNA and the third from pACYC184) together with the 2.6 kb EcoRI-HindIII restriction fragment of pUC19. The allele exchange cassette comprised a first region of 685 bp of homology with the Synechocystis chromosome, including upstream flanking sequence and the first 28 codons of pilD followed by two stop codons (amplified with primers pilD-us-F and pilD-us-R), the cat cassette (amplified with primers cat-F and cat-R), and a second region of 500 bp of homology with the Synechocystis chromosome beginning with the 12th-from-last codon of pilD followed by flanking downstream DNA (amplified with primers pilD-ds-F and pilD-ds-R) (see Supplementary Table S2 for primer sequences). The pICJH4 plasmid was confirmed to be correctly assembled by automated DNA sequencing and introduced into wild-type Synechocystis by natural transformation. Recombinants were selected on plates containing 5 µg ml −1 chloramphenicol, and segregation of genome copies was achieved by sequentially increasing the chloramphenicol concentration (up to 40 µg ml −1 ). Segregation at the pilD locus was confirmed by PCR with the primer pair pilD-screen-F and pilD-screen-R.
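As a simple consistency check of the construct design, the fragment sizes quoted above can be summed to estimate the allele-exchange cassette length; the cat-cassette size below is a placeholder, since it is not stated in the text.

```python
# Illustrative layout check of the allele-exchange cassette using only the
# fragment sizes given in the text; the cat-cassette length is a placeholder,
# not the real pICJH4 fragment size.
upstream_arm_bp = 685    # upstream flank + first 28 codons of pilD + 2 stops
cat_cassette_bp = 900    # placeholder size for the cat gene fragment
downstream_arm_bp = 500  # last 12 codons of pilD + downstream flank

cassette_bp = upstream_arm_bp + cat_cassette_bp + downstream_arm_bp
print(f"expected allele-exchange cassette: ~{cassette_bp} bp "
      f"(plus the 2.6 kb pUC19 EcoRI-HindIII backbone)")
```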
RNA Isolation and RT-PCR
End-point reverse transcriptase PCR analysis of Synechocystis strains was performed as described previously for Acaryochloris marina (Chen et al., 2016). Briefly, Synechocystis cells were harvested at mid-log phase (OD750 ≈ 0.6), and total RNA was isolated by the hot TRIzol method (Pinto et al., 2009). RNA was treated with the Ambion Turbo DNA-free™ Kit to remove contaminating genomic DNA, and 100 ng was used for cDNA synthesis and PCR, which were performed in a single reaction using the MyTaq one-step reverse transcription-PCR (RT-PCR) kit (Bioline). Gene-specific primer pairs pilA1-RT-F/R, pilD-RT-F/R, or rnpB-RT-F/R were used to detect transcripts of pilA1 (124 bp), pilD (180 bp), and the reference gene rnpB (180 bp) (Polyviou et al., 2015), respectively. The reaction setup and thermocycling conditions followed the manufacturer's instructions, and 10 µl of PCR product was analyzed on a 2% (w/v) agarose gel.
Immunodetection of PilA1
Denatured whole-cell extracts were separated by SDS-PAGE on 12% Bis-Tris gels (Invitrogen) and transferred to polyvinylidene difluoride membranes (Invitrogen). Membranes were incubated with an anti-PilA1 primary antibody raised against a synthetic peptide corresponding to PilA1 residues 147-160 as described previously (Linhartová et al., 2014) and then a secondary antibody conjugated with horseradish peroxidase (Sigma Aldrich). Chemiluminescence was detected using the WESTAR® EtaC kit (Geneflow Ltd.) and an Amersham™ Imager 600 (GE Healthcare).
Oxygen Evolution and Determination of Chlorophyll Content
Oxygen evolution was measured as described in our previous work (Cereda et al., 2014). Chlorophyll was extracted with 100% methanol from cell pellets from 1 ml of culture at OD750 ≈ 0.4 and quantified spectrophotometrically according to Porra et al. (1989).
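A hedged sketch of the spectrophotometric step is given below. The coefficients are the ones commonly quoted from Porra et al. (1989) for 100% methanol extracts and should be verified against the original reference before use; the example absorbances are invented for illustration.

```python
# Hedged sketch: chlorophyll a quantification in 100% methanol extracts.
# The coefficients below are commonly quoted from Porra et al. (1989) for
# methanol; verify them against the original reference before use.
def chl_a_ug_per_ml(a665_2: float, a652_4: float) -> float:
    """[Chl a] in ug/ml of extract (Porra et al. 1989, 100% methanol)."""
    return 16.72 * a665_2 - 9.16 * a652_4

# Example with invented absorbances for a methanol extract:
print(f"{chl_a_ug_per_ml(a665_2=0.52, a652_4=0.11):.2f} ug Chl a / ml extract")
```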
Electrochemical Measurements
Electrochemical measurements were made in a three-electrode cell with carbon cloth as working electrode as described previously (Cereda et al., 2014).
Atomic Force Microscopy Imaging of Wild-Type and Mutant Cells (ΔpilD*)
Synechocystis wild-type and ΔpilD* cells grown photoautotrophically in liquid BG11 or on BG11 agar plates were collected, washed three times, and resuspended in 1 ml deionized water (centrifugation speed 3,500 × g). Aliquots of 5 µl were spotted onto a mica support and air dried. After drying, samples were imaged with an Asylum Research MFP-3D (Santa Barbara, CA, United States) atomic force microscope (AFM) in tapping mode using Tap300Al-G probes (40 N/m force constant, 300 kHz resonant frequency). The images were processed using Gwyddion software.
Scanning Electron Microscopy (SEM) Imaging
Wild-type Synechocystis and the ΔpilD* strain were grown photoautotrophically and harvested via centrifugation (3,500 × g). Cells were transferred to the carbon cloth used for electrochemical measurements, fixed onto the cloth in 50 mM sodium phosphate buffer (pH 7.2) with 2% glutaraldehyde for 30 min at room temperature, and washed three times in the same buffer for a total of 30 min. After a second fixation step for 30 min at room temperature in the same buffer plus 0.5% (v/v) osmium tetroxide, samples were washed three times with deionized water. Samples were critical-point dried with carbon dioxide (Balzers CPD020 unit), mounted on aluminum specimen stubs, and coated with approximately 15 nm of gold-palladium (Technics Hummer-II sputter-coater). Sample analysis was performed with a JEOL JSM-6300 SEM operated at 15 kV, and images were acquired with an IXRF Systems digital scanning unit.
AFM-Based Electrical Characterization of Pili
Glass coverslips (43 × 50 NO. 1 Thermo Scientific Gold Seal Cover Glass) coated with 5 nm titanium and then 100 nm gold via electron-beam evaporation were used as conductive substrates. The Au-coated coverslips were rinsed with acetone, isopropanol, ethanol, and deionized water and then dried with nitrogen prior to use. Synechocystis cells were drop-cast onto the clean conductive substrates, rinsed with sterile water, and left to dry overnight. An Oxford Instruments Asylum Research Cypher ES AFM was used for all electrical measurements. Dried samples were affixed and electrically connected to AFM disks with silver paint (Ted Pella, Inc.). The sample disks were wired to the AFM upon loading. Si probes with a Ti/Ir (5/20) coating, a resonant frequency of 75 kHz (58-97), a spring constant of 2.8 N/m (1.4-5.8), and a tip radius of 28 ± 10 nm were used (Oxford Instruments AFM probe model ASYELEC.01-R2). Electrical characterization of the pili was performed using Oxford Instruments Asylum Research Fast Current Mapping (FCM). To generate FCM images, a bias is held between the probe and substrate while, for each pixel, current and force are measured with respect to the vertical distance of consecutive probe approaches and retractions over the sample. Each approach is terminated when a user-defined force is met (a force setpoint), and each retraction is terminated when a user-defined distance is met (a force distance). A bias of 5.00 V was used. A force setpoint of 49.34 nN and a force distance of 1000 nm were used for thick pili measurements. A force setpoint of 27.86 nN and a force distance of 750 nm were used for thin pili measurements.
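A sketch of how such current maps can be reduced to a conductivity comparison is shown below: each pixel's peak current is converted to a conductance at the applied 5 V bias, and pixels on a pilus (identified from topography) are compared with the bare-gold background. The file layout and the pilus mask are hypothetical.

```python
# Sketch (hypothetical file layout): converting an FCM current map into a
# conductance map and comparing pixels on a pilus with the bare-gold
# background. The 5 V tip-substrate bias matches the settings above.
import numpy as np

bias_V = 5.0
current_A = np.load("fcm_current_map.npy")       # per-pixel peak current (A)
conductance_S = current_A / bias_V               # G = I / V per pixel

pilus_mask = np.load("pilus_mask.npy").astype(bool)  # drawn from topography
on_pilus = conductance_S[pilus_mask]
background = conductance_S[~pilus_mask]

print(f"median G on pilus:  {np.median(on_pilus):.3e} S")
print(f"median G off pilus: {np.median(background):.3e} S")
# Comparable medians (both at the noise floor) would indicate that the pili
# are not conductive.
```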
Generation and Phenotypic Analysis of a pilD Strain
The PilD protein is a bifunctional, membrane-bound leader peptidase/methylase that processes PilA precursors and N-methylates the amino acid at position 1 in the mature protein (Strom et al., 1993). PilD is absolutely required for pilus assembly, and a pilD mutant in a motile strain of Synechocystis has been reported to be non-piliated, non-motile, and recalcitrant to transformation (Bhaya et al., 2000). Since Synechocystis contains multiple pilA genes (Yoshihara et al., 2001) but only a single copy of pilD (slr1120), we used a pilD knockout mutant to investigate whether pili are required for EET in Synechocystis. The pilD mutant generated herein has most of the open reading frame replaced with a chloramphenicol-resistance cassette (Figure 1A) and was confirmed to be fully segregated by PCR (Figure 1B).
It should be noted that GT strains of Synechocystis are typically non-motile because of a frameshift mutation in the spkA (sll1574) gene, which in motile strains encodes a functional Ser/Thr protein kinase (Kamei et al., 2001). In the originally genome-sequenced Kazusa strain (Kaneko et al., 1996), a 1 bp insertion also results in a frameshift mutation in pilC (slr0162/3), preventing pilus assembly (Bhaya et al., 2000), which means this strain is non-competent for transformation with exogenous DNA (Ikeuchi and Tabata, 2001). The pilC mutation seems to be specific to the Kazusa strain, as other GT strains contain an intact pilC gene (Tajima et al., 2011; Kanesaki et al., 2012; Trautmann et al., 2012; Morris et al., 2014; Ding et al., 2015), and the GT wild-type strain used in this study (Supplementary Table S1) is naturally transformable and thus must produce Tfp.
When first generated, the pilD mutant displayed an obvious aggregation phenotype, with cells forming small clumps when grown photoheterotrophically in liquid medium. The cells were very difficult to collect with a loop from an agar plate, and the strain grew very poorly, if at all, under photoautotrophic conditions (Table 1 and Figure 1C). Similar phenotypes were described for a pilD mutant generated by Linhartová et al. (2014), who showed that the buildup of unprocessed PilA-prepilins triggered degradation of the essential membrane proteins SecY and YidC. Linhartová et al. (2014) isolated suppressor mutants that were able to grow photoautotrophically by prolonged growth in the absence of glucose or targeted deletion of the pilA1 gene. Similarly, after continued sub-culturing on agar plates we also isolated suppressor mutants that were capable of photoautotrophic growth, and when cultures were well mixed by air bubbling or orbital shaking, these suppressor strains grew at rates comparable to the wild type without significant clumping (Table 1 and Figure 1C). We will henceforth refer to the strain which can grow photoautotrophically as pilD*. Linhartová et al. (2014) showed that the loss of PilA1 pre-pilins in their pilD* strain was at least partially responsible for the improvement in growth; conversely, we found that Pre-PilA1 is still present in our pilD* strain, albeit to a lesser extent than in the originally isolated pilD strain (Figure 1D). Another study found that the level of pilA1 mRNA in a pilD strain capable of phototrophic growth is similar to that of the wild-type organism (Bhaya et al., 2000); sequencing confirmed pilA1 and its promoter are not mutated in our pilD* strain, and we confirmed pilA1 is expressed using end-point RT-PCR (Figure 1E), indicating that reduced transcription of the pilA1 gene is unlikely to be the reason for the decrease in PilA production. Further investigation of the nature of the suppressor mutation(s) in pilD* strains is beyond the scope of the present work and will be reported elsewhere (Linhartová, Sobotka, et al., unpublished).
The initially isolated pilD mutant described by Linhartová et al. (2014) had impaired PSII activity. Because it has previously been shown that photocurrent from Synechocystis is largely dependent on the supply of electrons from water splitting by PSII (Pisciotta et al., 2011; Cereda et al., 2014), we measured the rate of oxygen evolution by wild-type or pilD* cells. For both photoautotrophically and photoheterotrophically cultured cells, the growth rate, chlorophyll content, and oxygen evolution of the pilD* strain were not significantly different from those of the wild-type organism (Table 1). This suggests that PSII activity and the photosynthetic capacity of the pilD* strain are similar to the wild type, allowing direct electrochemical comparison of the two strains when the same number of cells is used (normalized by OD 750).
Electrochemical Properties of the pilD * Strain
The light-dependent EET capacity of the wild-type and pilD* strains of Synechocystis was probed by measuring the photocurrent produced when a potential of +240 mV (vs. standard hydrogen electrode) was applied. This potential was chosen because it has previously been shown to be sufficiently oxidizing for the cells to transfer electrons to an external substrate (Cereda et al., 2014). As shown in Figure 2A, when pilD* cells are applied to the working electrode of a photobioelectrochemical cell followed by incubation for a few minutes at the desired electrochemical potential, photocurrent can be observed [red light with peak λ = 660 nm, maximum intensity 20 W m−2 (110 µmol photons m−2 s−1)]. The photocurrent produced by pilD* is similar to the photocurrent produced by wild type whether the cells were grown photoautotrophically or photomixotrophically (Figure 2B). For the pilD* strain, photocurrent increases linearly (R2 = 0.99) with cell density to a magnitude (88 ± 15%) comparable to that produced by the wild type (100 ± 12%) (Supplementary Figure S1). This shows that the electrical output of both strains is directly related to the concentration of Synechocystis cells present in the electrochemical cell. In short, photocurrent production by the two strains is not significantly different, suggesting that it is independent of Tfp.
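The reported linearity check amounts to an ordinary least-squares fit; a minimal sketch with hypothetical numbers (the measured values are in Supplementary Figure S1, not reproduced here):

```python
import numpy as np

# Hypothetical photocurrent (nA) vs. cell density (OD750 units) pairs,
# illustrative only.
density = np.array([0.1, 0.2, 0.4, 0.8])
photocurrent = np.array([2.1, 4.0, 8.3, 16.2])

slope, intercept = np.polyfit(density, photocurrent, 1)
pred = slope * density + intercept
ss_res = np.sum((photocurrent - pred) ** 2)
ss_tot = np.sum((photocurrent - photocurrent.mean()) ** 2)
r2 = 1 - ss_res / ss_tot  # the paper reports R2 = 0.99 for pilD*
```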
Atomic Force Microscopy (AFM) Imaging of Wild-Type and pilD * Cells
Planktonic growth under rapidly mixed conditions has previously been reported to negatively impact pili stability via shearing action (Yoshihara et al., 2001; Lamb et al., 2014). To provide evidence that wild-type Synechocystis has Tfp under the growth conditions employed in this study, we visualized the cells by AFM. To ensure that the imaged cells were morphologically similar to those used in the electrochemical measurements, samples were washed in deionized water prior to AFM visualization to remove contaminants, simulating the pretreatment conditions used for the electrochemical experiments. Figure 3 shows representative images. Wild-type cells grown planktonically have hair-like pilus structures protruding from the cell surfaces (Figure 3A). Conversely, corresponding images of pilD* cells grown and treated in the same way reveal an almost complete lack of cell surface protrusions (Figure 3B).

[Table 1, fragment: rows were displaced into the text by extraction and the wild-type rows are not recoverable. Column labels inferred from the footnotes as growth rate, Chl content, O2 evolution, and the Chl-normalized O2 evolution of footnote b.
pilD (PM): 20 ± 1.0 c | 3.5 ± 0.7 c | 32 ± 8 c | 549 c
pilD* (PM): 12 ± 0.5 | 3.8 ± 0.4 | 40 ± 4 | 632
pilD* (PA): 16 ± 0.5 | 4.1 ± 0.1 | 46 ± 4 | 673
a Growth under PM, photomixotrophic (plus 5 mM glucose) or PA, photoautotrophic conditions, as described in the section "Materials and Methods." b Calculated from Chl content of 1 OD750 unit of cells and the oxygen evolution (nmol O2 OD750 unit−1 min−1). c Accuracy of the growth rate, Chl content, and oxygen evolution is limited for this strain as a result of the clumping phenotype.]
Scanning Electron Microscopy (SEM) Imaging of Synechocystis Cells
Scanning electron microscopy was used to visualize the physical interaction between Synechocystis cells and the carbon electrode. SEM micrographs of both wild-type and pilD* cells confirm uniform adhesion of cells to the carbon cloth electrode surface. We note that sample preparation for SEM imaging can affect the total number of cells attached to the electrode and can underestimate the actual coverage. Nonetheless, in all images, cells appear to be in direct contact with the carbon cloth electrode. High-resolution images of wild-type cells clearly show structures consistent with pili extending between the cells and the carbon substrate (Figures 4A-D). Conversely, high-resolution images of the pilD* strain show a complete absence of any type of pilus-like structure (Figures 4E-H), suggesting some other mechanism for the physical interaction with the electrode surface.
Conductivity Measurements of Pili Using AFM
The Fast Current Mapping (FCM) mode of AFM was used to simultaneously generate topographical and current map images of Synechocystis pili on Au-coated glass coverslips. FCM was chosen for the conductivity measurements to minimize lateral tip-sample forces, which we observed to be damaging and disruptive to the filaments in contact-mode conductive AFM. During FCM, current and force curves are generated at each pixel while the AFM probe vertically approaches and retracts from the sample. Thick and thin pili are clearly visible in the topographical images (Figures 5A,B). The diameters of the thin (Figure 5A) and thick (Figure 5B) pili were obtained from AFM height measurements as 3 and 6 nm, respectively. Note that the heights, rather than the apparent widths, were used to estimate the diameters, since AFM lateral measurements are subject to tip convolution artifacts resulting in a significant broadening of structures. There are no current readings along the lengths of pili in the current map images (Figures 5C,D). Representative point measurements of current during probe approach and retraction (Figures 5E,F) show pili current readings comparable to background values when the probe contacts the pili with the same force used to observe current readings from the Au substrate. Our results indicate that, within the sensitivity of our instrumentation, Synechocystis pili are not conductive. We note that AFM measurements were made with dried cells and conductivity may differ under other conditions.
DISCUSSION
Conductive pili are hypothesized to be important for long-range electron transport by various microorganisms, including dissimilatory metal-reducing bacteria such as G. sulfurreducens. Gorby et al. (2006) reported scanning tunneling microscopy images suggesting that, under CO2 limitation, Synechocystis also produces such conductive filaments. However, controversy exists as to whether the structures they observed are true Tfp assemblies. Lovley (2012) has suggested the diameter of the filaments is too large for Tfp. Furthermore, it is hypothesized that similar structures observed in S. oneidensis by Gorby et al. (2006) in the same study are filamentous extracellular polysaccharides that arise as an artifact of dehydration during sample preparation or imaging (Dohnalkova et al., 2011). Finally, although appendages produced by S. oneidensis have been shown to be conductive under dry conditions (Gorby et al., 2006; El-Naggar et al., 2010), additional work has shown that nanowires of S. oneidensis MR-1 are not pili but rather outer membrane extensions containing the multiheme cytochrome conduits of EET (Pirbadian et al., 2014). Consistent with these findings, experiments with mutant strains of S. oneidensis have shown that pili are not required for EET (Bouhenni et al., 2010). Thus, the potential role of pili in EET in cyanobacteria such as Synechocystis was ambiguous and warranted investigation.
The results herein show that our pilD* strain, which lacks the pilD gene and is unable to synthesize mature pili, produces a similar amount of light-dependent current as wild-type Synechocystis in a mediatorless biophotovoltaic device. Given that the rate of photo-electron production by PSII was shown to be similar in the mutant and wild type using oxygen evolution measurements, we conclude that, at least under the conditions used in this study, pili are not required for photocurrent production. In support of this conclusion, our AFM-based electrical measurements suggest that neither thick nor thin pili of Synechocystis are conductive. Microbial cell-to-electrode electron transfer by Synechocystis must therefore be facilitated by an alternative, i.e., non-pili-mediated, mechanism: either by direct transfer from some other cell surface electron transport proteins or by mediated transfer via unknown redox shuttles excreted into the extracellular environment/electrolyte (Saper et al., 2018; Wenzel et al., 2018). Secreted flavins have been detected in cultures of Shewanella and other bacteria and are believed to play a role in EET by serving as soluble redox mediators (Okamoto et al., 2013; Tian et al., 2019).
We confirmed direct contact between Synechocystis cells and the carbon cloth electrode with high-resolution SEM images. This indicates that the absence of pili in the pilD* mutant does not appear to affect the adhesion of the mutant cells to the electrode surface, and that mediated electron transfer may be more important in cyanobacteria than electron transfer via direct contact between cells and the electrode. Wenzel et al. (2018) elegantly demonstrated that bio-anodes with mesopores large enough to accommodate cells, thereby providing an increase in the direct contact area between the bacteria and the electrode surface, showed only a small increase in current generation compared to nanoporous electrodes, which are not directly accessible to the relatively large cells but provide an increased surface area for interactions with soluble redox-carriers. Coupled with our demonstration that pili do not appear to be necessary for EET, it appears most likely that cyanobacteria use a redox shuttle-mediated mechanism for electron transfer from the bacteria to the electrode rather than direct electron transfer, or both mechanisms may be important under different growth conditions or environmental stresses. Identifying the components responsible for the reduction of the extracellular environment by cyanobacteria is a crucial next step, both for exploiting cyanobacterial EET and for determining the role of this phenomenon in natural systems.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
JH, TB, and AJ conceived the study and designed the research. AH and JH generated and characterized the pilD mutant. MT, JL, BD, and RR performed or analyzed the atomic force microscopy. MT and AJ performed or analyzed the scanning electron microscopy. MC and ME-N performed conductive AFM. MT, AC, and AJ performed or analyzed the electrochemical experiments. MT, AH, JH, TB, and AJ wrote the manuscript, which was edited and approved for submission by all the other authors. | 2020-06-25T09:03:28.114Z | 2020-06-25T00:00:00.000 | {
"year": 2020,
"sha1": "eb9c7d428d14340d270d0e9ccfd9e68ff3a37b47",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2020.01344/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb292a0ce735a772efce1dba1d2594056dde43de",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
252736928 | pes2o/s2orc | v3-fos-license | The magnitude and associated factors of immune hemolytic anemia among human immuno deficiency virus infected adults attending University of Gondar comprehensive specialized hospital north west Ethiopia 2021 GC, cross sectional study design
Background Immune hemolytic anemia (IHA) commonly affects human immunodeficiency virus (HIV)-infected individuals. Among anemic HIV patients in Africa, the burden of IHA due to autoantibodies ranged from 2.34 to 3.06%, while that due to drugs was 43.4%. Autoimmune IHA is a potentially fatal complication of HIV and accounts for the largest share of acquired hemolytic anemia. Objective The main aim of this study was to determine the magnitude and associated factors of immune hemolytic anemia among HIV-infected adults at the University of Gondar comprehensive specialized hospital, northwest Ethiopia, from March to April 2021. Methods An institution-based cross-sectional study was conducted on 358 HIV-infected adults selected by systematic random sampling at the University of Gondar comprehensive specialized hospital from March to April 2021. Sociodemographic, dietary, and clinical data were collected with a structured, pretested questionnaire. Five ml of venous blood was drawn from each participant and analyzed on a Unicel DxH 800 hematology analyzer; blood film examination and an antihuman globulin test were performed to diagnose immune hemolytic anemia. Data were entered into Epidata version 4.6 and analyzed with STATA version 14. Descriptive statistics were computed, and firth penalized logistic regression was used to identify predictors. A P value less than 0.05 was interpreted as significant. Result The overall prevalence of immune hemolytic anemia was 2.8% (10 of 358 participants). Of these, 5 were males and 7 were in the 31 to 50 year age group. Among individuals with immune hemolytic anemia, 40% had mild and 60% had moderate anemia. The factors that showed association were family history of anemia (AOR 8.30, 95% CI 1.56, 44.12), not eating meat (AOR 7.39, 95% CI 1.25, 45.0), and high viral load (AOR 6.94, 95% CI 1.13, 42.6). Conclusion and recommendation Immune hemolytic anemia is a less frequent condition in HIV-infected adults, and moderate anemia was common among those affected. The prevalence increased with a high viral load, a family history of anemia, and not eating meat. In these patients, early detection and treatment of immune hemolytic anemia are necessary.
Background
In HIV infection, hematological parameters are markedly affected because of the viral effect on all lineages of blood cells and on the immune system. The most typically affected hematological profiles are leukocytes, erythrocytes, and platelets [1]. Anemia is the most common hematological problem, affecting about 30% of asymptomatic and 75-80% of symptomatic HIV infections. Among severely anemic HIV-infected adults, around 53% may die [2,3]. The most frequent anemia in HIV patients is the normocytic normochromic type [4].
Immune hemolytic anemia (IHA) is a normocytic or macrocytic normochromic anemia that occurs when antibodies are formed against one or more antigenic constituents of the individual's own tissues, resulting in destruction of erythrocytes. IHA can be primary (idiopathic) or secondary to various underlying causes [5]. It can be classified as autoimmune hemolytic anemia (AIHA), alloimmune hemolytic anemia, and drug-induced hemolytic anemia (DIHA) [6]. Alloimmune hemolytic anemia occurs when antibodies are produced against red cells from another individual, as in transfusion, abortion, and pregnancy [7]. AIHA results from autoantibodies arising secondary to malignancies or autoimmune disorders, or from genetic predisposition [6]. IHA can also be classified as warm, cold, or mixed according to the temperature of the reaction [5,8].
In HIV patients, IHA occurs through binding of HIV immune complexes to erythrocytes by one of three mechanisms: binding of complement-opsonized immune complexes via complement receptor 1 (CR1); direct virus binding in a complement-dependent manner, but without the need for specific antibodies; and a third mechanism in which complement is not required at all. In the absence of specific antibodies, the virus is transported directly on the erythrocyte surface during primary infection via the Duffy antigen [9,10].
In the development of IHA by antigen-antibody complexes, the viral accessory protein Nef (negative factor) plays a critical role in the pathogenesis of HIV-associated hematopoietic dysfunction. This factor impairs the clonogenic potential of hematopoietic stem cells and down-modulates host cell receptors such as cluster of differentiation (CD) and major histocompatibility complex class I (MHC-I) molecules. It facilitates the progression of infection into disease, increases viral infectivity, and increases the immunogenicity of viral antigens that mimic host cell antigens [1,11,12].
In addition to the direct viral effect, IHA also arises from indirect effects of the chronic generalized immune activation caused by HIV infection. This induces production of autoantibodies owing to the structural antigen similarity between viral proteins and self-antigens [13]. The progressive decline of helper T cells (CD4) is caused by direct killing of infected cells, molecular mimicry of alloantigens with self, antiretroviral therapy (ART) drugs, and opportunistic pathogens [14].
The autoimmune manifestations of HIV infection that cause IHA include increased cytotoxic cell activity, increased expression of autoantigens, alteration of erythrocyte surface antigens by the virus, and cross-reaction of antibodies induced by an infectious agent with erythrocyte surface antigens [15][16][17][18]. Factors that contribute to IHA include increased viral load, tuberculosis, poor nutrition [19], family history of hemolytic anemia, neoplasia, unmatched blood transfusion, infection, and ART drugs [5,20]. The burden varies depending on the stage of HIV disease, sex, age, pregnancy status, history of abortion, and adherence to ART [21].
Immune hemolytic anemia in HIV-positive patients is a serious complication that occurs mostly at advanced age and in advanced stages of AIDS, especially in female patients. In Africa, among anemic HIV patients, the burden of IHA due to autoantibodies ranged from 2.34 to 3.06% [17,22], and that due to DIHA was 43.4% [23]. It causes fever, jaundice, dark-colored urine, weakness, dizziness, confusion, hepatosplenomegaly, tachycardia, and heart murmur [24]. IHA should be considered whenever patients experience moderate to severe anemia with a low CD4 count [17].
Immune hemolytic anemia causes a highly severe form of anemia and is most commonly reported in middle-aged adult patients. Patients with IHA had lower mean CD4 counts, Hb, and RBC counts, a positive direct antiglobulin test (DAT), and a higher immature reticulocyte fraction and mean reticulocyte percent than non-anemic patients [25]. However, a negative DAT does not rule out IHA, because the test may be falsely negative in leukemia patients, immunosuppressed individuals, and those with low protein levels [26]. Conversely, a positive DAT does not always result from IHA, because overt hemolytic anemia and aplastic anemia with hemolysis might also be positive [27].
The diagnosis of IHA depends on the presence of laboratory findings supporting hemolysis, such as increased serum lactate dehydrogenase, decreased haptoglobin, and increased unconjugated bilirubin, in addition to the DAT. Peripheral blood smear changes used as indicators of hemolysis include reticulocytosis, schistocytosis, bite cells, and spherocytosis [8,17].
The attention given to IHA in people living with HIV (PLWHIV) is less than expected globally, and particularly in Ethiopia, owing to the limited number of studies. The few studies that have been done concerned IHA due to autoimmunity, but they did not use the immature reticulocyte fraction to differentiate immune-mediated hemolysis from other causes. The burden of IHA has not been studied in proportion to its effect on people's health, especially in HIV patients, and there is a knowledge gap between ART clinicians and other health professionals [17]. Therefore, the aim of this study was to determine the magnitude and associated factors of IHA in HIV-infected adults attending UOGCSH, northwest Ethiopia.
Study design and period
An institution-based cross-sectional study was conducted to determine the magnitude of IHA and associated factors among HIV-infected adults attending UOGCSH, northwest Ethiopia, from March to April 2021.
Population
Source population. All HIV-infected adult individuals attending the ART clinic at UOGCSH, northwest Ethiopia.
Study population. All HIV-infected adult individuals attending the ART clinic at UOGCSH during the time of data collection.
Inclusion criteria and exclusion criteria
Inclusion criteria. All HIV-infected individuals who were 15 years or older, had a confirmed HIV infection on follow-up at UOGCSH, and had clinical and laboratory data such as viral load and CD4 counts on record within the six months preceding data collection were included in the study.
Exclusion criteria. Individuals who were seriously ill and unable to respond or to give blood specimens were excluded from the study.
Sample size calculation and sampling technique
The sample size for this study was calculated using the single population proportion formula. Since no prior study on IHA was available, we used a 50% proportion with a 95% confidence interval and 5% margin of error; finally, applying the population reduction formula (because the total population was less than 10,000), the sample size obtained was 358.
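As a worked version of this calculation (the source population size N is not stated in the text, so it is left as an input; the paper reports a final sample size of 358):

```python
import math

z = 1.96   # 95% confidence
p = 0.5    # assumed proportion (no prior IHA estimate)
d = 0.05   # margin of error

# Single population proportion formula: n0 ~= 384.16
n0 = (z ** 2) * p * (1 - p) / d ** 2

def corrected(n0: float, N: int) -> int:
    """Population reduction (finite population correction) formula,
    applied because the source population was below 10,000."""
    return math.ceil(n0 / (1 + (n0 - 1) / N))
```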
Sampling technique
A systematic random sampling technique was used to select study participants. The average number of HIV patients attending ART follow-up every day and concurrently giving a blood sample for viral load and CD4+ T-cell count was twenty-five. During the two-month data collection period, 1,100 PLWHIV were expected to visit the hospital for viral load and CD4+ T-cell count follow-up. The sampling interval (K) was calculated by dividing the total number of HIV/AIDS patients during the study period by the sample size (1100/358 ≈ 3). The lottery method was then used to select the first participant from the first three, after which every third individual attending the ART clinic of UOGCSH was selected (Fig 1).
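A minimal sketch of this sampling procedure (index-based; the patient ordering is simply the order of attendance):

```python
import random

def systematic_sample(population_size: int, sample_size: int, seed: int = 0):
    """Systematic random sampling: interval K = population // sample,
    a random (lottery) start within the first K, then every K-th attendee."""
    k = population_size // sample_size   # 1100 // 358 = 3
    random.seed(seed)
    start = random.randrange(k)          # lottery method for the first pick
    return list(range(start, population_size, k))

selected_positions = systematic_sample(1100, 358)
```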
Operational definition of variables
Anemia is a condition in which the adjusted Hb level is less than 12 g/dl for females and less than 13 g/dl for males [2].
DAT positive: agglutination observed either after immediate centrifugation or after centrifugation following room-temperature incubation of the red cell suspension with anti-human globulin reagent (83).
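The anemia definition above translates into a simple decision rule; a minimal sketch:

```python
def is_anemic(adjusted_hb_g_dl: float, sex: str) -> bool:
    """Operational definition used in this study: adjusted Hb < 12 g/dl
    for females, < 13 g/dl for males."""
    threshold = 12.0 if sex.lower() == "female" else 13.0
    return adjusted_hb_g_dl < threshold
```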
Data collection tools and methods
Sociodemographic and clinical data collection. Data were collected with a semi-structured questionnaire by trained expert nurses. The questionnaire had three parts covering sociodemographic, clinical, and nutritional data related to IHA, and was translated into the Amharic language. Sociodemographic data such as age, sex, residence, marital status, education, and religion were collected via face-to-face interviews with study subjects. Clinical data such as history of abortion, CD4 result, viral load, neoplastic disease, opportunistic infection, history of autoimmune disease, and medication were extracted manually from participants' records. Female participants aged 15-49 years were screened for pregnancy with a laboratory test, and family history of anemia was also requested in the face-to-face interview (Annexes I, II in S1 File).
Sample collection procedures and hematological analysis
Blood collection procedures. About 5 ml of blood was collected with a sterile syringe and needle by an expert medical laboratory technologist into an EDTA anticoagulant test tube labeled with the study participant's code number. The collected blood sample was delivered to the hematology laboratory for analysis of hematological parameters, DAT, and blood film preparation. The blood was transported to the hematology laboratory within 1 to 2 hours and analyzed. From the collected blood sample, hematological analysis was performed first; blood films were then prepared from the remnant sample. Finally, DAT was performed on the rest of the blood sample (Annex V in S1 File).
Hematological analysis. Hematological analysis was performed on the EDTA anti-coagulated blood sample to confirm the presence of IHA, following standard operating procedures. Hb measurement, reticulocyte count, immature reticulocyte fraction, RBC count, and RBC indices such as MCV, MCH, and MCHC were determined on an automated hematology analyzer (Unicel DxH800, Danaher Corporation, Beckman Coulter, United States of America (USA)). The Unicel DxH800 provides RBC count, reticulocyte count, immature reticulocyte fraction, and nucleated RBC counts on whole blood by the impedance principle, and the 5-part leukocyte differential (Diff) and platelet count by flow cytometry or light-scattering principles [30,32]. The blood sample is suspended in diluent and passed through the apparatus; each cell causes a change in direct-current resistance that is detected as an electrical pulse whose amplitude reflects cell size, and blood cells are counted by counting pulses (Annex V in S1 File).
Blood film examination. After the hematological parameters were measured, the remaining EDTA blood was used for blood film examination. A thin blood smear was prepared by the wedge method: a drop of blood was placed on a slide about 1-2 cm from its end, and a second smooth-edged slide was used as a spreader at an angle of approximately 30° over three-quarters (¾) of the slide length. The prepared smear was air dried film side up on a staining rack, then covered with filtered undiluted Wright stain for 1 minute, washed, dried, and examined under oil immersion (100x objective) on the microscope. The morphology was examined by a trained technologist for features of hemolysis. In the blood film, the presence of schistocytes, spherocytes, bite cells, and nucleated RBCs was taken as evidence of hemolysis, and the morphology was cross-checked against the analyzer results (Annex V in S1 File).
Coombs test (DAT). A three percent washed red cell suspension was used for the direct antiglobulin test (DAT) to detect antibody coating the surface of red cells, which results in immune hemolysis. The Coombs test (DAT) was performed on the principle of a hemagglutination test: two drops of anti-globulin reagent were added to two drops of the three percent red cell suspension in a test tube. Polyspecific anti-human globulin (anti-IgG-C3d) acts as a link between the antibody and complement coating of neighboring RBCs and induces agglutination. The test tube was immediately centrifuged after thorough mixing, examined microscopically for the presence of agglutination, and reported as DAT positive or negative (Annex V in S1 File).
Immune hemolytic anemia (IHA). Finally, immune hemolytic anemia was diagnosed from the combined results of the hematological parameters, blood smear, and Coombs test. It was defined as low Hb; normocytic or macrocytic red cells; features of hemolysis on the blood film such as burr cells, schistocytosis, spherocytosis, reticulocytosis, and a high immature reticulocyte fraction; and a positive direct anti-human globulin test. The presence of IHA was finally confirmed by a laboratory technologist.
Quality management of laboratory tests and data
Quality assurance for sociodemographic data. Before data collection, training was given to the data collectors to ensure the reliability and validity of the data and to reduce technical and observation bias. The questionnaire was pretested on randomly selected patients from the study site for reliability and validity before actual data collection. To check the quality of the language translation, the translated questionnaire was reviewed by three individuals and retranslated from Amharic back into English, and the validity of the information was checked again.
Quality control for the hematology analyzer. Quality control for working equipment and reagents was ensured using standard controls as well as standard operating procedures. For the Unicel DxH800 hematology analyzer, the normal background reading was checked daily, and performance was verified with low, normal, and high controls. The result of each test was properly recorded (Annex V in S1 File).
Quality control for the Coombs test. Quality control for the DAT used an Rh-positive blood sample coated with anti-D as the positive control and an Rh-negative blood sample as the negative control. The results of both controls were properly recorded (Annex V in S1 File).
Quality control for microscopy and Wright stain reagent. Preventive maintenance was performed on the microscope to prevent artifacts from entering the morphological examination. The microscope was cleaned daily for quality examination of blood smear morphology and reticulocyte counts. A microscopic smear review was performed to check the functionality of the microscope and the quality of slides and staining, using previously examined and confirmed slides. To ensure quality staining, the solution was filtered before staining the smear. Quality control of the Wright staining solution was performed using a patient sample with a normal MCV, MCH, MCHC, and total white blood cell count (Annex V in S1 File).
Data management and analysis. Data were entered into Epidata version 4.6 (Epidata, Inc., Redwood City, CA, United States) and analyzed using STATA statistical software version 14 (StataCorp). Every day, the collected data were checked for completeness and accuracy by the principal investigator; during entry, the data were cross-checked and cleaned to ensure that the right data were entered. Descriptive statistics such as frequencies, charts, tables, and percentages were used to summarize the data. A firth penalized logistic regression model was fitted to determine the associations of independent variables with the outcome variable. Associations were first assessed with a bivariable firth penalized logistic regression model, and variables with a P value < 0.2 were included in the multivariable firth penalized logistic regression model to control for confounding. Multivariable firth penalized logistic regression was then computed for the selected variables, and the significance of associations was determined and interpreted. Both crude odds ratios (COR) and adjusted odds ratios (AOR) with their corresponding 95% confidence intervals (CI) were used to assess the strength of association between the dependent and independent variables. A p-value < 0.05 in the multivariable firth penalized logistic regression model was considered statistically significant. The results are presented in words and tables, and conclusions and recommendations were drawn from them.
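The study fitted its Firth penalized models in STATA. As a self-contained illustration of the estimator itself (not the code used in the study), here is a compact numpy sketch of Firth's bias-corrected Newton-Raphson update; variable encodings (0/1 outcome, intercept column in X) are assumptions.

```python
import numpy as np

def firth_logistic(X: np.ndarray, y: np.ndarray,
                   n_iter: int = 50, tol: float = 1e-8) -> np.ndarray:
    """Firth penalized logistic regression (Jeffreys-prior correction).
    X: n x d design matrix including an intercept column; y: 0/1 outcome
    (e.g., IHA status). Returns beta; exp(beta) gives odds ratios."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                     # diagonal of the weight matrix
        XtWX = X.T @ (X * W[:, None])         # Fisher information
        XtWX_inv = np.linalg.inv(XtWX)
        # leverages h_i = W_i * x_i' (X'WX)^-1 x_i
        h = W * np.einsum("ij,jk,ik->i", X, XtWX_inv, X)
        # Firth-modified score: X'(y - p + h*(1/2 - p))
        score = X.T @ (y - p + h * (0.5 - p))
        step = XtWX_inv @ score
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```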
Dissemination of results. The study results will be submitted to the Department of Clinical Hematology and Immunohematology of the School of BMLS and CMHS, UOG, and to the study site. The abstract will be submitted to local concerned bodies such as EMLA and libraries. The results will be communicated to the research community through conference presentations and publication in peer-reviewed, reputable journals so as to reach the international community.
Sociodemographic characteristics
The total number of participants in this study was 358. Of the total participants, 216 (62.1%) were female, and the median age was 38 years (interquartile range 33 to 45). Among the study participants, 313 (87.43%) were Orthodox Christian and 285 (79.61%) were urban residents. Of all participants, 187 (52.87%) were married (the next largest group being divorced), 90 (25.14%) were government employees, and 115 (32.12%) had a secondary school education level (Table 1).
Clinical characteristics of the study participants
Of the study participants, 117 (32.68%) had a history of comorbidity of HIV with opportunistic infections, including tuberculosis; 33 (9.22%) had a family history of anemia; and 66 (18.44%) had a history of autoimmune disease. Of all study participants, 24 (6.70%) had a history of neoplastic disease, and 278 (77.65%) were in stage one of the disease. Among female study participants, 2 (0.93%) had a history of pregnancy, but none had a history of abortion in the last four months (Table 2). Among the study participants, 189 (52.79%) drank coffee at least once a day, 337 (94.13%) consumed meat in their diet, and 280 (78.21%) ate green vegetables daily (Table 2).
Prevalence of immune hemolytic anemia
The overall prevalence of IHA (participants who met the criteria for IHA) in this study was 2.80% (95% CI 1.07, 4.50). The prevalence of IHA among HIV patients in the 15-30 year age group was 4.22% (n = 3); however, no IHA was found in the age group older than 50 years. Among individuals who had IHA, 6 (60%) had moderate anemia and the rest had mild anemia; no case of severe anemia was found among patients with IHA in this study.
Factors associated with immune hemolytic anemia
To determine the association between IHA and the independent variables, bivariable and multivariable firth penalized logistic regression models were used. Variables with a p value less than 0.2 in the bivariable firth penalized logistic regression model were included in the multivariable analysis. Accordingly, vegetarianism (not using meat in the diet), high viral load, and family history of anemia showed significant associations with IHA (Table 4).
Discussion
Human immunodeficiency virus infection triggers anemia, which is most likely caused by HIV infection of stromal cells and hematopoietic stem cells. In HIV infection, the commonly affected hematological parameters are leukocytes, erythrocytes, and platelets, owing to the viral effect on all lineages of blood cells and the immune system [1]. Of all hematological abnormalities, anemia is the most common among HIV patients. The pathophysiology of anemia includes decreased production, increased destruction, and increased loss due to hemorrhage [3]. IHA is a type of anemia caused by immune-mediated destruction of RBCs by antibodies against erythrocyte antigens. It is characterized by normocytic normochromic or macrocytic anemia with evidence of hemolysis on the blood film, reticulocytosis or a high immature reticulocyte fraction, and a positive DAT [29][30][31].
The overall prevalence of IHA among HIV-positive adults was 2.80% (95% CI 1.08%, 4.50%). This prevalence is in agreement with studies conducted in Addis Ababa (2.34%) [17] and Benin (Nigeria) (3.06%) [22]. However, it is higher than that of a study done in Lagos (Nigeria) (0%) (38). The difference might result from variation in the definition of IHA: the Lagos study defined IHA using a reticulocyte count, hemoglobin level, and Coombs test only, without the immature reticulocyte fraction, whereas this study used the immature reticulocyte fraction for the diagnosis of IHA. Reticulocytopenia is common in HIV patients, which can lead to misdiagnosis of IHA in this population. The immature reticulocyte fraction is the better parameter for the diagnosis of IHA in HIV patients because it increases in IHA regardless of HIV status [33,34].
According to this study, among individuals who had IHA, 60% had moderate and 40% had mild anemia. This finding does not agree with the study in Addis Ababa [17], which reported that 22.2% had severe and 33.3% had moderate anemia. This variation might be due to the advancement of ART medication from AZT-based regimens, especially nevirapine, to newer regimens with fewer adverse effects, such as dolutegravir-based regimens. Although the mechanism is not fully elucidated, IHA can occur as part of the drug rash with eosinophilia and systemic symptoms syndrome in the presence of drugs such as nevirapine. Drug-dependent antibody-mediated hemolysis appears within two weeks after initiation of the drug, and the patient presents with rapidly progressing IHA. Dolutegravir, by contrast, has not been found to cause anemia, and dolutegravir-containing regimens demonstrate high virologic efficacy; this might protect patients from developing severe anemia [35,36].
The determinant factors that showed a significant association with IHA were family history of anemia (AOR 8.30, 95% CI 1.56, 44.12), not eating meat (AOR 7.39, 95% CI 1.25, 45.0), and high viral load (AOR 6.94, 95% CI 1.13, 42.6). In this study, individuals whose families had a history of anemia were 8.30 times more likely to develop IHA than their counterparts (AOR 8.30, 95% CI 1.56, 44.12). This might be due to the presence of study participants with a family history of IHA. IHA might be caused by a fundamental defect in the immune system that prevents it from establishing a proper homeostatic mechanism; this disorder appears to be passed down in families and blocks erythrocyte immune homeostasis. Patients with hereditary spherocytosis may have naturally occurring autoantibodies directed against different membrane proteins; these antibodies react with erythrocyte surface antigens and result in immune-mediated hemolytic anemia [37,38].
According to the findings of this study, vegetarians, or people who did not eat meat, were 7.39 times more likely to develop IHA than individuals who ate meat in their diet (AOR 7.39, 95% CI 1.25, 45.0). This agrees with studies done in Shalla, Ethiopia [39], Vietnam [40], and Pakistan [41], which reported that anemia was higher among individuals who did not consume meat and animal products. Lack of meat in the diet results in vitamin B12 deficiency, which impairs immune system activity, with decreased lymphocytes (especially CD8), natural killer cells, and lymphokine-activated killer cells, and an increase in the CD4/CD8 ratio [42,43]. Impaired immune activity is associated with a higher risk of HIV disease progression and increased viral replication. The increased virus causes viral protein-induced immune activation, which might result in IHA in HIV patients [44].
In this study, individuals whose viral load was greater than 1000 copies/ml were 6.94 times more likely to develop IHA than individuals whose viral load was less than 1000 copies/ml (AOR 6.94, 95% CI 1.13, 42.6). A higher viral load indicates poor suppression of viral quantity. This might occur because of the positive correlation between plasma HIV ribonucleic acid levels and both CD4+ T-cell and CD8+ T-cell activation levels [45]. The virus induces IHA by binding to erythrocytes, which causes immune activation, dysregulation of T and B cells, loss of immune tolerance, and expression of autoantigens similar to viral antigens [9,10]. The structural similarity between HIV proteins and RBC antigens can induce autoantibody production. Moreover, the viral negative factor protein induces autoimmune responses through cross-reaction of specific viral antigens with self-proteins and stimulation of auto-reactive T cells [45]. Circulating autoantibodies to RBCs, together with host red cells of increased immunogenicity, finally result in antibody-mediated hemolysis, i.e., IHA [11,45,46].
Strength of the study
In this study, hematological analyses such as the reticulocyte count, mean reticulocyte volume, and immature reticulocyte fraction were performed by automation. This study also attempted to describe factors associated with IHA in addition to its prevalence.
Limitation
The first limitation of this study is its cross-sectional design, which did not allow us to establish causality in the relationship between IHA and its associated factors, only temporal association. Another limitation is that this study did not include DAT-negative IHA, which requires newer technology such as gel technology and molecular methods. Finally, serum lactate dehydrogenase, haptoglobin, and unconjugated bilirubin were not tested for additional evidence of hemolysis.
Conclusion
According to the findings of this cross-sectional study, IHA is a rare public health problem in HIV patients. The findings revealed that IHA was significantly associated with vegetarianism, family history of anemia, and high plasma viral load.
Recommendations
ART clinicians are recommended to monitor viral load to follow disease progression and to give attention to IHA. IHA screening should be done, specifically before blood transfusion, in HIV patients. We recommend that HIV patients include meat in their diet to protect themselves from vitamin B12 deficiency-induced IHA, and that individuals with a family history of anemia be screened for IHA. Additionally, we recommend further studies using sensitive, specific, and advanced technologies such as flow cytometry and advanced molecular tests, which can also quantify the amount of RBC-bound antibody and so estimate the likelihood of hemolysis. Researchers in hematology should also give attention to establishing reference intervals for the immature reticulocyte fraction and mean reticulocyte volume. Even though IHA is infrequent, its diagnosis requires early identification to minimize the severity and burden of disease, because it may result in a fatal condition. We suggest that policy makers develop guidelines for HIV patients that consider IHA and work to make IHA screening tests available in every ART clinic across the country.
Declarations
Ethical approval and consent to participate Ethical considerations. The study was carried out after receiving ethical approval from the University of Gondar College of Medicine and Health Sciences (CMHS), School of Biomedical and Laboratory Sciences research and ethical review committee (reference number SBLS/2750). All activity in this research was based on the Declaration of Helsinki. Furthermore, a support and permission letter was secured from UOGCSH. In addition, following an explanation of the purpose, benefits, and possible risks of the study, written informed consent was obtained from participants or a parent/legal guardian, and assent was sought from minors before commencement of the study. It was made clear that participation in the study was purely voluntary and that refusal was possible. To ensure confidentiality of the data, study participants were identified by unique codes, and only authorized persons had access to the collected data. Study participants with abnormal findings were linked to the physicians working at the ART clinic for proper patient care.
Availability of data and materials
All relevant data are available within the manuscript. In case of need, the data that support the findings of this study are available from the corresponding author on reasonable request. | 2022-10-07T06:17:43.207Z | 2022-10-06T00:00:00.000 | {
"year": 2022,
"sha1": "2f4f9056b3f72d50f0fd814379cc3b5b8eb505ac",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7d163aacf4cb9aafc40b9108ef96ddbd24986948",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258947196 | pes2o/s2orc | v3-fos-license | Counterfactual Probing for the Influence of Affect and Specificity on Intergroup Bias
While existing work on studying bias in NLP focuses on negative or pejorative language use, Govindarajan et al. (2023) offer a revised framing of bias in terms of intergroup social context and its effects on language behavior. In this paper, we investigate whether two pragmatic features (specificity and affect) systematically vary in different intergroup contexts -- thus connecting this new framing of bias to language output. Preliminary analysis finds modest correlations between the specificity and affect of tweets and supervised intergroup relationship (IGR) labels. Counterfactual probing further reveals that while neural models finetuned for predicting IGR labels reliably use affect in classification, the model's usage of specificity is inconclusive. Code and data can be found at: https://github.com/venkatasg/intergroup-probing
Introduction
Most work on bias in NLP only considers negative or pejorative language use (Kaneko and Bollegala, 2019; Sheng et al., 2019; Webson et al., 2020; Pryzant et al., 2020; Sheng et al., 2020). While recent work has delved into implicit bias (Rashkin et al., 2015; Sap et al., 2017, 2020), it is still limited, as it relies on identifying specific demographic dimensions or an individual's intent. Crucially, language production is still taken to be 'unbiased' by default. Research in social psychology suggests a different framing of bias that encompasses all language use -- we can analyze bias as changes in (language) behavior reflecting shifting social dynamics (Van Dijk, 2009). Under this view, all the language we produce is biased, with the nature of the bias determined by the social relationships between the speaker and target. Inspired by this idea, Govindarajan et al. (2023) proposed a new framing of bias by modeling intergroup relationships (IGR, in-group and out-group) in interpersonal English language tweets, potentially capturing more subtle forms of bias. This framing raises a question: which linguistic features vary systematically in different intergroup contexts?
The Linguistic Intergroup Bias (LIB; Maass et al., 1989; Maass, 1999) hypothesis offers some clues about linguistic features that change with shifting intergroup contexts. LIB speculates that socially desirable in-group behaviors and socially undesirable out-group behaviors are encoded at a higher level of abstraction. The theory, however, relies on a restricted definition of abstractness based solely on predicates, and an ad-hoc analysis of 'social desirability' that doesn't permit large-scale analysis. We can do better by using two well-defined pragmatic features: specificity (Li, 2017) is a pragmatic feature of text that measures the level of detail (similar to the abstract-concrete axis), while affect measures the attitude of a speaker towards their target (Sheng et al., 2019) in an utterance (analogous to social desirability).
Specificity and affect are analogues of the LIB axes of language variation that are easy to annotate and compute. Furthermore, specificity is a more general property than abstractness in the LIB: specificity is a property of the whole sentence rather than just the predicate. Thus, our study focuses on intergroup bias more generally, rather than on the narrow parameterization of the LIB. Similar to the LIB, our formulation of intergroup bias predicts that positive affect in-group utterances and negative affect out-group utterances are encoded with lower specificity (i.e., more generally). Tables 1 and 2 compare the predicted language variation between the LIB and our formulation.
In this work, we perform the first large-scale study of linguistic differences in intergroup bias by analyzing its nature in the corpus of English tweets from Govindarajan et al. (2023), which makes use of naturally occurring labels for in-group vs. out-group. This distinguishes us from existing work on the LIB, which mostly relies on artificial responses from participants in studies rather than natural language use in the wild. To bolster our probing investigation, we also explore it causally, exploiting the quantitative nature of our formulation to study whether a neural model finetuned for IGR prediction uses pragmatic features such as specificity and affect in its decision-making process, through counterfactual probing techniques (Ravfogel et al., 2021).
To summarize our findings, we find a modest positive correlation between affect and IGR in our data, with a positive causation effect as well -making a tweet's affect more positive makes it more likely to be in-group regardless of its specificity. We find no correlation between specificity and IGR in our data. Surprisingly, we discover a causal effect of low specificity on IGR prediction that is uniform across affect, but none for high specificity. We hypothesize that this could be because of damage to the underlying language model, but we leave further investigation to future work. We release our code and data at github.com/venkatasg/intergroup-probing.
Background
Intergroup bias The Linguistic Intergroup Bias (LIB) theory (Maass et al., 1989; Maass, 1999) tries to explain how stereotypes are transmitted and persist in communication by hypothesizing that socially desirable in-group behaviors and socially undesirable out-group behaviors are encoded at a higher level of abstraction. The LIB has been reproduced in various psychological experiments and analyses (Anolli et al., 2006; Gorham, 2006); it has also been used as an indicator of a speaker's prejudicial attitudes (Hippel et al., 1997) and racism (Schnake and Ruscher, 1998). Table 1 describes the LIB asymmetry and the parameters used. As stated earlier, the LIB relies on ad-hoc and hand-coded concepts such as 'social desirability' and the abstractness of predicates (Semin and Fiedler, 1988). Our proposed experiments generalize beyond the LIB by utilizing parameters that are easily computable and are a function of the whole utterance. We also build upon the dataset and work in Govindarajan et al. (2023), which is the first large-scale analysis of intergroup bias in naturally occurring speech.
                 In-group          Out-group
positive affect  low specificity   high specificity
negative affect  high specificity  low specificity
Table 2: Predicted language variation in our more general formulation, using specificity and affect

Specificity Specificity is a pragmatic concept of text that measures the level of detail and involvement of concepts, objects, and events. Louis and Nenkova (2011) introduced the first dataset and model for sentence specificity prediction, and in later work Li (2017) further developed sentence specificity prediction. Affect Prior annotation of the tweets in Govindarajan et al. (2023) captured fine-grained interpersonal emotion using Plutchik's wheel of emotions (Plutchik, 1980, 2001) as a framework. While fine-grained, this approach isn't easily amenable to the experimentation we propose. Inspired by the concept of regard by a speaker towards a demographic in an utterance (Sheng et al., 2019), we introduce annotations for a coarse-grained feature we term affect, which estimates how a speaker feels towards the target they mention in an interpersonal utterance. Table 2 describes the intergroup language variation hypothesized in our experimentation, using specificity and affect. Analogous to the LIB, our hypothesis is that positive affect utterances directed at in-group individuals and negative affect utterances directed at out-group individuals are encoded with lower specificity. Counterfactual probing AlterRep (Ravfogel et al., 2021) is a counterfactual probing method that has been used to probe for syntactic phenomena such as subject-verb number agreement. To our knowledge, ours is the first work probing whether a model learns and uses higher-level pragmatic features like affect and specificity using AlterRep.
Data & Annotations
We use the same dataset of tweets from Govindarajan et al. (2023), which consists of tweets by members of US Congress that @-mention other members in the same tweet, with 'found-supervision' for the IGR labels of every tweet. A tweet is in-group if it is targeted at another member of the same party as the writer of the tweet, else it is out-group.
Affect We build upon the dataset's fine-grained annotations for interpersonal emotion by adding annotations for affect. We presented annotators on Mechanical Turk with tweets from our dataset with the target mention masked (with the placeholder Doe, to minimize potential biases of the annotator), and asked the following questions:
a. How does the writer feel in general about Doe? warmly, coldly, neutral, mixed
b. How does the writer feel in general about Doe's actions/behavior? approval, disapproval, neutral, mixed
Annotators are given the option to select one of the 4 options listed above for each question. For each tweet, we collect annotations from 3 annotators, obtaining an aggregate label for each question by majority vote. We report an inter-annotator agreement score (Fleiss's kappa; Fleiss, 1971) of 0.53 for the first question, and 0.56 for the second.
We derive a binary affect label (±1) from our annotations using a simple rule: If the writer of a tweet is deemed to either feel warmly towards the target, or if they approve of the target's actions, the affect is set to be positive; else it is set to be negative. An analysis of our collected annotations on the data shows that there is a small positive (Pearson's) correlation (r=0.2, p < 0.001) between binary affect and IGR.
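The labeling rule and correlation check translate directly into code; a minimal sketch (the 1/0 encoding for IGR labels and the toy lists are assumptions for illustration):

```python
from scipy.stats import pearsonr

def binary_affect(warmth: str, approval: str) -> int:
    """Positive (+1) if the writer feels warmly toward the target OR
    approves of the target's actions; negative (-1) otherwise."""
    return 1 if (warmth == "warmly" or approval == "approval") else -1

# Correlation with IGR labels; 1 = in-group, 0 = out-group is an assumed
# encoding. The paper reports r = 0.2 (p < 0.001) on the full dataset.
affect = [1, -1, 1, 1, -1, 1]   # toy values
igr = [1, 0, 1, 0, 0, 1]        # toy values
r, p = pearsonr(affect, igr)
```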
Specificity Specificity of the tweets in the dataset are calculated using the specificity prediction tool from Gao et al. (2019). Their specificity predictor is trained on tweets, and uses surface lexical features, as well as syntactic, semantic and distributional features to calculate a specificity score between 1 and 5. We note that on our dataset, there was no correlation between specificity and IGR (r=−0.07, p < 0.001), unlike affect. On further inspection of our dataset, we find that tweets with very high/low specificity scores (gathered by excluding specificity scores between 3 and 4, similar to excluding the middle in Gelman and Park, 2009) have a small but statistically significant negative correlation with IGR labels (r=−0.13, p < 0.001).
Interventions
Model We use BERTweet (Nguyen et al., 2020), a language model pre-trained on 850M English tweets, the same model used in Govindarajan et al. (2023). All intervention experiments are carried out with the best-performing finetuned version of this model, i.e., the model finetuned on the task of predicting IGR labels. The input to the model is only the tweet with no other context, and the target is masked with a placeholder @USER.
We use the model's representations from layer 11 for the INLP procedure, since it shows the most reliable effects. INLP (Ravfogel et al., 2020) works by learning a series of linear classifiers on the representations from an encoder. In each iteration, the embeddings are projected onto the intersection of the nullspaces of the classifiers learned so far, meaning the information used by the existing classifiers is removed from the model. Every subsequent classifier we learn removes more information about the property of interest from the model's representations. We find that higher layers offer a good balance between feature extractability and language model stability (see Appendix D) for our features.
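A minimal sketch of the INLP loop as described above, assuming a binary property label; removing one learned unit direction per iteration approximates projection onto the intersection of nullspaces, since each classifier is trained on already-projected embeddings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp(X, y, n_iters=48):
    """Iterative Nullspace Projection (sketch after Ravfogel et al., 2020).
    X: (N, d) token embeddings; y: binary property labels (e.g., affect).
    Returns the projected embeddings and the learned unit directions."""
    X_proj, ws = X.copy(), []
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=1000).fit(X_proj, y)
        w = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
        # remove the direction the classifier used: project onto its nullspace
        X_proj = X_proj - np.outer(X_proj @ w, w)
        ws.append(w)
    return X_proj, np.stack(ws)
```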
After training INLP, AlterRep uses the classifiers' decision space to project model embeddings into a null component, which contains no information about the feature of interest, and an orthogonal component, which contains all the information about the feature of interest. These two components enable us to perform the counterfactual intervention: pushing model embeddings towards having more, or less, of a particular property. When AlterRep uses INLP classifiers with more iterations, the strength of the intervention is greater. Figure 1 offers an illustration of our intervention experiment on specificity and the expected results.
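A sketch of an AlterRep-style intervention, under the simplifying assumption that the INLP directions are unit-norm and approximately orthogonal; alpha and the push direction correspond to the hyperparameters discussed below, and this is our reading of the procedure rather than the authors' released code.

```python
import numpy as np

def alter_rep(h, ws, alpha=4.0, direction=+1):
    """AlterRep-style counterfactual (sketch after Ravfogel et al., 2021),
    assuming ws holds unit-norm, approximately orthogonal INLP directions.
    direction = +1 pushes h towards the classifiers' positive class."""
    coeffs = ws @ h                      # signed components along each w_i
    h_null = h - ws.T @ coeffs           # null part: property removed
    # rebuild the class-bearing part with a fixed sign, scaled by alpha
    return h_null + alpha * direction * (ws.T @ np.abs(coeffs))
```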
Affect Using the binary affect labels we derived from annotations that we described in § 3.1, we perform interventions to test if the model uses affect causally in its decision. We sample 3 tokens at random from each sentence in the training and validation split of our dataset, train an iterative linear classifier on the model's representations of these tokens using INLP (against the affect label of the tweet), and use the decision boundary learned by the classifier to intervene by pushing model representations to have more positive affect or have more negative affect. We set the hyperparameter α in AlterRep to 4.
Specificity The INLP classifier for specificity is learned using the same procedure as for affect. We train the classifier on only the tweets with high and low specificity scores in our dataset (scores below 3 and above 4; scores taken from the specificity prediction tool of Gao et al. (2019)), excluding the middle to ensure effective learning of the decision boundary (Gelman and Park, 2009). Thus, we are effectively pushing the model representations to have high or low specificity. For both affect and specificity, once the INLP classifier is learned, we perform the intervention on a random subset of 30% of the tokens of a tweet (to control for tweet length). We also report the results of random interventions as a control, where random interventions are generated by sampling from a standard Gaussian instead of using the decision matrix generated by INLP.
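The random-intervention control can be sketched by swapping the learned INLP directions for random unit vectors and reusing the alter_rep sketch above; this is an illustration of the control described in the text, not the authors' exact implementation.

```python
import numpy as np

def random_directions(k, d, seed=0):
    """Control condition: random unit vectors drawn from a standard
    Gaussian, used in place of the learned INLP directions (a sketch)."""
    ws = np.random.default_rng(seed).standard_normal((k, d))
    return ws / np.linalg.norm(ws, axis=1, keepdims=True)

# h_control = alter_rep(h, random_directions(k=16, d=h.shape[0]), alpha=4.0)
```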
Hypotheses We report the percentage of tweets in the test split of our dataset that are predicted to be in-group by our classifier model with increasing strength of the intervention (number of INLP iterations, 0 being pre-intervention). Thus, we have the following hypotheses on the effects of our intervention on the data, based on our intergroup bias framework described in Table 2:
1. Interventions towards positive affect should induce the model to predict low specificity tweets to be in-group and high specificity tweets to be out-group, while interventions towards negative affect should affect the model conversely.
2. Interventions towards higher specificity should induce the model to predict positive affect tweets as out-group and negative affect tweets as in-group, while interventions towards lower specificity should affect the model conversely.
Results & Analysis
The results for the interventions on affect are presented in Figure 2, while those for specificity are presented in Figure 3. Overall, we observe that in both cases, interventions had the same effect on tweets that were annotated with positive affect as they did on tweets with negative affect (and similarly for tweets with high and low specificity), so we only show the percentage of all tweets in the test split classified as in-group.
Affect As Figure 2 shows, pushing the model's representations towards positive affect increases the proportion of tweets predicted as in-group, without the language model being destroyed, as the LM Top-100 accuracy plot in Appendix D shows. Pushing the model's representations towards negative affect shows the inverse effect as expected, although the nature of the drop appears different. We hypothesize that this is because most of the tweets in our dataset (75.2%) have positive affect. An intervention pushing the representations towards negative affect would be slower and require stronger intervention forces, which is borne out in Figure 2.
Specificity Figure 3 shows that pushing model representations towards being more specific has no effect on model behavior and is indistinguishable from the control; but pushing towards lower specificity has a noticeable effect: interventions after 48 iterations of INLP lead to all the data being predicted as in-group. Our hypothesis states that general language is more likely in positive affect in-group contexts; however, we find no difference in the model's behavior on positive versus negative affect tweets, as reported earlier.
Overall, our findings indicate that while the model does use affect in making its decision on the interpersonal group relationship prediction task (albeit uniformly across specificity), it does not use specificity as we had predicted. The discrepancy between high and low specificity interventions could be because the average specificity of tweets in our training data is 3.49 (σ = 0.54), meaning that interventions towards lower specificity act in opposition to most of our data in representation space. But these results require further investigation to be understood better.
Qualitative error analysis Digging into the results further, we wanted to investigate whether the interventions function the way we intended. We analyzed the tokens that the model predicts before and after intervention for example (1). Firstly, finetuning the model for IGR prediction leads to degradation in LM abilities: a vanilla model predicts birthday, anniversary for the masked token in (1), but the finetuned model predicts nonsensical tokens like sworn, opport__ even before any interventions.
Pushing towards negative affect causes it to predict tokens with negative connotations (killing, ass, opposition), but degrades the underlying LM even further. The specificity interventions are especially hard to interpret due to the semantically and syntactically implausible tokens being selected (opport__, mug__, ask__).

(1) Happy <mask> @USER! I got you a new bill: #IIOA

While some of the interventions push the model's predictions into the general lexical space desired (which probably explains the affect intervention results), the lack of contextual fit due to LM degradation may explain the inconclusive results and the lack of interaction between affect and specificity.
Limitations
Future work must look into the generalizability of the results presented here to other domains of language use and to other languages. While we present the utterances as constituting natural speech by one speaker (the congressperson who sent the tweet), it is likely that most congresspeople employ social media teams that help in crafting the language of some of their tweets. However, we believe that for the purpose of interpersonal group membership, the relationship between the speaker (or speakers) and their target(s) would not be affected. Techniques like INLP extract information that is linearly extractable. While we have shown that it is possible to extract and manipulate language information using such simple linear techniques, more complex methods like those proposed by Ravfogel et al. (2022) might be able to manipulate properties that are encoded non-linearly.
The AlterRep procedure, as can be seen in our results and in Ravfogel et al. (2021), is sensitive to parameters like α and the number of INLP iterations. Picking these parameters is tricky and we have done it in a manner that preserves information in the language model. It is possible that a different set of settings not explored here could lead to different results.
Ethics Statement
For the corpus of tweets on which we performed annotations, we downloaded the tweets using the official Twitter API. In accordance with the Twitter Terms of Service, we release tweet IDs and usernames, but not the tweet text itself. Our dataset was built through crowdsourced annotations on Amazon Mechanical Turk. To ensure annotators were paid a fair wage of at least $10 an hour, we paid annotators $0.50 per HIT. Each HIT involved annotating 3 tweets, which we estimate to take on average 3 minutes to complete. | 2023-05-29T01:22:23.507Z | 2023-05-25T00:00:00.000 | {
"year": 2023,
"sha1": "04f0d2b8873f0dc7f086ed82f93091782d59d19f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "04f0d2b8873f0dc7f086ed82f93091782d59d19f",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
258414504 | pes2o/s2orc | v3-fos-license | Realization of Broadband Negative Refractive Index in Terahertz Band by Multilayer Fishnet Metamaterial Approach
|In the present study, a broad negative refractive index (NRI) performance is achieved in the terahertz frequency range (0.6–0.9 THz) through the design of a multi-layered fishnet metamaterial (FMM). Herein, the conventional fishnet structure is modified by smoothing the sharp corners to reduce the electric field concentration and improve NRI. At corner radius r = 30 µm, an effective refractive index of −11.14 is achieved with lower electric field concentration at the corners. A multilayer structure of up to 40 layers is studied to achieve a broad NRI frequency response. The frequency band of the NRI response is improved from 0.034 THz for a single-layer structure to 0.178 THz for a 28-layer structure, almost 6 times the original bandwidth. With the increase in the number of layers, an improvement in NRI and Figure of Merit (FOM) is observed, and maximum NRI and FOM values of −87.5 and 12.67 are achieved at 28 layers. This multilayer broadband design can surpass the tunable response of available electro-optic materials.
INTRODUCTION
For the last two decades, the negative refractive index (NRI) has fascinated and motivated researchers to develop technologies and devices that are simpler and more efficient for the current fast-paced world. Applications like the perfect lens [1, 2] and the invisibility cloak [3] are made possible with these NRI materials. In the real world, many applications demand stable and efficient performance of devices over a wide frequency range. The wavelength-dependent refractive index of optical materials is a major limitation to developing broadband optical devices. A uniform refractive index over a wide frequency band can allow researchers to extend phenomena like superlensing and cloaking to broader wavelengths. To obtain the NRI response over a broad frequency range, special geometries and multilayer structures have been used by researchers [4–11]; however, these structures have limitations which will be discussed ahead. A special nano-scale split-ring-type resonator was used by Atre et al. [4] to show a broad 250 nm wavelength range of NRI (n′ = −1.9) in the visible and near-infrared regions. A polarization-independent 2-layered metamaterial exhibiting NRI in the microwave region, with a refractive index of −2.66 at 14.19 GHz over a 0.51 GHz range, was studied by Aydin et al. [12]. Another broadband NRI plasmonic metamaterial, formed by the combination of cut wires and dimers, exhibited approximately 10% bandwidth around 14.4 GHz [9]. Horizontal and vertical hyperbolic stacking of metal-dielectric layers [6, 13], as well as optically [7] and thermally [14] tunable approaches, have also been studied to develop broadband NRI structures. Recently, terahertz waves have also shown great potential in communications, imaging and security [15–17], sensors [18, 19], and energy harvesting [20, 21] applications. In the terahertz range, a broad response of almost 0.5 THz (from 0.4 THz to 0.9 THz) was achieved by thermally tuning the substrate material [14]. However, this structure shows an NRI response only under the influence of heat. Apart from this, many reported structures in the terahertz region are single/dual narrow bands or require special designs that are complicated to fabricate [22, 23]. In this work, we attempt to broaden the resonance of a single-layer fishnet structure by stacking a number of fishnet-shaped metal and dielectric layers and utilizing multiple resonances to broaden the NRI response. The goal is to develop a broadband NRI metamaterial with a simple geometry that is easy to fabricate and works at room temperature. By using such structures, a flat perfect lens (considering the NRI response) based on a multi-layer array of subwavelength cells with broadband performance could be realized.
We particularly select the PVDF-TrFE-CTFE (terpolymer) material because it has the potential to make the device tunable, so the tunability response can be compared to the broadband response of the proposed design. Since applying an electric potential requires metal electrodes on both sides of the material, it is a suitable candidate for the Metal-Insulator-Metal (MIM) structure. In our work, first, we design a simple fishnet structure with a resonance frequency between 0.6 and 0.9 THz, then modify this structure by smoothing the sharp corners to reduce the electric field concentration and improve the NRI response. Second, we design a multi-layered fishnet structure to achieve broad NRI and low loss. The effect of the number of layers on transmission, refractive index, figure of merit (FOM), and absorbance is studied. Some optimum operating conditions concerning FOM and refractive index are suggested in the discussion section.
METHODOLOGY
We begin the design process by first performing experiments to measure the refractive index and extinction coefficient of the PVDF terpolymer in the range of 0.2–1 THz, because no such information is currently available. A THz time-domain spectroscopy system (THz-TDS; Batop Optoelectronics) is used to derive these properties from the S21 measurement. The results are plotted in Supplementary data Fig. 1. From the measured n and k, the real and imaginary parts of the permittivity are calculated using the relations ε′ = n² − k² and ε″ = 2nk. Silver is used as the metal layer, described by a Drude model with plasma frequency equal to 14.602 × 10^15 rad/s and collision frequency equal to 13.5 × 10^12 /s. All the numerical simulations are done using the commercially available SIMULIA CST Studio Suite 2022. Periodic boundary conditions are applied to a unit cell along the X and Y directions during the simulation with the assumption of an infinite metamaterial array, where the wave travels in the −Z direction. To simulate a 3 × 3 array, Et = 0 at the +Y and −Y boundaries and Ht = 0 at the +X and −X boundaries are applied as boundary conditions.
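For illustration, a short sketch of the two conversions mentioned here: the n,k-to-permittivity relations and the Drude model with the quoted silver parameters. The ε∞ = 1 form of the Drude model is our assumption, as the text does not state it explicitly.

```python
import numpy as np

def permittivity_from_nk(n, k):
    """Complex permittivity from measured optical constants:
    eps' = n^2 - k^2, eps'' = 2nk (the relations used in the text)."""
    return n**2 - k**2, 2 * n * k

def drude_eps(omega, wp=14.602e15, gamma=13.5e12):
    """Drude permittivity of silver with the quoted parameters,
    assuming eps_inf = 1: eps(w) = 1 - wp^2 / (w^2 + i*gamma*w)."""
    return 1 - wp**2 / (omega**2 + 1j * gamma * omega)

f = np.linspace(0.2e12, 1.0e12, 5)       # 0.2-1 THz sweep
print(drude_eps(2 * np.pi * f))          # strongly negative real part
```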
Estimating the complex refractive index, permittivity, and permeability from S-parameters is a well-established method that has been described by earlier researchers [24–27]. To begin, the complex impedance z is estimated from the S11 and S21 parameters, where k0 is the wavevector, d is the effective medium thickness, and m is the branch index due to the periodicity of the logarithm, taken as 0 [25]. Afterwards, the permittivity (ε) and permeability (µ) are calculated from n and z.
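The retrieval equations themselves did not survive extraction here; the sketch below implements the standard relations this family of methods uses (e.g., Chen et al., Phys. Rev. E, 2004), which we take equations (1)–(3) to refer to, with simplified branch handling (fixed m and the principal logarithm).

```python
import numpy as np

def retrieve(S11, S21, k0, d, m=0):
    """Standard S-parameter retrieval (sketch after Chen et al., 2004);
    branch handling is simplified (fixed m, principal logarithm), so
    electrically thick samples need extra care. Inputs are complex."""
    z = np.sqrt(((1 + S11)**2 - S21**2) / ((1 - S11)**2 - S21**2))
    z = np.where(z.real < 0, -z, z)               # passivity: Re(z) >= 0
    e_inkd = S21 / (1 - S11 * (z - 1) / (z + 1))  # = exp(i n k0 d)
    log_e = np.log(e_inkd)
    n = (log_e.imag + 2 * np.pi * m - 1j * log_e.real) / (k0 * d)
    return n, z, n / z, n * z                     # n, z, eps, mu
```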
RESULTS AND DISCUSSION
First, we designed a conventional fishnet structure to exhibit a double-negative (DNG) response around 0.72 THz and compared its performance with the modified (rounded-corner) fishnet structure. We performed smoothing of the sharp corners of the conventional FMM by adding a radius at the corners. The motive here is to reduce the electric field concentration at the corners and to reduce the difficulties that arise during fabrication in achieving such sharp corners [28]. The effect of different corner radii on the impedance and refractive index is plotted in Fig. 2. Due to the rounded corners, the impedance is reduced, with a lower concentration of electric charges on the surface during resonance. Fig. 3(a) shows the E-field distribution on the surface of the fishnet cell (metal layer) for sharp and rounded corners at resonance on the same scale. The E-field concentration at the corner decreases with an increase in radius. It is easy to notice that after 30 µm, the maximum E-field concentration at the corners and n′ saturate. Here, the impedance is given as Z ∝ √(L/C) (L and C are inductance and capacitance, respectively). If the input field (V) is the same and the charge concentration (Q) decreases, then C decreases (as CV = Q), and the impedance will increase. However, there is an increase in the rotating field along the thickness of the fishnet structure with the addition of the corner radius, as depicted in Fig. 3(b) (observing the colour and size of the arrows), meaning that the inductance that opposes the current flowing through the structure is reduced. This field increases up to r = 30 µm and saturates afterwards. With the reduction in inductance, the impedance decreases. The reduction in inductance must dominate the effect of the reduced capacitance, due to which there is a net decrease in impedance with the increase in corner radius.
Since ω = 1/√(LC), the reduction in L and C shifts the resonance frequency toward a higher value. As presented in (3), the estimation of the refractive index depends entirely on the magnitude and phase of the S-parameters, as well as the impedance characteristics. Hence, due to the combined effect of these parameters, with an increase in radius, −n′ increases, with the resonance shifting toward a higher frequency. At 0.76 THz for r = 30 µm, n′ is −11.43, compared with the index of the sharp-cornered structure (−9.1 at 0.72 THz), along with a lower E-field concentration. Meanwhile, beyond 30 µm the shape of the square patch is disrupted. Therefore, for all further studies, the fishnet cell with r = 30 µm is considered.
One might argue for simply using a circular-patch fishnet instead of rounding the corners. However, we compared our patch design with r = 30 µm to a circular patch, keeping the arm width and patch size similar, as shown in Fig. 4(a). No resonance is seen within the given range, due to which no NRI response is exhibited by the circular-patch fishnet design; the resonance exhibited by an equivalent circular fishnet structure lies at a frequency above 1 THz (please refer to Figs. 2–4 in the supplementary data). The transmission (S21) for the multilayer structure is plotted in Fig. 5. Just before the resonance, S21 is extremely low for structures with more than one layer. For a single layer, the maximum S21 is ∼60% at the resonance, and it further decreases with the addition of layers. The dispersive behaviour of the polymer material and multiple reflections between metal electrodes could be major factors reducing the transmission of light through the structure [6, 28].
It can be observed that the resonance peak is broadened, instead of a sudden jump in the S21 curve, with the addition of layers. As seen from Fig. 5, the single peak in the S21 curve of the 1-layer structure is transformed into multiple peaks at the resonance, contributed by each layer in the structure. These multiple peaks broaden the frequency response with a reduction in the amplitude of light transmission. A similar broadband response is expected in the S21 phase curve. As the values of the refractive indices are estimated through the amplitude and phase of the S-parameters, the ultimate effect is seen in the n′ curve as broad anomalous dispersion (Fig. 6(a)). Table 1 summarizes the estimated results for the multilayer structure: min n′ (or max −n′, for our understanding) and the corresponding frequency (f_nr), the frequency range within which −n′ occurs (∆f_nr), the maximum FOM (where FOM = n′/n″), and n′ at the frequency corresponding to the maximum FOM (f_FM). A maximum NRI band ∆f_nr of ∼0.18 THz (∼24%) is achieved at 28 layers, 6 times that of the single-layer FMM.
Here, the value of NRI is also improved, from −11.14 for a single layer to −87.5 for the 28-layer structure. Our estimation approach is purely based on the finite element method for S-parameter retrieval, expressed by (1)–(3). Herein, along with the multiple resonances due to the multilayer structure, the effective thickness (d) of the structure in (3) also increases with the addition of layers. This d is inversely proportional to the effective refractive index and is calculated by adding the thicknesses of the substrate and metal layers. With an increase in the number of layers, we observe that the effective thickness of the 28-layered structure becomes almost equal to the wavelength corresponding to ∼0.75 THz, due to which the constructive effect of these layers might provide a greater n′. For 35 layers, this effective thickness is not close to the resonance wavelength, and the n′ value starts diminishing. In the same way, the 40-layered structure should have a lower n′ than the 35-layer structure; however, the sudden reversal of sign before 0.7 THz might indicate an estimation error in the computation. According to the effective medium theory (EMT), the estimated effective refractive index should be independent of thickness in the direction of propagation of electromagnetic (EM) waves [24]. A recent work by Liu et al. [29] demonstrated the limitations of EMT in estimating effective parameters for multilayer metamaterials. They observed significant discrepancies between EMT and Finite Element Method (FEM) results for multilayer structures, with the deviation increasing with the number of layers. Additionally, the low spacing between metallic layers (dielectric thickness) contributes to this error, as EMT does not account for the coupling between these layers. Consequently, the reliability of the S-parameter retrieval method for estimating effective parameters in multilayer structures with multiple resonance phenomena may be questioned. Despite these limitations, the goal of this study was to showcase a broadband negative refractive index (NRI), and the multilayer approach employed herein demonstrates such performance. The selected electro-active PVDF terpolymer substrate could serve the tunable response, as the reported change in refractive index (∆n) due to an applied electric field is equal to 0.025 in the 3–5 µm wavelength range [30]. For now, we assume the same ∆n of the PVDF terpolymer substrate to check its frequency shift in the THz region. In addition, we also use a ∆n change of up to 0.2 in the THz region, as electro-optic materials like liquid crystals can exhibit such a ∆n change [28]. We also consider a nonrealistic, far-fetched ∆n = 0.5 for the sake of comparison of the frequency response with our multilayer structure. Fig. 6(b) depicts the comparison of the shift in the estimated n′ due to ∆n equal to 0.025, 0.2, and 0.5, respectively. As shown, the maximum shift, due to ∆n = 0.5, is ∼0.146 THz; however, the 28-layered structure exhibits a broad response of more than 0.182 THz, which surpasses the frequency response for all the ∆n values. Hence, this approach can eliminate the use of external aid or active materials to achieve a negative index over wider frequency ranges.
The quality of performance of the structure is usually measured by the FOM, and an ideal structure shows a high FOM, meaning lower loss in the structure. The maximum FOM achieved within the NRI band increases with a greater number of layers. For 28 layers, the FOM is 12.67, and with the further addition of layers, the FOM decreases (as shown in Fig. 7(a)). For 40 layers, this FOM value drops to less than 0.1. Extremely high loss in the material with the addition of layers is responsible for such behaviour. In Table 1, all the frequencies corresponding to the maximum FOM (f_FM) are higher than 0.76 THz. For 28 layers, the maximum FOM occurs at 0.780 THz, whereas the NRI at this frequency is −30.62. Hence, this structure can be operated around the frequency corresponding to the maximum FOM to exhibit NRI. As listed in Table 1, optimum conditions for structures with different numbers of layers can be identified to obtain NRI with a reasonable FOM.

Several earlier works have targeted broadband NRI in the terahertz regime. In the case of [7], a 3D Split Ring Resonator (SRR) is inserted within the polyimide substrate, and VO2 is used in the gap between the top and bottom layers. Herein, by tuning the VO2 through temperature control, a tunable broadband metamaterial is achieved. However, such a structure requires special materials like VO2, and the performance of the structure deteriorates at room temperature. The work of [14, 31] demands the fabrication of vertically standing structures that are complicated to fabricate. Zhang et al. [32] depicted a dual-band NRI response in the THz range with a SiO2 substrate. However, all these studies consider a dispersion-free dielectric or semiconductor substrate material, which can be questionable in terms of the accuracy and reliability of the results. In contrast, we attempted to measure the dispersive properties of the substrate material in the desired frequency region.
The achieved refractive index is also much higher than in other works. Our recent work [33] highlights the
Figure 1. (a) Fishnet unit cell and equivalent LC circuit (green and grey represent dielectric and metal, respectively). (b) Simulated results of S-parameters for the unit cell (resonance near 0.72 THz), S21 phase, and real part of the estimated refractive index. (c) Estimated effective permittivity and permeability of the unit-cell FMM structure showing double-negative performance.
Figure 2. Modified FMM study. Effect of corner radius (in µm) on the (a) real and (b) imaginary parts of the impedance, and (c), (d) the refractive index.
Figure 3. Modified FMM study. (a) E-field distribution on the surface of the fishnet cell at different corner radii (the frequency mentioned corresponds to minimum n′); all plots are presented on the same scale. (b) E-field vector plot along the thickness of the structure; all plots are presented on the same scale. The yellow arrow shows the direction of wave propagation; the cutting plane (YZ) is depicted in the bottom-right corner.
Furthermore, to compare the left-handed material (LHM) response of the FMM unit cell with a finite array, a 3 × 3 single-layer (m-d-m) array is modelled. The estimated n′ for the unit cell and the array shows remarkably close performance, as shown in Fig. 4(b).
Figure 4. (a) S-parameters and n′ comparison between our rounded-corner fishnet patch (r = 30 µm) and a circular patch. (b) S-parameters for the FMM unit cell vs. a 3 × 3 single-layer finite array of the rounded-corner design.
Figure 5. Effect of the number of layers on transmission through the structure. The inset shows the modified multilayer fishnet metamaterial.
Figure 6. (a) Estimated n′ for different numbers of layers. (b) Comparison of the tunable design vs. the current broadband design.
Figure 7. (a) Effect of the addition of layers on the FOM. (b) Absorbance in the structure with different numbers of layers.
Table 1. Summary of results.
"year": 2023,
"sha1": "90a2e2b2902f385275f93169561f2430e1c665c5",
"oa_license": null,
"oa_url": "https://www.jpier.org/ac_api/download.php?id=23022302",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a3a64f178367a3d194ed9a27510f0518371e0b69",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
268540035 | pes2o/s2orc | v3-fos-license | All tidal wetlands are blue carbon ecosystems
Abstract Managing coastal wetlands is one of the most promising activities to reduce atmospheric greenhouse gases, and it also contributes to meeting the United Nations Sustainable Development Goals. One of the options is through blue carbon projects, in which mangroves, saltmarshes, and seagrass are managed to increase carbon sequestration and reduce greenhouse gas emissions. However, other tidal wetlands align with the characteristics of blue carbon. These wetlands are called tidal freshwater wetlands in the United States, supratidal wetlands in Australia, transitional forests in Southeast Asia, and estuarine forests in South Africa. They have similar or larger potential for atmospheric carbon sequestration and emission reductions than the currently considered blue carbon ecosystems and have been highly exploited. In the present article, we suggest that all wetlands directly or indirectly influenced by tides should be considered blue carbon. Their protection and restoration through carbon offsets could reduce emissions while providing multiple cobenefits, including biodiversity.
Reducing atmospheric greenhouse gas (GHG) concentrations and adapting to climate change have become one of humanity's biggest challenges. This global effort requires, primarily, reducing fossil fuel emissions and, secondarily, implementing nature-based solutions that can help offset peaks in projected global warming (Matthews et al. 2022). The Clean Development Mechanism, part of the United Nations Framework Convention on Climate Change, allows countries to purchase carbon credits for GHG emissions reduction projects to reach their targets under the Paris Agreement. Of the projects that can reduce emissions, restoring or managing coastal wetlands is one of the more promising activities, providing multiple cobenefits and significantly contributing to meeting the United Nations Sustainable Development Goals (Smith et al. 2019). Management of blue carbon is an integral part of carbon offset strategies but is usually limited to specific wetland types, including mangroves, marshes, and seagrass (Mcleod et al. 2011).
Blue carbon ecosystems can be defined following the criteria of Lovelock and Duarte (2019) as marine or coastal wetlands that have long-term storage of fixed carbon dioxide; that can remove GHGs; that have been lost or degraded because of anthropogenic impacts; that can be managed to enhance carbon stocks, reduce GHG emissions, and facilitate habitat persistence; that have the potential for interventions with no or minimal social and environmental harm; and that can be managed in alignment with policies for mitigation of and adaptation to climate change. Mangroves, saltmarshes, and seagrass meet all the conditions, except that, in some cases, management interventions could have negative social and environmental consequences, for instance, when competing with aquaculture that provides income to local communities or when their restoration results in the loss of freshwater wetlands. Other ecosystems, such as coral reefs, do not satisfy this blue carbon definition, because their carbon balance results in the net production of carbon dioxide through calcification.
Blue carbon projects are expanding exponentially worldwide; however, the demand for carbon credits greatly exceeds the supply. Currently, most blue carbon projects are conducted in mangrove forests. However, many other wetlands have characteristics that strongly align with the blue carbon definition and could support expanding global initiatives across large and management-actionable areas. The previous data gaps on carbon dynamics in these ecosystems are closing rapidly. In this review, we explore whether tidal wetlands other than mangroves, saltmarshes, and seagrasses align with the current definition of blue carbon. To achieve this goal, we first revisit the biochemical and physical characteristics of wetlands that can be defined as blue carbon. Second, we provide a compilation of published and unpublished data on the capacity of these previously unaccounted tidal wetlands to sequester carbon while having relatively low GHG emissions. Third, we investigate the threats to and management options for these tidal wetlands. Finally, we identify knowledge gaps and describe future opportunities. We propose that protecting and restoring all tidal wetlands could reduce GHG emissions while providing multiple cobenefits, including biodiversity.
Definition and classification
In the present article, we define blue carbon ecosystems on the basis of the biochemical and physical attributes that support the processes that result in climate change mitigation. These processes include long-term organic carbon storage, mostly in their soils, and low GHG emissions; the former results from high primary productivity and low soil decomposition rates in waterlogged soils. The latter is mainly caused by sulphates in marine water, which outcompete carbon as an electron acceptor, inhibiting methanogenesis. On the basis of these criteria, we suggest the definition of blue carbon include the following language: "ecosystems that are influenced by marine waters that fix carbon dioxide and that store and accumulate it as organic carbon. They are bounded by the highest levels of tidal inundation at the terrestrial edge and by the limits of the photic zone at the marine edge." This definition includes forested wetlands and those dominated by shrubs, grass, sedges, or microalgal mats. It excludes temporary carbon storage, such as macroalgal beds that do not sequester carbon in their sediments, and ancient peat formations, which are not currently fixing and accumulating carbon. The deep ocean is also excluded from this definition, because carbon is not fixed within these ecosystems (except for localized chemotrophic communities) but is transported from elsewhere.
Tidal wetlands are typically distributed along inundation and salinity gradients. Seagrass is found within the lower end of tidal inundation and can be permanently flooded. Mangroves and saltmarshes are usually found slightly below or above mean sea level. Above mean sea level and extending to the highest tide levels, other ecosystems can occur where geomorphology, climatic conditions, and water availability allow (figure 1). These wetlands are influenced by tides; their soils are directly flooded by the highest tides, experience changes in groundwater levels during tidal fluctuations, or are affected indirectly through ocean waves or marine spray.
Tidal wetlands at the highest end of the intertidal zone are dominated by grasses, sedges, or woody plants forming shrubs, thickets, or tall forests, usually of a single or a few tree species. They have primarily organic unconsolidated soils, with different compositions and grain sizes. Their inundation can be frequent or infrequent, regular or sporadic, depending on their location within the landscape and their association with river channels. Their inundation regime and, consequently, their soil salinity depend on the tidal amplitude, the occurrence of storm surges, groundwater flows, and wind and wave intensity. Many of these tidal wetlands are oligohaline (0.5-5 practical salinity units).
We provide a classification of types that can be applied to tidal wetlands globally following the key attributes of blue carbon ecosystems (figure 2). These attributes are divided into four themes: climate, water, biota, and substrate. Within each theme, there are attributes subdivided into categories. For instance, for the water theme, one attribute is tidal inundation, with intertidal and subtidal being categories. Other attributes within the water theme are the intertidal immersion period (or the frequency of tidal inundation) and salinity. For biota, the attribute essential for blue carbon is structural flora, which includes trees, shrubs, sedges, and microalgal mats (e.g., cyanobacteria mats). The substrate attributes include the dominant grain size, soil salinity, and the composition of the sediment, whether organic, silicious, calcareous, or mineral. For example, coastal Melaleuca forests in northern Australia are tropical oligohaline wetlands, which are infrequently inundated, are dominated by trees, and have a substrate that is saline, organic, or mineral and composed of mostly silt and clay.
Geographical location
Tidal wetlands, other than mangroves and saltmarshes, have been reported worldwide. Some of these are called supratidal forests in Australia (Iram et al. 2022); tidal freshwater wetlands, tidal swamps, tidal forested wetlands, or brackish tidal wetlands in the United States (Conner et al. 2007, Duberstein et al. 2014); transitional forests in Southeast Asia (Aslan et al. 2016); and coastal wetlands or swamp forests within estuary boundaries in South Africa (figure 3, table 1; Van Deventer et al. 2021, Riddin and Adams 2022).
In Southeast Asia and throughout the Pacific, common tidal wetlands at the terrestrial edge are forests of Melaleuca or paper bark tree swamps (CABI 2019). For instance, extensive forests of Melaleuca viridiflora or Melaleuca quinquenervia cover the coast of Australia. Similarly, Casuarina forests are found on the southern Australian coast. Melaleuca and Casuarina trees tolerate acidic and oligohaline conditions. They typically form dense monospecific stands of fast-growing trees, and both have become invasive species in many regions outside their native ranges. For instance, in the Everglades of Florida, M. quinquenervia currently occupies an area larger than that of native mangroves (Turner et al. 1998). Casuarina is also widely distributed, because it has been introduced into many countries for wood production. It has become invasive in many coastal wetlands, such as in South Africa, Brazil, India, and the Caribbean (CABI 2019).
In the southeastern United States, tidal freshwater forested wetlands are influenced by river flows and flooded by spring tides during high river stages, but not during low river stages or neap tides. These tidal wetlands include forests of bald cypress (Taxodium distichum), shrubs of twinberry (Lonicera involucrata), and mixed bottomland hardwood forests (Nyssa, Fraxinus, Alnus; Conner et al. 2007, Duberstein et al. 2014). In the northwestern United States, tree-, grass-, and sedge-dominated tidal wetlands may be brackish, with salinities ranging from fresh to mesohaline (5-18 practical salinity units; Brophy et al. 2011). These wetlands are located from just above the mean higher high water up to the highest tide level (Brophy et al. 2011).
Forests in southern Africa can be found within the estuarine functional zone, which has been classified as the habitat below the 5-meter topographical contour (Veldkornet et al. 2015, Adams et al. 2016). They are associated with freshwater lakes and coastal drainage areas extending from the subtropical to tropical areas of South Africa to Mozambique. These tidal wetlands can be found at the fresher upper reaches of estuaries but typically occur in temporarily closed estuaries, where the connection to the sea can be interrupted by sandbars forming across the mouth during low river flow under high coastal wave conditions (Van Niekerk et al. 2020). These small estuaries can be perched above normal tidal levels, resulting in brackish conditions because of little tidal exchange. These tidal wetlands are dominated by Hibiscus tiliaceus (lagoon hibiscus) and Barringtonia racemosa (powder puff tree or freshwater swamp tree; figure 3). The endemic Raphia australis (raffia, giant palm) can also form tidal wetlands in Maputaland, South Africa, at Kosi Bay and the Siyaya estuaries.
There are also various reports of tidal wetland forests in Indonesia that are not mangroves. In Papua, transitional forests, or those fringing mangroves in the landward zone, cover large areas (Aslan et al. 2016). In Central Kalimantan, riverine mangrove forests are bordered by wetlands in the upper tidal reaches of the river dominated by Ganua motleyana (Sapotaceae) and Gluta walichii (Anacardiaceae), among 49 other tree species (Murdiyarso et al. 2009).
In New Zealand, forests located inland of tidal marshes were initially dominated by kānuka (Kunzea ericoides) and mānuka (Leptospermum scoparium). Other unique tidal wetlands occur in tropical America, such as the zapotonales (Pachira aquatica; Adame et al. 2015) of the Mexican Pacific, the Mora forests (Mora oleifera) of Central and South America (Palacios Peñaranda et al. 2019), and the mixed mangrove and rainforest of the Amazon River in Brazil (table 1; Bernardino et al. 2022). Many other tidal wetlands worldwide have probably not been considered blue carbon, and these could include significant areas.
All tidal wetlands are important for carbon storage but also for supporting biodiversity (figure 4). For instance, in Melaleuca forests in Australia, at least 642 species of mammals, birds, fungi, amphibians, reptiles, and plants have been recorded (Wetland Info 2024). These include the endangered spectacled flying fox (Pteropus conspicillatus), the eastern curlew (Numenius madagascariensis), and the critically endangered orange-bellied parrot (Neophema chrysogaster). In the Mekong Delta, Vietnam, 159 bird species have been recorded in Melaleuca forests, 15 of them listed on global and regional endangered species lists (Tran and Matusch 2017). In peat swamps in Southeast Asia, 2236 species of plants, mammals, reptiles, amphibians, and fish have been recorded, with 252 species restricted to these habitats (Posa et al. 2011). In some coastal peat swamps, such as those in Borneo, orangutans (Pongo pygmaeus) and proboscis monkeys (Nasalis larvatus) are commonly found (Posa et al. 2011). Forested tidal wetlands of the northwestern United States provide foraging habitats for juvenile salmon (Oncorhynchus tshawytscha; Davis et al. 2019). Therefore, all tidal wetlands support national and global biodiversity, and they share characteristics closely aligned with what is considered blue carbon, as outlined in the following sections.
Tidal wetlands have long-term storage of fixed carbon dioxide
Many tidal wetlands have carbon stocks comparable to or exceeding those of traditionally considered blue carbon ecosystems (figure 5). For instance, in Australia, Melaleuca forests have aboveground carbon stocks between 57 and 430 megagrams (Mg) of carbon per hectare (ha) and soil carbon stocks between 23 and 230 Mg of carbon per ha (0-50 centimeters [cm] deep; Tran et al. 2015, Tran and Dargusch 2016, Adame et al. 2019b). Casuarina forests have 143 Mg of carbon per ha, with a standard error of 61, for aboveground and 241 Mg of carbon per ha, with a standard error of 136, for belowground stocks (at a 1-meter depth; Kelleway et al. 2022). Cypress swamps and mixed (Nyssa, Fraxinus, Alnus) forest stands have 115 and 560 Mg of carbon per ha for aboveground and belowground stocks, respectively, with respective standard errors of 20 and 125 (1.4-meter depth; Krauss et al. 2018). However, some sites can reach at least 800 Mg of carbon per ha of belowground stocks (Krauss et al. 2018). High carbon stocks have also been measured in forests of Picea and P. aquatica, with 95.1 and 220 Mg of carbon per ha for aboveground and 614 and 844 Mg of carbon per ha for belowground carbon stocks, respectively (Kauffman et al. 2020b).
The highest carbon stocks have been measured in the peat swamps of Indonesia, with 168 Mg of carbon per ha for aboveground, with a standard error of 31, and 1526 Mg of carbon per ha for belowground carbon stocks, with a standard error of 126 (Novita et al. 2020), and total ecosystem carbon stocks ranging from 558 to 1213 Mg of carbon per ha (Murdiyarso et al. 2009). In Borneo, coastal peat swamps have 168 and 1526 Mg of carbon per ha for above- and belowground carbon stocks, respectively, with respective standard errors of 31 and 126 (Saragi-Sasmito et al. 2019). Therefore, the total ecosystem carbon stocks of these tidal wetlands range between 358 and 1694 Mg of carbon per ha, well within the ranges of mangroves (79-2208 Mg of carbon per ha; Hutchison et al. 2014, Atwood et al. 2017), seagrasses (9-830 Mg of carbon per ha; Fourqurean et al. 2012), and saltmarshes (100-800 Mg of carbon per ha; figure 5; Chmura et al. 2003).
The depth of organic matter (42 to at least 300 cm) in many of these tidal wetlands is comparable to that of mangroves, which have a mean of 216 cm, ranging from 22 to 600 cm (Kauffman et al. 2020a). The shallowest organic matter layers are found in Melaleuca forests in tropical Australia, with mean values of 41.7 cm, with a standard error of 4.4 (Adame et al. 2019b). In the peat swamps of Indonesia, organic matter is deeper, around 200 cm (Murdiyarso et al. 2009). Comparatively, in Picea and sago forests, soil depth ranges from 100 to at least 300 cm (Jones et al. 2017, Kauffman et al. 2020b).
The processes responsible for accumulating and storing soil carbon in all tidal wetlands are similar. These include relatively high productivity (Finlayson et al. 1993, Srivastava and Ambasht 1996) and slow organic matter decomposition (Wallis and Raulings 2011, Middleton 2020), which are favored in waterlogged soils (Spivak et al. 2019). For instance, the organic carbon concentration of Melaleuca forest soil is typically associated with its water content. Therefore, links between water content and organic soil carbon reflect seasonal inundation and the depth of the groundwater table. In addition, high carbon stocks in many forested tidal wetlands result from the layering of local production and mineral carbon transported from the catchment (Noe et al. 2016, Jones et al. 2017, Adame and Reef 2020). The sequestered carbon in the soil of these tidal forested wetlands is relatively stable and, if undisturbed, may persist for centuries (Adame et al. 2019b).
Carbon preservation in the soil of tidal wetlands is favored where litter and roots are recalcitrant and have high carbon-to-nitrogen and lignin-to-nitrogen ratios (Srivastava and Ambasht 1996, Stagg et al. 2017). Preservation varies with the type of organic matter and inundation. For labile organic matter, frequent inundation can increase decomposition through leaching; for recalcitrant organic matter, increased inundation facilitates carbon accumulation due to slowed anaerobic decomposition (Wallis and Raulings 2011, Stagg et al. 2017). Decomposition can also be enhanced by fluctuating water regimes compared with permanently waterlogged conditions (Ozalp et al. 2007). In cypress and mixed hardwood forests (Nyssa, Fraxinus, Alnus), the decomposition of litter and roots is significantly reduced by flooding and the incursion of saline water (Weston et al. 2006).
The net capacity of tidal wetlands to sequester carbon can only be realized when the decomposition of organic matter is lower than its accumulation. This is common for many tidal wetlands, where decomposition rates are low (Trevathan-Tackett et al. 2021). For example, in forests of T. distichum, Pachira aquatica, Melaleuca, and Casuarina spp., soil decay rates range between 0.001 and 0.008 per day. These values are lower than those for seagrasses (0.0002-0.03 per day; Trevathan-Tackett et al. 2020) and for mangroves and saltmarsh (mean ranges of 0.015-0.06 per day; Middleton 2020, Trevathan-Tackett et al. 2021, Ouyang et al. 2023) (figure 6), highlighting their capacity for soil carbon sequestration.
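These decay constants imply first-order (exponential) decay, the standard model for litterbag studies; a short worked example, with illustrative durations, makes the contrast concrete.

```python
import numpy as np

def mass_remaining(k_per_day, days):
    """First-order litterbag decay: m(t)/m0 = exp(-k t)."""
    return np.exp(-k_per_day * days)

# After one year, litter at the slow rates quoted for these tidal forests
# retains most of its mass, unlike at typical mangrove/saltmarsh rates:
print(mass_remaining(0.001, 365))   # ~0.69 of initial mass remaining
print(mass_remaining(0.015, 365))   # ~0.004 of initial mass remaining
```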
Tidal wetlands may have significant GHG removals
The accumulation of soil organic carbon in many tidal wetlands is similar to or exceeds that in blue carbon ecosystems. In Australia, Melaleuca wetlands accumulate soil organic carbon at a rate of 0.55 Mg of carbon per ha per year, with a standard error of 0.05. Cypress and mixed forests (Nyssa, Fraxinus, Alnus) have long-term (millennial) accumulation rates of between 0.07 and 3.4 Mg of carbon per ha per year (Krauss et al. 2018, Adame et al. 2019b) and decadal rates of between 1.1 and 1.8 Mg of carbon per ha per year (1963-2012; Ensign et al. 2015). These values are close to or exceed those of saltmarshes and mangroves (e.g., 0.60 and 0.55 Mg of carbon per ha per year, respectively; Chmura et al. 2003, Alongi 2020). Similar grass-dominated wetlands have higher carbon accumulation rates of 1.24 Mg of carbon per ha per year compared with saltmarshes at 0.040 Mg of carbon per ha per year (Loomis and Craft 2010). Tree uptake and carbon dioxide fixation resulting in wood accumulation occur at rates similar to those of mangroves, which average 4.0 Mg of carbon per ha per year, with a standard error of 0.2 (Xiong et al. 2019), compared with rates in Melaleuca and cypress or mixed forests (Nyssa, Fraxinus, Alnus) with mean values of 5.0 and 1.1 Mg of carbon per ha per year, respectively, and with respective standard errors of 2.1 and 0.3 (Krauss et al. 2018, Adame et al. 2019b).
GHG emissions (atmospheric plus lateral flux) must be lower than carbon sequestration for an ecosystem to be a net carbon sink. To be a net radiative carbon sink, the summed radiative forcing of carbon dioxide and methane, a potent GHG, cannot exceed their sink potential. The latter is likely not the case in many freshwater wetlands, where methane emissions counteract the atmospheric cooling effect of carbon dioxide fixation (Hemes et al. 2018). Methane is produced by methanogens during the anaerobic breakdown of organic matter and is commonly emitted from wetland soils (Al-Haj and Fulweiler 2020). However, in many tidal wetlands at or near maximum tide levels, where the soils are saline, the methane fluxes have, thus far, differed from freshwater wetlands, having lower methane emissions (Holm et al. 2016). For instance, in oligohaline Melaleuca forests, soil GHG emissions are lower than in neighboring mangroves and those reported from other forests around the globe (table 2).
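The net radiative balance argument can be made concrete with a back-of-envelope CO2-equivalent calculation; the GWP value and example fluxes below are our illustrative assumptions (IPCC AR6 gives roughly 27 for non-fossil methane over 100 years), not figures from the text.

```python
GWP100_CH4 = 27.0   # assumption: IPCC AR6 100-yr GWP, non-fossil methane

def net_co2e(c_seq_mg_c_ha_yr, ch4_kg_ha_yr):
    """Net balance in Mg CO2-equivalent per ha per year; negative values
    mean a net radiative sink. C sequestration converts C -> CO2 (x44/12)."""
    co2_sink = c_seq_mg_c_ha_yr * 44.0 / 12.0
    ch4_source = (ch4_kg_ha_yr / 1000.0) * GWP100_CH4
    return ch4_source - co2_sink

# A wetland sequestering 0.55 Mg C/ha/yr stays a net GWP100 sink while
# methane emissions remain below ~75 kg CH4/ha/yr (illustrative numbers):
print(net_co2e(0.55, 10.0))   # -> about -1.75 Mg CO2e/ha/yr
```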
The low soil methane emissions measured in tidal wetlands could partially explain their long-term capacity to store soil carbon (Holm et al. 2016). The soils of tidal wetlands are generally enriched with marine sulphate deposits or receive marine water, at least periodically. As sulphate reduction outcompetes methanogenesis (Burdige 2012), sulphate-enriched soils produce low methane emissions. In addition, low methane emissions could result from soil uptake (e.g., Krauss and Whitbeck 2012). Although the mechanisms of this process are unclear, changes in tidal fluctuations affecting the water table may simultaneously support anaerobic zones, where methane is produced by methanogens, and aerobic zones, where it is consumed or converted back to carbon dioxide by methanotrophs (Megonigal and Schlesinger 2002).
Another GHG emitted from wetlands is nitrous oxide, a product of nitrification and incomplete denitrification. Emissions of nitrous oxide are higher in warm climates and in wetland soils rich in nitrifying archaea (Bahram et al. 2022). In addition, higher nitrous oxide emissions are found where nitrogen, primarily dissolved inorganic nitrogen, is high (Murray et al. 2015). However, some tidal wetlands, such as mangroves, can be sinks of nitrous oxide if nitrogen concentrations are low (Maher et al. 2016). Other tidal wetlands, such as Melaleuca forests in Australia and peat swamps in Indonesia, have shown similar patterns, with low nitrous oxide soil emissions (−1.1 to 2.6 kilograms [kg] per ha per year), which are comparable to those of mangrove forests and saltmarshes (−0.73 to 1.2 kg per ha per year; table 2).
Although soil emissions in tidal wetlands may be low, their trees and understory vegetation may be substantial GHG sources. The methane generated in deep soils can be transported to the atmosphere via tree roots, stems, and bark, potentially bypassing consumption in aerobic surface soils (Vann and Megonigal 2003, Jeffrey et al. 2020). Indeed, tree methane fluxes from some freshwater-flooded wetlands contribute 10%-50% of their total ecosystem methane emissions (Pangala et al. 2017, Jeffrey et al. 2020, Sjögersten et al. 2020). However, some trees have communities of methanotrophs within their bark, which can consume a third of the potential vegetation methane emissions (Jeffrey et al. 2021a, 2021b). Similar microbial communities have also been found in dead trees and branches of tidal wetlands in the southeastern United States (Martinez et al. 2022). Although further data are required to generalize on the extent of methane emissions from the trees of tidal wetlands close to maximum tide levels, so far, the data suggest that they have rates similar to those of mangroves (Jeffrey et al. 2020, Zhang et al. 2022), which are lower than for trees in freshwater wetlands (table 3).
Finally, lateral carbon movements are common in blue carbon ecosystems. For instance, about 39% of the carbon (leaf litter, wood, and sediments) stored in tidal forested wetlands in the United States is exported (Krauss et al. 2018). Dissolved forms of carbon are also exported through tidal pumping (… et al. 2021). Some of the particulate and dissolved exported carbon will be released into the atmosphere as GHG (Bogard et al. 2014), whereas the rest will be buried or exported to the ocean (Maher et al. 2018). The carbon export can also occur in the form of carbonate alkalinity (mostly as bicarbonate at a pH greater than 8), reducing acidity in the coastal ocean (Maher et al. 2018). This process provides additional benefits for climate change regulation and adaptation. The exchange of GHG through tidal pumping is well studied in mangroves and saltmarshes (Maher et al. 2018, Schutte et al. 2020) and is an essential pathway in those blue carbon ecosystems (Alongi 2014, Chen et al. 2022). However, this process has yet to be measured in other tidal wetlands. Nevertheless, the lateral exchange is likely less critical in tidal wetlands close to maximum tide levels, which have only sporadic inundation events compared with wetlands that are frequently flooded, such as mangroves (figure 7).
Tidal wetlands have been lost or degraded by anthropogenic impacts
Tidal wetlands are located near the coast, where human population density is high and agriculture and other land uses are widespread and intensive (Barendregt and Swarth 2013). Therefore, they are often affected by both terrestrial and marine threats, including pollution, deforestation, land-use cover change, changes in sedimentation and hydrology, and, recently, changing climate and sea-level rise (Barendregt and Swarth 2013, Jones et al. 2017). However, the impacts of these multiple threats on many tidal wetlands are challenging to quantify. First, there is no consensus on the definition and classification of tidal wetlands, and second, their historical and current distribution is largely undescribed.
Australia has about 6.4 million ha of native Melaleuca forests, most in the tropical north (ABARES 2024); however, only some of these forests are located within the intertidal zone. Most tidal Casuarina forests are found in the temperate south, with an estimated historic area in New South Wales and southeast Queensland of between 89,000 and 152,000 ha (DEE 2018). Deforestation rates of these tidal wetlands in Australia were likely high, because these areas are prime agricultural land on fertile floodplains. For example, in the Herbert River catchment in northern Australia, 80% of all Melaleuca forests were converted to sugarcane farms in the last century (Johnson et al. 1999). The deforestation of these wetlands caused severe problems with the acidification of streams, the release of heavy metals, and the loss of biodiversity. Similar losses have occurred for tidal Casuarina forests, with the current distribution being almost half their historical area (more than 50,000 ha; DEE 2018). More accurate and recent data on Melaleuca forest distribution in Australia exist only for the state of Queensland, where 11,900 ha have been mapped and classified as natural coastal and subcoastal floodplain and nonfloodplain Melaleuca or Eucalyptus forests. In Queensland, the annual deforestation rate is 0.2% (2013-2017; Wetland Info 2024).
In temperate North America, cypress or mixed forests (Nyssa, Fraxinus, Alnus) have had significant losses since the turn of the last century. Still, less intense land-use changes began as far back as 400 years ago, during European colonization. During this time, the development of port cities and agriculture were the leading causes of the drainage and filling of many wetlands to accommodate human infrastructure, pasture, or crops. In the seventeenth century, along the southeast coast of the United States, much of the tidal wetland area was converted to rice agriculture (Smith 2012). In the northwestern United States, most tidal wetlands were historically forested. However, over 90% of these forested tidal wetlands have been lost to diking by levees and conversion to alternate vegetation, and the losses have been as high as 99% in some major estuaries (Marcoe and Pilson 2017, Brophy et al. 2019). Later, as this land was abandoned, years of land subsidence resulted in emergent freshwater or low-salinity marshes (Smith 2012). Currently, the primary threats to tidal wetlands in the United States include salinity intrusion due to sea-level rise, changing river flow patterns, damming of rivers, dredging, and other localized land-use changes, such as urban development (Jones et al. 2017, White et al. 2022).
Similar to Australia and other nations, the exact historical and current national area of tidal wetlands in the United States is unknown. However, information is available for some regions within the country. For instance, wetland timberland in the southeast United States, including cypress and mixed forests (Nyssa, Fraxinus, Alnus), was estimated at 3.9 million ha in 1990 (Tansey and Cost 1990). More recently, the total historical area of vegetated tidal wetlands for the US West Coast has been estimated at 335,230 ha, of which 85% have been lost (Brophy et al. 2019).
A similar story can be found in South Africa, where sugar cane farming, industrial development, roads, and bridges have extensively removed and degraded tidal wetlands. Altered soil conditions have encouraged habitat invasion by terrestrial and exotic invasive plants such as Chromalaena odorata, Lantana camara, and Pereskia species. In rural and highly populated areas, tidal wetlands are cleared illegally through unsustainable slash-and-burn practices to provide subsistence farming of bananas and vegetables (Van Deventer et al. 2021, Riddin and Adams 2022). Desiccation, burning, and erosion of the peat of these wetlands destroy the carbon sink function of these ecosystems and may cause significant emissions. Water extraction is also a threat; Eucalyptus and pine plantations lower the groundwater table and reduce freshwater inflows to downstream estuaries and wetlands (Bate et al. 2016). Extreme climate events, such as droughts and storms, are also of concern. Because of the low salinity tolerance of some tidal wetlands close to maximum tide levels, storm surges can cause catastrophic damage. For instance, at Mgobezeleni Estuary, in South Africa, storm swells after two cyclones caused strong winds and waves that scoured open the usually closed estuary mouth. Water stress caused by marine water intrusion into the oligohaline forest of Ficus trichopoda caused its death (Taylor 2016). As a result of these human and climatic pressures, 20% of the area (12,000 ha) of these wetlands was lost between 2000 and 2011 (Van Deventer et al. 2021). If this trend continues, these tidal wetlands in South Africa will likely be lost by 2060 and are, therefore, classified as Critically Endangered (Van Deventer et al. 2021).
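The 2060 projection follows from a simple linear extrapolation of these reported losses. A minimal sketch of that arithmetic, assuming the 2000-2011 loss rate continues unchanged (the script is illustrative; only the 12,000 ha, 20%, and 2000-2011 figures come from the cited source):

```python
# Linear extrapolation of the South African tidal wetland losses cited above
# (Van Deventer et al. 2021); illustrative arithmetic only.
area_2000_ha = 12_000 / 0.20           # 12,000 ha was 20% of the area -> 60,000 ha
loss_per_year_ha = 12_000 / 11         # average loss over 2000-2011 (~1,091 ha/yr)
remaining_2011_ha = area_2000_ha - 12_000
years_to_total_loss = remaining_2011_ha / loss_per_year_ha
print(f"Projected year of total loss: {2011 + years_to_total_loss:.0f}")  # ~2055
```

At this rate, the remaining area would be gone in roughly 44 years, consistent with the cited 2060 projection.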
The area of most tidal wetlands in other parts of the world is highly uncertain; however, they could exist in most coastal countries and have likely experienced high deforestation rates. For instance, a vast 680,000 ha, classified as transitional forests, has been identified in Papua, Indonesia, alone (Aslan et al. 2016). In the Mekong Delta, in Vietnam, 99% of Melaleuca swamps (about 4 million ha) have been lost within 200 years of human expansion (since 1816); the loss has been exceptionally high in the past few decades because of conversion to rice fields, urbanization, dike construction, and deforestation (Huu Nguyen et al. 2016). In New Zealand, kānuka and mānuka forests have become rare because they were heavily deforested for agriculture (Elser and Astridge 1974). Since Māori settlement, these forests have been converted by repetitive fire and then, during colonization, by agricultural expansion (Burrows 1973). Most kahikatea forests have been lost, with only small patches remaining (Smale et al. 2005). Overall, if all tidal wetlands follow the same trend as global wetlands, more than half of their area, especially within Asia, will probably have already been lost (Davidson 2014).
Management of tidal wetlands is practical and possible
Management of tidal wetlands is conducted through conservation and restoration. Conservation includes protection, preventing overuse, and limiting development. In contrast, restoration includes hydrological reconnection, provisioning for sea-level rise migration, removing invasive species, revegetation, managing nutrient and sediment fluxes, and restoring natural floods of saline tidal water to reduce methane emissions (Kroeger et al. 2017, Krauss et al. 2022). The management of tidal wetlands may not have previously included consideration of their ecosystem services, including carbon storage and biodiversity. However, because most tidal wetlands are affected by anthropogenic change, restoring them through carbon offset projects provides an opportunity to recover and enhance their multiple values.
In Australia, most tidal wetlands are offered legislative protection, although their protection varies among states and regions. For instance, coastal Casuarina forests are listed as an Endangered Ecological Community under the New South Wales Threatened Species Conservation Act 1995 and the Commonwealth Environmental Protection and Biodiversity Conservation Act 2016 (DEE 2018). For wetlands in the Great Barrier Reef region, legislation, policy, and management programs provide strong protection (Adame et al. 2019a). However, most Melaleuca forests (75%) in Australia are on leasehold and private land (1 million ha; ABARES 2024), presenting challenges for their management but also opportunities for landholders to participate in carbon offset restoration programs within their properties.
Carbon offsets are unlikely to financially outperform urban development, and therefore additional benefit markets and offset schemes, such as nitrogen markets and biodiversity credits, may need to be considered together (Mack et al. 2022). There is potential for restoring intertidal Melaleuca and Casuarina forests on agricultural land that is no longer productive. For instance, in the Maroochy floodplain, large-scale restoration of previous sugarcane fields into coastal wetlands is being trialed as part of Australia's national program to boost blue carbon activities. This project aims to test the implementation of the first Australian carbon market methodology for blue carbon, which consists of reintroducing tidal inundation to restore hydrologically altered landscapes used for agriculture to coastal wetlands, including Melaleuca and Casuarina forests (Lovelock et al. 2022). In Vietnam, a successful restoration program for a Melaleuca forest that was lost to a fire in 2002 resulted in the recovery of the forest and the return of 156 bird species, 15 of which are important for the East Asian-Australasian Flyway (Tran and Matusch 2017).
In the United States, wetland protection legislation was established under the 1972 Clean Water Act, and since then, efforts have been undertaken to manage and protect freshwater tidal wetlands (Mihelcic and Rains 2020). For instance, cypress or mixed forests (Nyssa, Fraxinus, Alnus) have been treated with insecticide to protect the dominant Fraxinus spp. trees from invasive insects such as the emerald ash borer (Dr. Andy Baldwin, University of Maryland, College Park, Maryland, US, personal communication, 29 May 2019). On a larger scale, restoring freshwater tidal wetlands in the United States in catchments where water flows are regulated could be possible through dam management for sediment and water delivery (Weston 2014, Ensign and Noe 2018). Historically, changes in sediments have dramatically affected tidal wetlands. Following European colonization, the intensification of agricultural practices caused erosion and sediment delivery, allowing tidal freshwater wetland areas to expand (Noe et al. 2020). Later, in the twentieth century, the implementation of soil conservation efforts reduced erosion and decreased sediment loads downstream, reducing wetland expansion (Noe et al. 2020). Therefore, dam management could be conducted to balance the sediment loads necessary for tidal wetlands to persist while maintaining good water quality, which requires low sediment.
The coastal wetlands of South Africa face a conservation conundrum: 62% of their area occurs within protected areas (Van Deventer et al. 2021). However, these wetlands have degraded (measured as metrics of fragmentation and transformation) over the past two decades. Although legislation and management measures have been implemented, this trend has not stopped or reversed (Van Deventer et al. 2021). Slash-and-burn agriculture and lowering of the water table by surrounding timber plantations have been identified as the primary threats. Drawdown of the groundwater table has resulted in the exposure of the soils and oxidation of the organic material (Grundling et al. 2021). These findings indicate that managing water and agricultural practices in the catchment or increasing protection in nature reserves could improve conservation outcomes for tidal wetlands in South Africa.
Finally, in Mexico, in La Encrucijada Biosphere Reserve, fires are a threat to zapotonales and brackish marshes (Adame et al. 2015). Funding from carbon projects that support fire brigades and fire management activities in the reserve could help reduce the intensity and frequency of fires in these wetlands. Many of these restoration projects are still ongoing. Studies suggest that many species of tidal wetlands, such as Melaleuca, are fast colonizers, especially in low-salinity conditions (Johnston et al. 2003, Iram et al. 2022), as has been shown in Australia and Indonesia (supplemental figure S1). Protection and management activities could be conducted globally throughout these tidal wetlands, depending on the local context and the threats these forests face.
Despite the opportunities, there are still challenges to managing and restoring tidal wetlands worldwide. Many of these challenges extend beyond knowledge gaps or technical difficulties, such as a country's political landscape and regulatory capacities. However, there is also immense potential, given the extent and global distribution of the many previously unaccounted-for wetlands in blue carbon projects.
Management interventions of tidal wetlands may have no social or environmental harm
The multiple benefits of restoring and protecting wetlands are well known and accepted as a no-regrets option for GHG removal (Smith et al. 2019). However, any management activity always has trade-offs that must be considered and, if necessary, managed.
For example, carbon offset programs that include tidal wetlands can benefit farmers with land that is not profitable to cultivate; however, the inundation of agricultural land may be irreversible, and the land may no longer be suitable for most crops. Many previously tidal wetlands have been converted to agriculture, such as in the Mekong Delta, in Vietnam, where rice cultivation is critical for the country's economy (Huu Nguyen et al. 2016). Similarly, tidal wetland restoration in agricultural landscapes of the US West Coast can generate social and political controversy (Breslow 2014). In these situations, without adequate landscape planning, the restoration of tidal wetlands could directly conflict with the immediate needs of local farmers (Huu Nguyen et al. 2016). In other regions, restoring or managing tidal wetlands would not typically affect ongoing agricultural interests. For example, in the southern United States, coastal rice agriculture was abandoned after the Civil War (1861-1865; Smith 2012).
There are also other local issues and values to consider that are specific to each country and region. For example, in Australia and New Zealand, tidal wetlands tend to attract undesirable nonnative animals, such as wild pigs and buffalo, that devastate biodiversity and can carry and spread diseases (Mihailou and Massaro 2021). In contrast, there are also additional benefits to restoring tidal wetlands. For example, in the Mekong Delta, in Vietnam, nature tourism has become a strong incentive and driver of restoration (Tran and Matusch 2017). The annual value of protecting the Mekong Delta wetlands has been estimated at US$0.5 to US$1.8 million (Do and Bennett 2009). Tourism of Melaleuca wetlands may drive local economies and increase awareness of conservation activities; however, if tourism is not well controlled, the activity could result in forest degradation (Tran and Matusch 2017). The trade-offs of restoring tidal wetlands can also be considered alongside the project goals, which may extend beyond carbon, for example, to fowl hunting, birdwatching, tourism, or cultural activities.
Management interventions of tidal wetlands are aligned with policies for mitigation and adaptation to climate change
Restoration and improved management of wetlands are some of the most effective land-management options for achieving the United Nations Sustainable Development Goals (Smith et al. 2019), the Aichi targets, and the Ramsar Convention. Their management may also help nations achieve their carbon emission targets under the Paris Agreement, especially in small countries with low fossil fuel consumption and high deforestation rates (Taillardat et al. 2018). Restoration and improved management of these poorly recognized tidal wetlands could also be important for offsetting residual emissions, that is, emissions that cannot be practically reduced, such as those created by the aviation industry.
In Australia, restoring tidal wetlands can generate carbon offsets through the Australian blue carbon methodology (Lovelock et al. 2022). The offset emissions will count toward Australia's commitment to reducing GHG emissions from land conversion. The management and protection of all types of tidal wetlands are also aligned with the Reef 2050 long-term sustainability plan (Commonwealth of Australia 2021) and the State of Queensland wetland policy of no wetland loss (Wetland Info 2024). Strengthening the protection and restoration of all types of tidal wetlands requires addressing gaps in legislation for blue carbon projects (Bell-James 2022). Classifying and mapping tidal wetlands are essential to any project, with crucial management implications. For instance, in Queensland, environmental values are managed as maps of matters of state environmental significance. These maps do not always align with current tidal wetland distributions.
Wetland protection in the United States is often related to water quality maintenance, with state-level regulations often adding specificity by wetland type. In the United States, the most robust protections for tidal wetlands stem from section 404 of the Clean Water Act, passed by the US Congress in 1972. Embedded within this legislation is a provision to limit sediment discharge into aquatic habitats, including wetlands, and to preserve water quality on the basis of nutrient limits. Wetland conservation and restoration are recognized strategies for climate change mitigation in the United States (Needelman et al. 2018). Several coastal states are developing natural and working lands policies incorporating GHG inventories for tidal wetlands and other ecosystems (e.g., Oregon Global Warming Commission 2021).
Although there are good examples of the alignment of tidal wetland management with climate change policies, there are challenges to address. In most countries, national policies do not distinguish between wetland types. For instance, in Vietnam, high-level policies are relevant to all wetlands, with more specific regulations established in the Forest Law, Land Use Law, Fishery Law, and Environmental Protection Law (Nguyen et al. 2017). In Mexico, only maps and deforestation rates for mangroves are available at the national and state levels (CONABIO 2022). Long-term monitoring of other wetlands distinguishes only marsh from open water and other wetlands, which include inundated rainforests and other tidal and nontidal peat swamps (CONABIO 2022). More recent proposals for wetland classification in Mexico, following the Ramsar guidelines, include estuarine versus palustrine wetlands (CONAGUA 2017). Within palustrine wetlands, the saline swamps subcategory may include some tidal wetlands that could be incorporated into blue carbon projects.
This lack of specificity on wetland types and the overlapping of legislation is a global challenge and can hamper the implementation and monitoring of conservation and restoration practices. Despite these barriers, protecting and restoring all types of tidal wetlands in many countries and regions will likely align with state, national, and international policies to reduce GHG emissions, adapt to climate change, and provide multiple social and environmental cobenefits.
Conclusions
We provide compelling evidence that tidal wetlands besides mangroves, saltmarsh, and seagrass have characteristics aligned with blue carbon. There is strong evidence that most tidal wetlands, even those near the highest tides, provide long-term storage of fixed carbon dioxide in their soils and aboveground biomass. In addition, there is some evidence that GHG emissions from these tidal wetlands are low, although studies of emissions from trees and of lateral carbon export are scarce and represent a significant knowledge gap. Furthermore, most tidal wetlands near the highest tides have suffered immense losses because of ongoing anthropogenic impacts. However, some examples exist where management and restoration have resulted in enhanced carbon sequestration and positive effects on biodiversity and local communities.
Despite the potential of including all tidal wetlands in blue carbon projects, there are pressing issues to be addressed. First is a globally applicable consensus definition of blue carbon and a classification of tidal wetlands. From our experience, we have provided a definition of blue carbon and a classification of tidal wetlands based on attributes that make them important for carbon sequestration and emission reductions. This classification could be applied to blue carbon wetlands globally. Our review suggests that delimiting wetlands by the limits of tidal influence at the terrestrial edge provides adequate inclusion of the ecosystems that possess the biophysical attributes of a blue carbon ecosystem. This delimitation is conservative because, by definition, it excludes all wetlands that do not have marine influence. Some wetlands above the highest astronomical tides could have a positive GHG balance, where emissions are higher than their sequestration potential, and therefore will not have a net cooling effect on the atmosphere. However, some of these freshwater wetlands may not produce high levels of methane because of other processes, such as microbial methane consumption or sulphates provided by remnant marine sediments. We propose that future studies address this knowledge gap. An agreed definition and appropriate classification are key steps toward overcoming the second most crucial problem: mapping their distribution. Improved mapping at appropriate scales and regular updates of all tidal wetlands are essential for protecting and managing them.
Despite these limitations, including all tidal wetlands in blue carbon projects could accelerate the restoration and protection of these ecosystems and significantly expand the scale of carbon credits generated. Many tidal wetlands have been scarcely studied, and most have not been considered for blue carbon projects. However, tidal wetlands, even those close to the tidal inundation limits, are among the most carbon-dense ecosystems on the planet. Mechanisms that finance their protection and restoration, such as carbon and biodiversity crediting, could significantly accelerate their conservation and recovery for the benefit of humanity.
Figure 1.
Figure 1. Typical distribution of vegetated communities in (a) tropical or subtropical Melaleuca and Casuarina forest in eastern Australia and (b) temperate cypress forest and tidal freshwater marshes in the Southeast region of the United States, (c) within a tidal inundation and salinity gradient. Abbreviations: HAT, highest astronomical tide; MSL, mean sea level. Graphics: Kim Kraeer, Lucy Van Essen-Fishman, Tracey Saxby, Annie Carew and Catherine Collier from the Integration and Application Network (ian.umces.edu/media-library).
Figure 6.
Figure 6. Organic matter decay rate (k) for standardized (a) labile and (b) recalcitrant organic matter (OM) for saltmarsh, mangroves, seagrass, and Melaleuca and Casuarina (Mel/Cas) forests in Australia. The data are mean and standard error. Source: The figures are modified from Stacey Trevathan-Tackett and colleagues (2021).
Table 1.
Global dominant species of tidal wetlands that are not mangroves, seagrass or saltmarshes.
Table 2.
Soil fluxes of methane and nitrous oxide (in kilograms per hectare per year) from tidal wetlands paired with adjacent mangroves and saltmarshes and compared with global data.
Note: Positive fluxes are emissions, and negative ones are uptakes.
Table 3.
Fluxes extrapolated to annual aerial emissions from forest density (in kilograms per hectare per year, as in Jeffrey et al. 2020) from intertidal Melaleuca, Casuarina, and Taxodium distichum forests compared with mangroves and freshwater forested wetlands. | 2024-03-21T15:16:21.558Z | 2024-03-18T00:00:00.000 | {
"year": 2024,
"sha1": "d4c33709499c1767b808726290b224a80c3d2a3f",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/bioscience/advance-article-pdf/doi/10.1093/biosci/biae007/56987368/biae007.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "def9226c36a289a6bbbad36f24f3188807f2be4a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
226051812 | pes2o/s2orc | v3-fos-license | Evaluation of Carbon Ion Radiation-Induced Trismus in Head and Neck Tumors Using Dose-Volume Histograms
Simple Summary Patients who receive carbon ion radiotherapy (C-ion RT) for tumors near the temporomandibular joint are likely to experience trismus, a condition characterized by reduced jaw opening. However, the relationship between the carbon ion dose and the onset of trismus remains unclear. Therefore, we conducted a subgroup analysis of a prospective observational study to understand the relationship between the carbon ion dose and the occurrence of trismus. Of 35 patients included in the study, six developed grade 2 trismus, and the median onset time was 12 months. The affected structures included the masticatory muscles and the coronoid process. Our findings support improved treatment planning, such as dose optimization, to minimize the occurrence of muscle-related adverse effects associated with C-ion RT. Abstract Carbon ion radiotherapy (C-ion RT) provides a highly localized deposition of energy that can increase radiation doses to tumors while minimizing irradiation of adjacent normal tissues. For tumors located near the temporomandibular joint, C-ion RT-induced trismus may occur. However, the relationship between the carbon ion dose and the onset of trismus is unclear. In this prospective observational study, we assessed the trismus/carbon ion dose relationship using dose-volume histograms in 35 patients who received C-ion RT to their head and neck regions between 2010 and 2014. Trismus was evaluated according to the Common Terminology Criteria for Adverse Events, version 4.0. All patients were treated with 57.6 or 64.0 Gy (relative biological effectiveness [RBE]) in 16 fractions, and the median follow-up time was 57 months. Grade 2 trismus was observed in six patients. The median onset time was 12 months. The maximum radiation doses to all masticatory muscles and the coronoid process differed significantly between patients with and without trismus, particularly for the masseter muscle (p = 0.003). The contouring of the masseter muscle and the coronoid process requires different treatment planning. The maximum radiation dose to the coronoid process can be proposed as a guideline for treatment planning, considering the ease of contouring in C-ion RT.
Introduction
Head and neck tumor patients undergoing radiotherapy suffer from acute adverse events (mucositis and dermatitis) and late adverse events (dysgeusia, osteoradionecrosis, and trismus). Late adverse events that reduce the patient's quality of life (QOL) are usually difficult to improve. Radiation-induced trismus impacts the patient's QOL, making it difficult for them to open their mouths, eat, talk, and maintain oral hygiene [1,2]. The prevalence of X-ray-induced trismus for head and neck tumors is 1.4-41% [3][4][5][6][7][8][9][10][11]. Previous studies showed that trismus onset by X-ray correlated significantly with the radiation dose received by the masseter muscles [1,2,6,10,12], the pterygoid muscles [1,13,14], and the temporomandibular joint [2,15], focusing efforts on reducing the radiation dose received by these structures to mitigate the X-ray-induced onset of trismus. However, this is difficult when the tumor has invaded structures near the temporomandibular joint; as radiotherapy has become highly precise, radiation-induced adverse events concentrate in the high-dose area near the tumor. In particular, the highly localized energy deposition of carbon ion radiotherapy (C-ion RT) can increase radiation doses to tumors while minimizing the irradiated area of adjacent normal tissues. Previous studies have reported the incidence of trismus onset by C-ion RT. The prevalence of trismus by C-ion RT for head and neck tumors is 2-9% [16][17][18][19][20]. Therefore, the relationship between C-ion RT and trismus remains unclear.
This study aimed to identify the correlation between the maximum dose and the dose−volume histogram, and C-ion RT-induced trismus.
Incidence of C-Ion RT-Induced Trismus
The median follow-up time was 57 months. C-ion RT-induced trismus was evaluated using the Common Terminology Criteria for Adverse Events (CTCAE), version 4.0 [21], and grade 2 or higher was considered trismus. Grade 2 trismus was observed in six patients (19.4%). There were no cases of grade 1 or 3 trismus. The median onset time was 12 months (range, 10-22 months). Trismus resolved entirely by 24 months after C-ion RT (Figure 1). Trismus onset showed no significant association with age, sex, primary site, histological type, T stage, or gross tumor volume (Table S1).
Figure 1.
Cumulative incidence of grade 2 carbon ion radiotherapy-induced trismus after carbon ion radiotherapy in patients in this study (n = 31). Figure 2 shows a representative case with computed tomography images of the tumor and the temporomandibular joint-related muscles and bones and a 3-dimensional image of the mandible. This patient had adenoid cystic carcinoma of the right maxillary sinus. The tumor and the temporomandibular joint-related organs are close to each other, and the latter were irradiated with a high dose of radiation (Figure 2a). The masseter muscle, mandible head, and coronoid process were displayed on the 3-dimensional image of the mandible (Figure 2b). The high-dose region can be seen in front of the masseter muscle and the coronoid process (Figure 2b). In this patient, trismus onset occurred at 22 months after C-ion RT. Figure 3 compares the dose-volume histograms (DVH) of the temporomandibular joint-related muscles and bones of patients with and without trismus. Among the temporomandibular joint-related muscles, the medial and lateral pterygoid muscles shared most of the high-dose area with or without trismus (Figure 3c,d). In contrast, the masseter muscle shared the least high-dose area with or without trismus (Figure 3a). The temporomandibular joint-related bones had different tendencies in DVH values with or without trismus (Figure 3e,f). In cases of trismus, the coronoid process tended to have a higher dose than the mandible head (Figure 3e,f).
Maximum Dose in the Temporomandibular Joint-Related Structures and C-Ion RT-Induced Trismus
The maximum radiation dose leading to no trismus or its onset was significantly different among the various masticatory muscles (Table 1). In particular, the masseter muscle showed the most significant difference among the muscles (Figure 4a, Table 1, p = 0.003). The maximum dose received by the masseter muscle that resulted in no trismus was 47.9 ± 19.0 Gy(RBE), whereas the maximum dose that caused trismus was 61.2 ± 5.9 Gy(RBE), a significant difference. From the receiver operating characteristic (ROC) curve, the cut-off value was found to be 44.0 Gy(RBE) for trismus (sensitivity: 1.0, specificity: 0.44, AUC: 0.653, Table 1). In contrast, the maximum doses received by the bone structures of the temporomandibular joint that led to no trismus, 33.0 ± 20.8 Gy(RBE), or to trismus, 54.8 ± 11.5 Gy(RBE), were significantly different in the coronoid process (Figure 4e, Table 1, p = 0.002) but not in the mandible head (Figure 4f, Table 1, p = 0.39). From the ROC curve, the cut-off value was found to be 38.0 Gy(RBE) for trismus (sensitivity, 1.0; specificity, 0.56; AUC, 0.773; Table 1). There were no cases in which trismus was absent when high doses of radiation were administered to both the temporomandibular joint-related muscles and bones.
Dose Rate of the Temporomandibular Joint-Related Structures and C-Ion RT-Induced Trismus
The doses received by 10, 20, 30, 40, and 50% (D10, D20, D30, D40, and D50) of the temporomandibular joint-related muscle and bone volumes, along with the mean doses to these structures, are summarized in Table 2. The coronoid process showed significantly different doses between the presence and absence of trismus at all levels from D10 to D50.
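In DVH notation, Dx is the minimum dose received by the hottest x% of a structure's volume, read off the cumulative dose-volume histogram. A minimal sketch of how such values can be derived from per-voxel doses, using hypothetical voxel data (the study itself extracted these metrics with MIM Maestro, not with this script):

```python
import numpy as np

def dose_at_volume(voxel_doses_gy_rbe, volume_percent):
    """Return Dx: the minimum dose covering the hottest x% of the structure."""
    doses = np.sort(np.asarray(voxel_doses_gy_rbe))[::-1]   # hottest voxels first
    n = max(1, int(round(doses.size * volume_percent / 100.0)))
    return doses[n - 1]

# Hypothetical per-voxel doses for a coronoid process contour.
rng = np.random.default_rng(0)
voxels = rng.uniform(20.0, 60.0, size=5000)
for x in (10, 20, 30, 40, 50):
    print(f"D{x} = {dose_at_volume(voxels, x):.1f} Gy(RBE)")
```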
Discussion
In this study, we analyzed the maximum dose and DVH associated with trismus in head and neck cancer patients treated with C-ion RT. The median follow-up time was 57 months. Grade 2 trismus was observed in six patients. The prevalence of trismus was 19.4%, and the median onset time was 12 months. The prevalence of trismus induced by X-ray and C-ion RT has been reported to be 1.4-41% [3][4][5][6][7][8][9][10][11] and 2-9% [16][17][18][19][20], respectively. The slightly higher prevalence in this study than previously reported may be due to the tumor sites; however, this difference may not be meaningful because this was a small study. Previous studies showed that the onset of trismus occurs at a median of 1-16 months after completion of X-ray radiotherapy [2,4,[8][9][10]12,15,22,23]; however, the onset time of C-ion RT-induced trismus has not been reported. Most studies on X-ray-induced trismus have defined trismus as a mouth opening distance of less than 35 mm [2,6,[8][9][10]23]. In contrast, as in this study, trismus in reports of C-ion RT is defined according to the CTCAE criteria [17][18][19][20]22].
Previous studies showed that trismus onset by X-ray correlated significantly with the radiation dose to the masseter muscles [1,2,6,10,12], the pterygoid muscles [1,13,14], and the temporomandibular joint [2,15]. The mean dose to the masseter muscle with trismus was 57.2 Gy at D50 [6]. In another report, after a dose of 40 Gy, every additional 10 Gy of radiation to the pterygoid muscle increased the probability of trismus by 24% [1]. However, in this study, the mean radiation doses administered to the masseter and pterygoid muscles were not significantly different for the onset of trismus at any level (D10, D20, D30, D40, and D50). Similarly, the mean radiation doses administered to the muscles were not significant, except for D10 of the temporal muscle; however, the maximum doses administered to the muscles were significantly different. In this report, a significant difference was confirmed, especially in the maximum dose to the masseter muscle, and the cut-off value was 44.0 Gy(RBE) for C-ion RT-induced trismus. Similar results have been reported with X-rays [1,2,6,10,12]. In contrast, in the coronoid process, there was a significant difference in both the mean dose (D10: 52.2 Gy(RBE), D20: 50.9 Gy(RBE), D30: 49.7 Gy(RBE), D40: 48.6 Gy(RBE), and D50: 47.4 Gy(RBE)) and the maximum dose (cut-off value 38.0 Gy(RBE)) associated with the onset of trismus.
In the DVH analysis, not only the high dose of radiation received by the masseter muscle but also the low- to middle-dose range received by the coronoid process appeared to be associated with the development of C-ion RT-induced trismus. Therefore, reducing the low- to moderate-dose volume of the DVH of the coronoid process may be useful in preventing trismus. The maximum dose administered to the coronoid process was also significant; however, it is unlikely that the high dose to the bone structure itself led to trismus; the effect of radiation on the temporal muscle (at its muscular attachment) and the tendon may also be important. In some cases, high maximum doses were found even without trismus onset, but the maximum dose is a point dose, and owing to the sharp dose distribution characteristic of C-ion RT, the high-dose sites were small and scattered. Considering the proportion of the radiation dose across each structure, the middle- to low-dose area likely occupies most of the structure. According to the results of the present study, at the maximum doses of radiation, all types of masticatory muscles showed a significant difference in the development of trismus, with the most significant difference observed in the masseter muscle. We can offer a dose-constraint option in the C-ion RT optimization process, including a maximum dose to the masseter muscle of approximately 44.0 Gy(RBE). Moreover, from D10 to D50, it is important to keep the radiation dose to the coronoid process below 47 Gy(RBE). However, the contouring of the masseter muscle and the coronoid process requires different treatment planning by radiation oncologists (Figure 2b). Since the coronoid process is easier to contour than the masseter muscle, radiation oncologists should consider it a risk organ to prevent C-ion RT-induced trismus.
As mentioned above, for tumors that do not invade the temporomandibular joint-related structures, minimizing radiation exposure to these structures by reducing the radiotherapy dose prevents the onset of trismus. However, this dose reduction strategy is challenging when the tumor invades temporomandibular joint-related structures. Several stretching techniques and jaw-mobilizing devices are currently available to treat radiotherapy-induced trismus [24,25]. Jaw opening exercises using various jaw-mobilizing devices, such as the TheraBite Jaw Motion Rehabilitation System and the Dynasplint Trismus System, have been proposed for treating radiotherapy-induced trismus [24,25]. There is, however, no standard jaw-mobilizing device for treating radiotherapy-induced trismus. Our institution also recommends mouth opening exercises using a device for patients who experience trismus within a year after C-ion RT. Future studies should consider the appropriate timing and duration of mouth opening exercises and the type of device used to prevent radiation-induced trismus.
This study had a few limitations. First, since this study was conducted on a small number of patients enrolled at a single institute, the primary tumor sites were not examined thoroughly. Future studies should increase the number of patients in the study design. Second, we evaluated trismus only according to the CTCAE criteria. In past studies using X-rays, many have defined a mouth opening of less than 35 mm as a mouth opening disorder [2,6,[8][9][10]23]. As in a previous study using the CTCAE criteria [3], we focused on whether it was easy for patients to open their mouths, because there are individual differences in mouth opening.
Patients and Tumor Characteristics
The present study is a subgroup analysis of a prospective clinical study that included 35 patients diagnosed with nonsquamous cell carcinoma of the head and neck region and treated with C-ion RT between 2010 and 2014 in our institution. This study was approved by our Institutional Review Board and registered with the University Hospital Medical Information Network in Japan (trial registration number: UMIN000007886) [16]. All patients provided informed consent before treatment. For analyzing trismus, we excluded one case with previously existing C-ion RT-induced trismus and three patients who underwent salvage surgery for recurrence after C-ion RT. Thirty-one cases were eventually analyzed (Figure 5). Table 3 summarizes the patient and tumor characteristics. The primary cancer sites were the maxillary sinus (n = 8), nasal cavity (n = 8), parotid gland (n = 5), oral cavity (n = 4), pharynx (n = 4), and the external auditory canal (n = 2). The 3-year local control rate (93%) and the 3-year overall survival rate (88%) for these patients have already been reported [16].
Carbon Ion Radiotherapy
The techniques used for C-ion RT and the treatment plan have been reported previously [16]. Physical dose calculations were performed using a pencil beam algorithm. The clinical dose distribution was calculated using the physical dose and the relative biological effectiveness (RBE). The dose of C-ion RT was expressed as "Gy(RBE)" (physical carbon ion dose (Gy) × RBE). The number of fractions was 16, and the overall treatment time was four weeks (4 fractions per week). Following the clinical protocol, 29 patients received 64.0 Gy(RBE) in 16 fractions, and 2 patients received 57.6 Gy(RBE) (in these two patients, the mucosa and skin were considered to be widely irradiated).
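As a worked illustration of the dose notation, the RBE-weighted dose is simply the physical dose scaled by the RBE. The sketch restates the 64.0 Gy(RBE)/16-fraction protocol and uses a hypothetical RBE of 2.0 purely for illustration; the clinical RBE in C-ion RT varies along the beam path and is not a single constant:

```python
def rbe_weighted_dose(physical_dose_gy: float, rbe: float) -> float:
    """Gy(RBE) = physical carbon ion dose (Gy) x RBE."""
    return physical_dose_gy * rbe

per_fraction_gy_rbe = 64.0 / 16        # 4.0 Gy(RBE) per fraction over 4 weeks
print(per_fraction_gy_rbe)             # -> 4.0
# With a hypothetical RBE of 2.0, a 2.0 Gy physical fraction gives:
print(rbe_weighted_dose(2.0, 2.0))     # -> 4.0 Gy(RBE)
```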
Analysis of Temporomandibular Joint Structures
In the muscle and bone structures of the diseased side around the temporomandibular joint, the radiation doses received by the masseter muscle, temporal muscle, medial pterygoid muscle, lateral pterygoid muscle, coronoid process, and mandible head were examined. Using commercially available software (MIM Maestro, version 6.9.3, Beachwood, OH, USA), the contour of each organ and a 3-dimensional image of the mandible (Figure 2b) were created. Trismus was evaluated using the CTCAE, version 4.0 [21], and grade 2 or higher was considered to be trismus. The relationship between the radiation dose and trismus was analyzed in the various temporomandibular joint structures.
Statistical Analysis
Data are represented as mean ± standard deviation (SD). Statistical differences were compared using a two-sided Student's t-test. A paired t-test was used to compare differences in the maximum doses between patients with high-grade trismus and those without. ROC curves were generated to identify dose cut-off values predictive of trismus at each site. All data were analyzed using SPSS Statistics software, version 26.0 (IBM Corp., Armonk, NY, USA). Differences with p < 0.05 were considered statistically significant.
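A minimal sketch of the described analysis using open-source stand-ins for SPSS. The dose values are hypothetical, and the Youden index shown here is one common way to select an ROC cut-off, not necessarily the criterion the authors applied:

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical maximum masseter doses (Gy(RBE)) and trismus labels (1 = trismus).
dose = np.array([61, 63, 55, 58, 66, 64, 30, 45, 52, 20, 48, 35, 60, 25, 40, 50])
trismus = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])

# Two-sided t-test comparing doses in patients with and without trismus.
t, p = stats.ttest_ind(dose[trismus == 1], dose[trismus == 0])
print(f"t = {t:.2f}, p = {p:.4f}")

# ROC curve; the Youden index (sensitivity + specificity - 1) picks a cut-off.
fpr, tpr, thresholds = roc_curve(trismus, dose)
cutoff = thresholds[np.argmax(tpr - fpr)]
print(f"AUC = {roc_auc_score(trismus, dose):.3f}, cut-off = {cutoff:.1f} Gy(RBE)")
```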
Conclusions
Among the muscle tissues, the masseter muscle showed the most significant difference in the maximum dose between the presence and absence of trismus. The maximum and mean radiation doses that led to no trismus, or that caused trismus, were significantly different in the coronoid process. The coronoid process can be suggested as a guideline structure for treatment planning, considering the ease of contouring.
Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6694/12/11/3116/s1, Table S1: Univariate analysis of risk factors for carbon ion radiotherapy-induced trismus. The study sponsors had no involvement in the study design, data collection, data analysis or interpretation, manuscript writing, or the decision to submit the manuscript for publication. | 2020-10-29T09:07:44.953Z | 2020-10-25T00:00:00.000 | {
"year": 2020,
"sha1": "7ab56c19d479e8c1bfa78e5517bb2c6ee391c61d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/12/11/3116/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "405ecc6a210b946cd5ada4a36c67118a78ba9e13",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260185523 | pes2o/s2orc | v3-fos-license | Effective Protective Polymer Materials Stable to Destructive Factors
To protect various objects from destruction under the action of a(biotic) and man-made destructive factors, epoxy polyurethane compositions were created using an epoxy base and a polyurethane prepolymer modified with [Cu,Zn] organometallic compounds; these compositions have high indicators of adhesion and cohesion.
Introduction
The use of new, more effective polymer materials as protective coatings for the anti-corrosion protection of structures ensures: reliable, prolonged operation of metal and reinforced concrete structures, buildings, and facilities under dynamic abiotic, biotic, and man-made loads; the practical elimination of the destruction of concrete surfaces protected by polymer materials from alternating positive and negative temperatures over their service life; and the restoration of the mass of damaged metal, reinforced concrete, and concrete structures, with guaranteed prolonged operation of facilities after repair works.
Polymers based on epoxy resins have high operational properties, namely high adhesion to metals, concrete, and other materials, but they are brittle and short-lived when used as adhesives and coatings. The technological and physico-mechanical properties of compositions based on epoxy resins can be adjusted over a wide range by combining them with other polymers.
To protect materials and structures of various types from destruction under the action of a(biotic) and man-made destructive factors, epoxy polyurethane compositions (EPC) were created using an epoxy base (EP), a polyurethane prepolymer (PFP) with terminal isocyanate groups modified with [Cu,Zn] organometallic compounds, and an amine hardener (AH). These compositions have high adhesion and cohesion, resistance to biocorrosion, UV radiation, and chemical agents, and are waterproof and wear-resistant.
The polyurethane prepolymer is obtained on the basis of polyurethanes of different structure and composition by combining a linear polyurethane (LPU) with terminal isocyanate groups, obtained by the interaction of 2,4(2,6)-toluene diisocyanate (TDI) or hexamethylene diisocyanate (HMDI) with the 2,4-pentanedionate (PD) of a transition metal, Cu(II) or Zn(II) (molar ratio DIC:PD = 10-15:1), with the addition of a polyester component, polyoxypropylene glycol L-1000. Next, a reticulated polyurethane (RPU) is introduced into the system, based on a prepolymer, the product of the interaction of TDI and trimethylolpropane (TMP), with subsequent interaction with L-1000.
The polyurethane modifier PU30/70 is obtained by combining the linear and reticulated polyurethanes (mass ratio LPU:RPU = 30:70). The base of the epoxy composition is modified with the obtained PU30/70 at a ratio of EP:PU30/70 = 100:10-50 parts by mass, and an amine hardener is used for curing (mass ratio EP:AH = 100:20).
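The mass ratios above translate directly into batch quantities. A minimal sketch of that bookkeeping for a 100-part EP base (the helper function and its names are hypothetical; the ratios are those stated in the text):

```python
def batch_masses(ep_g: float, pu_parts: float, ah_parts: float = 20.0) -> dict:
    """Component masses for EP : PU30/70 : AH = 100 : pu_parts : ah_parts."""
    pu_g = ep_g * pu_parts / 100.0
    return {
        "epoxy base (EP)": ep_g,
        "PU30/70 modifier": pu_g,
        "  of which LPU (30%)": pu_g * 0.30,
        "  of which RPU (70%)": pu_g * 0.70,
        "amine hardener (AH)": ep_g * ah_parts / 100.0,
    }

# PU30/70 content may range from 10 to 50 parts per 100 parts EP:
for pu in (10, 30, 50):
    print(batch_masses(100.0, pu))
```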
The results of the study of the influence of a complex of atmospheric factors, UV and IR radiation (sunlight), elevated temperature (50 ± 5) °C, and air humidity (96%), on EPC based on the aromatic diisocyanate TDI showed (Table 1) that the samples somewhat lose strength and change color. The research also established that all EPC samples modified with polyurethanes of different structures and compositions based on the aliphatic diisocyanate HMDI are resistant to UV and IR radiation. The stability of the unmodified source EP is lower than that of the EPC [1][2][3].
In general, as the research results showed, EPCs modified with polyurethanes of different structures and compositions containing organometallic modifiers (MOM) are characterized by increased adhesive/cohesive properties. Studies of fungicidal properties showed that, before the study began, one spore-bearing colony of mycodestructors (mold fungi), up to 1-2 mm in diameter, was noted on the samples of the original EP, from which Penicillium cyclopium was isolated and identified.
Before the start of the study, mold fungi were not detected on the EPC samples containing organometallic modifiers (Table 2). All EPC samples containing MOM have fungicidal properties; their fungal resistance is 0 points in a humid chamber, on a nutrient medium without additional infection, and on a nutrient medium with infection. An increase in the colony was noted on the initial EP sample.
A comparison of the physical and mechanical properties of the original EP and EPC samples with those of samples after the action of mycodestructors (mold fungi) shows that, unlike EP, EPC/Zn and EPC/Cu are resistant to biodegradation.
It should be noted that fixing the functional MOM [Zn,Cu] compounds in the polymer macrochain prevents their diffusion to the surface of the material and subsequent removal and thus prolongs the protective functions of the epoxy polyurethane compositions, which is an advantage of the latter over similar materials of both domestic and imported production. The results of the study of the resistance of the epoxy polyurethane composition obtained by the proposed method to chemical environments show that the composition is resistant to water, oil, gasoline, and dilute acids and alkalis (Table 3). The stated goal, the creation of effective polymer materials for protecting structures and objects of various types from destruction under the action of a(biotic) and man-made destructive factors, is achieved by the proposed method of obtaining an epoxy polyurethane composition modified with polyurethanes of different structures and compositions containing the organometallic modifiers MOM [Zn,Cu], which makes it possible to controllably vary the structure, and therefore the properties, of the epoxy polyurethane macromolecule.
Owing to this, the EPC for protecting various types of surfaces from destruction under the influence of a(biotic) and man-made destructive factors has high adhesion and cohesion, resistance to biocorrosion, UV radiation, and chemical agents, as well as waterproofness, wear resistance, and high operational properties.
Epoxy polyurethane compositions are recommended for use in the chemical, light, and food industries and at enterprises of the ministries of construction, architecture, housing and communal services, and infrastructure as protective compositions, adhesives, and binders that are resistant to the action of microorganisms (biodestructors), have waterproofing properties, and are resistant to UV irradiation and chemical agents. | 2023-07-27T15:16:21.519Z | 2023-06-26T00:00:00.000 | {
"year": 2023,
"sha1": "3bdc13ac648a0aaaf1b718456c62d233b1ee2b30",
"oa_license": "CCBY",
"oa_url": "https://openreviewhub.org/sites/default/files/cte/4541/ludmilamarkovskaabs_0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "81151f207eea0a8e7670f45fe1aaf411363049c7",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
249558907 | pes2o/s2orc | v3-fos-license | Arthroscopic Posterior Capsular Release Effectively Reduces Pain and Restores Terminal Knee Extension in Cases of Recalcitrant Flexion Contracture
Purpose To 1) evaluate the clinical efficacy of arthroscopic posterior capsular release for improving range of motion (ROM) in cases of recalcitrant flexion contracture and 2) determine patient-reported outcomes (PROs) postoperatively. Methods Retrospective chart review was performed to identify patients who underwent arthroscopic posterior capsular release due to persistent extension deficit of the knee despite comprehensive nonoperative physical therapy between 2008 and 2021. Knee ROM and PROs (International Knee Documentation Committee [IKDC], Tegner, and visual analog scale [VAS]) were collected at final follow-up. Results Overall, 22 patients were included with a median age of 37 years (interquartile range [IQR]: 20.5-44.3). Of these, 8 (36%) were male and 14 (64%) were female, and average follow-up was 3.7 ± 3.3 years. The most common etiology was knee flexion contracture after anterior cruciate ligament (ACL) reconstruction (59%). All patients failed a minimum of 3 months of nonoperative management. Prior to operative intervention, 100% of patients received physical therapy, 64% received extension knee bracing or casting, and 36% received corticosteroid injection. Median preoperative extension was 15° (IQR: 10-25) compared to 2° (IQR: 0-5) postoperatively (P < .001). At final follow-up, median extension was 0° (IQR: 0-3.5). Postoperative VAS pain scores at rest (2 vs 0; P = .001) and with use (5 vs 1.8; P = .017) improved at final contact, and most (94%) patients reported maintaining their extension ROM. Patients with ACL-related extension deficit reported better IKDC (81 vs 51.3; P = .008), Tegner (5.8 vs 3.6; P = .007), and VAS pain scores (rest: 0.2 vs 1.8; P = .008; use: 1.3 vs 5; P = .004) compared to other etiologies. Conclusion Arthroscopic posterior capsular release for recalcitrant flexion contracture provides an effective means for reducing pain and restoring terminal extension. The improvement in extension postoperatively was maintained for most (94%) patients at final follow-up with a 14% reoperation rate.
Introduction
Flexion contracture, or terminal extension deficit, is a troubling clinical problem even for the most experienced surgeons. Etiologies of this condition include acute injury, repetitive microtrauma, or, commonly, a complication of surgical intervention to the knee joint. Unfortunately, 0.5-11% of patients fail to achieve satisfactory return of range of motion (ROM) despite appropriate nonoperative treatments, including physical therapy for range of motion, quadriceps training, and extension orthosis bracing. [1][2][3][4][5][6] Modifiable risk factors include surgical technique, preoperative ROM, concomitant or multiple procedures, pain management, and BMI, 7 though even with a prevention-first approach, those who go on to experience a persistent extension deficit remain difficult to treat.
In many cases of persistent extension deficit secondary to surgical insult or trauma, the posterior capsular tissues become contracted, leading to subsequent limitations in range of motion and loss of terminal knee extension. 8,9 This is particularly disabling and results in poor patient outcomes, deterioration of knee function, and increased morbidity and disability by increasing stress on the quadriceps and the patellofemoral articular cartilage. 10 Treatment of extension deficit requires early identification of motion limitation and potential causes, such as graft malposition following anterior cruciate ligament (ACL) reconstruction or capsular fibrosis and contracture. In most patients, motion can be successfully regained through physical therapy, splinting/bracing, and oral/intra-articular corticosteroids. 11 Manipulation under anesthesia (MUA), with or without arthroscopic debridement, is another stepwise treatment option available to surgeons. 11 Despite exhausting these measures, extension deficit may persist in some patients; these recalcitrant cases pose a unique clinical challenge.
Variation in the surgical management of posterior capsule contracture is evident in the literature. Previous studies have demonstrated that an open posterior capsulotomy can be performed with satisfactory results. 12,13 Additionally, a combined open and arthroscopic approach for severe flexion contractures was shown to be effective by Mariani, 14 though these techniques carry significant risk of complications near neurovascular structures. 15,16 An arthroscopic approach has been described, with posteromedial release typically sufficient to achieve ROM, although additional posterolateral release is acceptable. [16][17][18] To our knowledge, the only investigation of an all-arthroscopic posterior capsule release in a comprehensive cohort of patients with extension deficit was a 15-patient series by LaPrade et al. in 2008. 19 This study reported efficacy in regaining ROM for patients failing nonoperative and operative management, including physical therapy, manipulations, or anterior compartment arthroscopic debridement. Despite these results, there remains a paucity of data on the clinical and patient-reported outcomes following arthroscopic posterior capsular release for persistent extension deficit. Therefore, the purposes of this investigation were to 1) evaluate the clinical efficacy of arthroscopic posterior capsular release for improving ROM in cases of recalcitrant flexion contracture and 2) determine patient-reported outcomes (PROs) postoperatively. We hypothesized that arthroscopic posterior capsular release would result in improved knee motion postoperatively with satisfactory PRO scores.
Methods
Primary Location where this investigation was performed: Mayo Clinic, Rochester, MN.
Ethical approval was obtained from the Mayo Clinic (Rochester, MN; Institutional Review Board [IRB]: 15-000601) and patients provided informed consent. After IRB approval, an institutional operative note database was queried for patients undergoing posterior capsular release procedures between January 2008 and March 2021. The terms "capsular release" and "capsule release" were used to identify the initial patient sample for screening. Operative notes and patient charts were screened for inclusion. Patients were included if they 1) underwent arthroscopic posterior capsular release for a symptomatic, relative extension deficit of at least 10°; 2) had an inadequate response to conservative management, including 3 months of physical therapy, bracing, or injection; and 3) had clinical follow-up with recorded range of motion.
Patient medical records were reviewed to obtain patient characteristics, including age, sex, body mass index (BMI), smoking status, history of diabetes, surgical history, prior conservative therapies, preoperative visual analog scale (VAS) pain scores, surgical details, and clinical outcomes. In patients with native knees, patient-reported outcomes were collected at final follow-up, including VAS pain, IKDC, and Tegner scores. 20 Further analysis was performed to determine factors related to achieving the threshold patient-acceptable symptom state for knee function (IKDC PASS). 21 Patients were asked whether their knee extension ROM had improved, been maintained, or worsened since their last consultation. Patients were contacted by phone when necessary for final follow-up.
Surgical Technique
Posterior capsular release was only performed after a failed course of nonoperative treatment. A standard diagnostic arthroscopy was used to assess for and address concomitant knee pathologies. Posterior capsular release was performed at the discretion of the treating surgeon for persistent loss of terminal knee extension intraoperatively. This arthroscopic technique has been described previously. [16][17][18]22 A transcondylar notch view was used to visualize the posteromedial compartment and establish a posteromedial portal. Next, a safe plane was created behind the capsule. Maintaining visualization throughout the entirety of this step was key. The transseptal approach allowed for posterior cruciate ligament (PCL) identification and anterior manipulation, effectively creating space anterior to the neurovascular structures. Both 30° and 70° arthroscopes were used, with care taken to avoid the meniscus.
The posteromedial capsule was then transected, starting medially and moving laterally to the posterior cruciate ligament, using an arthroscopic shaver to clean up the free edges as necessary, until the medial head of the gastrocnemius muscle was well visualized. If the extension deficit persisted after posteromedial release, the transcondylar notch was used once again to visualize the placement of a posterolateral portal, and a safe plane was created behind the posterolateral capsule. The capsule was transected from lateral to medial, and free edges were cleaned until the lateral head of the gastrocnemius muscle was well visualized (Fig 1).
Rehabilitation
All patients received intensive, in-person physical therapy starting immediately after surgery, with additional at-home exercises performed daily. Standard rehabilitation included turnbuckle extension orthosis bracing, active and passive range-of-motion exercises, and quadriceps activation postoperatively. Continuous passive motion machines and dynamic extension braces were used at the discretion of the operating surgeon.
Statistical Analysis
Data are presented as n (%) or median (interquartile range [IQR]). Wilcoxon signed-rank tests were used to compare changes in preoperative and postoperative VAS pain, knee extension ROM, and knee flexion ROM.
Fisher's exact test or χ² (chi-square) analysis was used for categorical variables as appropriate. All tests were 2-sided, and P values <.05 were considered significant. Analysis was performed using SAS JMP version 14.1.0 (SAS, Inc., Cary, NC).
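As an illustration of the tests named above, the following is a minimal Python sketch using scipy.stats; the paired extension values and the 2x2 contingency table are hypothetical, not the study data.

```python
# Minimal sketch of the paired and categorical comparisons described above,
# using hypothetical values; scipy's wilcoxon and fisher_exact are used.
import numpy as np
from scipy import stats

# Hypothetical paired extension-deficit values (degrees) before and after release.
pre_extension  = np.array([12, 15, 10, 20, 14, 11, 18, 13])
post_extension = np.array([ 2,  5,  1,  8,  3,  2,  6,  4])

# Wilcoxon signed-rank test for the paired pre/post change (two-sided).
stat, p = stats.wilcoxon(pre_extension, post_extension, alternative="two-sided")
print(f"Wilcoxon W = {stat:.1f}, P = {p:.4f}")

# Fisher's exact test on a hypothetical 2x2 table, e.g. prior MUA (yes/no)
# versus reaching the IKDC PASS threshold (yes/no).
table = [[5, 4],   # prior MUA: reached PASS / did not
         [8, 1]]   # no prior MUA: reached PASS / did not
odds, p_fisher = stats.fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact: OR = {odds:.2f}, P = {p_fisher:.4f}")
```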
Results
The initial search returned 32 patients undergoing posterior capsular release. One patient underwent concomitant unicompartmental knee arthroplasty, and 9 patients had less than 3 months of follow-up; these were excluded. After application of the exclusion criteria, 22 patients were included. Baseline patient characteristics are reported in Table 1. All patients failed nonoperative management: 100% of patients received physical therapy, 64% received knee bracing or casting, and 36% received corticosteroid injection prior to requiring surgical intervention. The most common etiology of extension deficit was ACL reconstruction following ACL injury (59%). Previous MUA had been performed in 9 (41%) patients and arthroscopic debridement in 11 (50%) patients. The median time from injury or most recent operation to capsular release was 8.0 months (IQR: 3.1-11.9). Two patients had no prior knee surgeries.
Overall, 3 (14%) patients required additional intervention for recalcitrant loss of extension: one underwent MUA, one underwent revision arthroscopic debridement with medial and lateral retinacular releases, and one underwent revision posterior capsular release and progressed to total knee arthroplasty at the time of final follow-up. One patient had persistent pain, decreased ROM, and functional deficits, and elected to undergo a through-knee amputation.
PROs were obtained for 18 (86%) of the 21 patients with native knees (one patient with a total knee arthroplasty was removed from analysis) at an average of 3.7 ± 3.3 years (range: 0.3-12.3). Three patients were unable to be contacted for PROs. VAS pain scores at rest and with use were both significantly improved at final contact (Table 4).
Discussion
The primary finding of this study is that arthroscopic posterior capsular release is an effective means to restore knee function, reduce pain, and improve range of motion in cases of persistent extension deficit of the knee. All patients except one (94%) reported maintaining the improvement in knee extension at final follow-up. In the present study, ACL reconstruction following injury was the most common etiology (59%), and patients who experienced posterior capsular contracture following ACL injury reported better subjective outcomes regarding pain and function at final follow-up compared to those with other etiologies of capsular contracture.
Regaining terminal knee extension is critical for achieving patient satisfaction and normal knee function. Sachs et al. reported that a loss of 5° of terminal extension can result in gait abnormality and contribute to patellofemoral pain with mild walking, and a loss of 10° of extension is poorly tolerated. 23 Loss of knee flexion is better tolerated than loss of extension, particularly because of compensatory chronic quadriceps activation to maintain stance and increased contact forces in the patellofemoral joint. 10 Unfortunately, the opportunity for successful nonoperative management of flexion contractures decreases after 1 year from the time of insult, with the ideal timeframe for surgical intervention within 9 months. 24 In the present study, nonoperative management was exhausted in all patients, with a mean time to capsular release of 8.0 months. Additionally, some patients in this cohort had prior intra-articular surgical intervention, such as debridement, without success. LaPrade et al. described a similar cohort of patients who had failed multiple modes of conventional treatment, reporting efficacy with release as a technique for persistent cases. 19 The present investigation mirrors this result, with improvement of median extension to 0° at final follow-up, maintained at an average of 3.7 years. 19,26 The present study adds to this body of work as an all-arthroscopic technique was used with satisfactory results. Although more technically challenging, arthroscopic procedures, when compared to open procedures, generally have decreased operative times, less postoperative pain, faster recovery, and reduced risk of complications. 27 Arthroscopic posterior capsule release provides a less invasive means to treat capsular contracture than arthrotomy and open debridement.
Two previous studies have reported PROs to determine subjective patient knee function after posterior capsular release for extension deficits. In a cohort of post-ACL reconstruction patients treated with open posterior capsular release by Tardy et al., the average IKDC score was 86.4 at a final follow-up of 38 months, and all patients except one (92%) reached the IKDC PASS threshold. 25 Additionally, Wierer et al. investigated post-ACL reconstruction patients treated arthroscopically for extension deficit and reported improvement in median Lysholm score from 52 to 92 at a final follow-up of 25 months. 26 Of note, the literature suggests that surgery for loss of motion after ACL reconstruction does not significantly influence knee function at 2 years postoperatively. 28 Similarly, the present study found that most patients with ACL-related etiology reached the IKDC PASS threshold at final follow-up. It is possible that ACL-related pathology results in a lesser "hit" to the knee when compared to osteocartilaginous injury, as studies have demonstrated increased rates of arthrofibrosis development with concomitant procedures or complex injuries. 6,29 Accordingly, patients with an ACL-related etiology of extension deficit may be appropriately counseled regarding a postoperative return to satisfactory knee function after arthroscopic intervention. Overall, arthroscopic posterior capsular release in conjunction with detailed rehabilitation is an effective option for cases of continued extension deficit after failed nonoperative management.
Limitations
This study is not without limitations. First, the retrospective nature of the current investigation introduces the possibility of surgeon and selection bias. Second, the relatively small sample size and diverse etiologies make it difficult to perform subgroup analyses that are sufficiently powered, including the analyses to determine factors associated with poor outcomes within our cohort. Lastly, while the heterogeneity of our patient cohort may make the results more generalizable, these differences must be taken into consideration when interpreting the presented results.
Conclusion
Arthroscopic posterior capsular release for recalcitrant flexion contracture provides an effective means for reducing pain and restoring terminal extension. The improvement in extension postoperatively was maintained for most (94%) patients at final follow-up with a 14% reoperation rate. | 2022-06-11T15:12:59.917Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "6fb266363b8034bf7128c4bcf47b76c0015a7890",
"oa_license": "CCBYNCND",
"oa_url": "http://arthroscopysportsmedicineandrehabilitation.org/article/S2666061X22000736/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa5ada80186bb9e822d091b808251c5580b9bd11",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18586346 | pes2o/s2orc | v3-fos-license | Effects of Internal and External Pressure on the [Fe(PM-PEA)2(NCS)2] Spin-Crossover Compound (with PM-PEA = N-(2′-pyridylmethylene)-4-(phenylethynyl)aniline)
The spin-crossover properties of the strongly cooperative compound [Fe(PM-PEA)2(NCS)2] (with PM-PEA = N-(2′-pyridylmethylene)-4-(phenylethynyl)aniline) have been investigated under external in situ pressure, external ex situ pressure and internal pressure. In situ single-crystal X-ray diffraction investigations under pressure indicate a spin crossover (SCO) at about 400 MPa and room temperature. Interestingly, the application of ex situ pressure induces an irreversible enlargement of the hysteresis width, almost independently of the pressure value. Elsewhere, the internal pressure effects are examined through magnetic and photomagnetic investigations on powders of the solid solutions based on the Mn ion, [FexMn1−x(PM-PEA)2(NCS)2]. Growing the Mn ratio increases the internal pressure, allowing control of the hysteresis width and the paramagnetic residue and also enhancing the efficiency of the photo-induced SCO. The comparison of the quenching and light-induced behaviors reveals a complex phase diagram governed by internal pressure, temperature and light.
Introduction
The use of pressure to perturb the electronic structure and the physical properties of metal-coordination compounds is an interesting route in the search for switching materials [1]. The role of pressure has been studied for many years in spin crossover (SCO) compounds, since it has applications in fields as diverse as biology [2][3][4], geology [5,6], physics and chemistry [7][8][9][10]. The pressure-induced SCO opens the corresponding materials to many potential applications, including piezo-chromism [11].
SCO materials usually contract under the application of an external pressure, since the latter favors the diamagnetic low-spin (LS) state, which has a lower volume than the paramagnetic high-spin (HS) state. This is due to the occupation by electrons of anti-bonding molecular orbitals in the HS state, which increases the iron(II)-ligand bond lengths, in contrast to the non-bonding molecular orbitals occupied in the LS state. Consequently, the expected straightforward pressure-induced modifications of the thermal SCO features are an increase of the SCO temperatures and of the LS residues, together with a decrease in the sharpness of the transition. These features are confirmed by considerations based on thermodynamics and theoretical models [8][9][10][12][13][14], which also predict some interesting consequences such as the decrease of the hysteresis width with increasing pressure. From the experimental point of view, many counter-examples have been reported so far [15]. For example, unexpected hysteresis broadening [16] as well as irreversible behaviors have been reported for some compounds [10,17]. Even in a family of very similar complexes, a variety of high-pressure behaviors can be observed. This is the case of the [Fe(PM-L)2(NCS)2] family (with PM-L = N-(2′-pyridylmethylene)-4-(aromatic function L)), showing that expectations can be fulfilled, or not, depending on the nature of the L part of the ligand [15,18]. Furthermore, in some cases, the application of pressure on a SCO material can even favor the HS state. This non-intuitive behavior was observed in Mössbauer studies of mononuclear phenanthroline [19] or poly(1-pyrazolyl)borate [20] iron(II) complexes, and was also reported for the two-dimensional compound [Fe(btr)2(NCS)2]·H2O (btr = 4,4′-bis(1,2,4-triazole)) [21]. As a general matter, the number of discrepancies between expectations and experiments warrants further investigation of the application of an external pressure on SCO materials. The concept of external pressure must be understood in a broad sense, since it includes both ex situ and in situ situations, keeping in mind that SCO molecular materials are sensitive to lower pressures than those used in solid-state chemistry. The first part of this work is dedicated to such an approach.
Elsewhere, another way to induce a pressure on the SCO molecule within the material is to synthesize solid solutions mixing the iron(II) spin-transition compound with an isostructural compound, the latter being based on a metal ion having no spin transition. This method therefore refers to a so-called internal pressure. In such a case, depending on the volume of this metal ion in comparison to that of the iron(II) ion, the substitution will act as a pressure or a negative-pressure factor. Solid-solution investigations have been conducted since the 1970s on SCO complexes of iron(II) and iron(III) using a large variety of diluting ions. The size of the latter increases from Ni(II), Zn(II), Co(II), Mn(II) to Cd(II). The general tendencies observed upon metal dilution are (i) a more and more gradual spin crossover with increasing doping ratio, because of the weakening of cooperativity; (ii) a lowering of the equilibrium temperature T1/2, more effective when the doping-ion volume is large; and (iii) the potential appearance of a LS fraction at high temperature, or a HS fraction at low temperature [22][23][24][25][26][27][28][29]. Regarding the decrease of the transition temperature, the assumptions are based essentially on the concept of internal pressure, i.e., the relative sizes of the metal ions. A doping metal ion, having no spin transition, corresponds to a unit-cell volume that is different from the one obtained with the iron ion. In the solid solution, the bigger the doping ion, the larger the unit cell and the larger the volume available for the iron ion within the crystal packing. Consequently, large unit cells favor the state of larger volume of the iron(II) ion, therefore the HS state. Accordingly, the temperatures T1/2 are shifted to low temperatures when large metal ions are used to dope the SCO material, the HS residue being also favored.
The effect of internal pressure on light-induced spin-state lifetimes has been mainly reported by A. Hauser [30][31][32]. The lifetime of the metastable HS state reached by the LIESST effect (Light-Induced Excited Spin-State Trapping) [33] has been extensively studied, evidencing that its relaxation is governed by a tunneling effect at low temperature and is therefore temperature independent in this region. In contrast, at higher temperature, the relaxation is governed by a thermally activated process. Studies on solid solutions have demonstrated that the relaxation of the photo-induced metastable HS state of iron(II) is weakly influenced by a small metal dilution in the thermally activated region, while it is strongly accelerated in the tunneling region. This has also been evidenced by systematic investigation of the T(LIESST) temperature, which records the temperature range over which the HS metastable state can be observed. T(LIESST) is obtained from the position of the inflexion point in the derivative of the χMT vs. T plot (χM stands for the molar magnetic susceptibility) recorded after light irradiation [34][35][36]. This temperature, obtained on warming at a given temperature scan rate, was shown to be constant [28,29,37,38] or slightly increased [39][40][41] upon metal dilution.
In this article, we report the effects of internal and external pressure on the strongly cooperative compound [Fe(PM-PEA)2(NCS)2] (with PM-PEA = N-(2′-pyridylmethylene)-4-(phenylethynyl)aniline). This compound was described in 1997 [42] and shows a spin transition with a large hysteresis loop, between 190 K and 232 K when measured on powder and between 215 K and 234 K when measured on single crystals [43]. This hysteresis is accompanied by a phase transition between a monoclinic P21/c space group in the HS state and an orthorhombic Pccn space group in the LS state. The crystal structure of the thermally quenched HS phase at 30 K is similar to that of the P21/c phase at room temperature. Regarding the photomagnetic properties, the LIESST effect remains poorly efficient. A first magnetic study under external pressure was performed in 1998 [18], revealing a shift of the thermal hysteresis towards low temperature and an irreversible effect of pressure. This compound deserves a more detailed investigation, both of the structural behavior under external pressure and of the effect of internal pressure on the thermal and light-induced spin transition. This paper first describes the effect of in situ and ex situ external pressures from a crystallographic point of view. Subsequently, the magnetic behavior of the [FexMn1−x(PM-PEA)2(NCS)2] solid solution allows the effects of an internal pressure to be examined, the thermally quenched and light-induced HS states being also explored.
External Pressure on [Fe(PM-PEA)2(NCS)2]
Early studies of the effects of applying a pressure on [Fe(PM-PEA)2(NCS)2] clearly indicated from magnetic measurements that the hysteresis loop is affected in an irreversible way by a relatively weak pressure [18]. The hysteresis width was indeed enlarged by a few degrees after a pressure below 300 MPa was applied to the material. The release of the pressure did not restore the initial SCO properties, the broadening of the hysteresis being conserved. However, as a general matter, clear evidence for noticeable ex situ pressure effects on SCO properties has rarely been reported so far. This path has probably been underestimated in the study of SCO materials, even though it may open this research field to new applications or simply to new solid-state phases and properties [44]. It is worth defining what is called ex situ pressure: it is the situation in which pressure has been applied to the sample and then released, so that the physical measurements take place at ambient pressure (0.1 MPa).
The purpose of the present work was to confirm possible ex situ effects on the SCO features of the investigated compound. Because of the discrepancy of the SCO properties shown by this compound across different syntheses, sample forms and techniques [45], we decided to work only on single crystals coming from the same synthesis batch and to investigate the pressure effects with the same method, here X-ray diffraction (XRD). Note that this discrepancy has notably been attributed to the possible presence of defects, such as methanol inclusions at variable ratios.
SCO at High Pressure
Before investigating the consequences of ex situ pressure, we checked the effect of in situ pressure, mainly to detect whether this compound could show a SCO at room temperature under pressure. Some compounds of the same series have shown a SCO at room temperature, but only at pressures higher than 700 MPa [46,47]. Let us recall that single crystals of [Fe(PM-PEA)2(NCS)2] are well known from variable-temperature measurements [43] to crystallize in the monoclinic system in the HS state and in the orthorhombic system in the LS state, the unit-cell volume variation due to the thermal SCO being of the order of 3%. Note that this value corresponds to a small amplitude of the contraction-expansion process [44], which masks a strong anisotropy. In fact, the a unit-cell parameter strongly contracts (−9%) while the c parameter increases (+4%) at the SCO. One of the consequences of these structural modifications is that single crystals are most of the time strongly damaged by the SCO.
The unit-cell parameters were determined under in situ pressure in the range 60 MPa to 1135 MPa (Table 1). From 60 to 365 MPa there is a contraction of the unit cell corresponding to about 5% of the initial volume, following a regular variation of 0.54 Å³/MPa. At higher pressure, there is a modification of the crystal system, from monoclinic to orthorhombic, concomitant with a sharp fall of the unit-cell volume by 4.3% between 365 and 465 MPa, i.e., a variation of 1.57 Å³/MPa. The decrease of the volume corresponds to a strongly anisotropic change of the unit-cell parameters, with a noteworthy decrease of a (−8%) and an increase of c (+4.8%). These modifications can without doubt be attributed to the HS to LS conversion in this compound, given the similarity with the thermal SCO. The alteration of the crystal quality at 465 MPa and above reinforces this conclusion. Consequently, [Fe(PM-PEA)2(NCS)2] undergoes a SCO under a pressure of about 400 MPa. This compound therefore appears to be one of the molecular materials showing a room-temperature SCO under the lowest pressure values.
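A back-of-envelope sketch of how such contraction rates follow from pressure-volume pairs is given below; the unit-cell volumes used are hypothetical placeholders chosen to be consistent with the quoted contraction percentages (the actual Table 1 values are not reproduced here), and the rate across the SCO then comes out on the same order of magnitude as the quoted figure.

```python
# Back-of-envelope sketch of how the contraction rates quoted above are
# derived from unit-cell volumes; the volumes below are hypothetical
# placeholders (the actual Table 1 values are not reproduced here).
def contraction_rate(p1_mpa, v1_a3, p2_mpa, v2_a3):
    """Return (rate in Angstrom^3 per MPa, percent volume change)."""
    dv = v1_a3 - v2_a3
    rate = dv / (p2_mpa - p1_mpa)
    pct = 100.0 * dv / v1_a3
    return rate, pct

# Hypothetical smooth compression between 60 and 365 MPa ...
rate_lattice, pct_lattice = contraction_rate(60, 3294.0, 365, 3129.0)
# ... and a sharper drop across the pressure-induced SCO (365 -> 465 MPa).
rate_sco, pct_sco = contraction_rate(365, 3129.0, 465, 2994.0)

print(f"elastic regime : {rate_lattice:.2f} A^3/MPa ({pct_lattice:.1f}% contraction)")
print(f"across the SCO : {rate_sco:.2f} A^3/MPa ({pct_sco:.1f}% contraction)")
```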
Table 1. Unit-cell parameters and volume as a function of the in situ pressure (1 MPa = 10 bar), determined by X-ray diffraction using diamond anvil cells for a [Fe(PM-PEA)2(NCS)2] single crystal. The calculated standard deviations are of the order of magnitude of 0.005 Å on lengths, 1 Å³ on the volume and 0.02° on angles.

In order to proceed to a fine analysis of the structural properties of the pressure-induced SCO, the determination of the crystal structure at high pressure is mandatory. Unfortunately, the strong alteration of the crystal quality precluded this approach. One full data collection in the HS state was however performed at 55 MPa, and the results are used below.
Ex situ Pressure Effects
The study of the irreversible effects of applying a pressure on [Fe(PM-PEA)2(NCS)2] was also performed on single crystals. The latter were taken from the synthesis batch and the pressure was applied for a few minutes before being released. The XRD experiment was then performed at atmospheric pressure. A fresh crystal was taken for each pressure investigation. Since pressures higher than 400 MPa induce a SCO that strongly damages the sample, and since the aim was to investigate weak pressure effects, the study was limited to pressures lower than 200 MPa. For each sample, a full XRD data collection was run after the release of the pressure at room temperature and, in addition, a temperature-dependence study of the unit-cell parameters was performed to track the thermal SCO. Investigations were performed at pressures (MPa) of 0.1 (ambient pressure), 20, 80, 110 and 200, as well as 0.001, which corresponded to placing the sample under vacuum. The 0.1 and 20 MPa experiments were performed on the same crystal. Table 2 reports the main features of the crystal-structure determinations, together with those of the in situ study above.
The variation of the unit-cell parameters as a function of temperature is an efficient tool to visualize the thermal SCO. The comparison of this variation for a single crystal that had not been submitted to pressure (0.1 MPa) with that for a single crystal previously submitted to 20 MPa is shown in Figure 1. Remarkably, the SCO features appear completely modified by the pressure treatment. The SCO temperatures are strongly shifted to lower temperatures, with a more pronounced effect for T1/2↓, resulting in a colossal increase of the hysteresis width from 19 K to 63 K. The same procedure was applied for various ex situ pressures and the SCO temperatures are reported in Table 3. The same behavior is observed in each case, resulting in an increase of the hysteresis that is apparently not dependent on the value of the applied pressure. Even the use of a reduced pressure (0.001 MPa, i.e., vacuum) leads to the same conclusions. Consequently, changing the pressure on the sample once strongly enlarges the hysteresis width. This pressure effect is not reversible (at least for a few weeks) and appears independent of the pressure value. The increase of the hysteresis width shown here for [Fe(PM-PEA)2(NCS)2] confirms the irreversible effects of pressure already anticipated from magnetic measurements [18]. A fine comparison of both cases is hardly feasible due to the differences in experimental protocols and techniques. Note that a similar behavior was observed in a quite different SCO compound, [Fe(sal2-trien)][Ni(dmit)2], where the application of 50 MPa induced a very large increase of the hysteresis width, together with signs of an irreversible character of these modifications [17]. Even though never clearly evidenced, other signs of irreversible pressure effects were suspected in previous studies of SCO materials [10,19,48].
In the scrutiny of the structure-property relationships of the [Fe(PM-L)2(NCS)2] complexes, it was demonstrated that the SCO temperatures are mostly linked to the shortest S...C intermolecular distance between the sulfur atom of the NCS branch and the closest carbon atom of the neighboring molecule [49,50].
In the present case, this S...C distance is almost identical in all the crystal packings, since it is measured at 3.463(9) Å for the reference sample and lies in the range [3.457(6)-3.475(8) Å] for all the determined crystal structures of the pressurized samples. Taking into account the standard deviations, there are therefore no noticeable differences induced by pressure on this crucial structural feature. Furthermore, a superimposition of the atomic positions (Figure 1) shows no obvious structural change with pressure, as also illustrated by the small Root Mean Square Deviations (RMSD) between the initial crystal structure and the ones determined in this work (Table 2). Only one small difference can be noted, even though it is at the limit of the resolution. It concerns the intramolecular S...S distance, which is 7.384(2) Å at 20 MPa and higher pressures while it is 7.359(2) Å for the reference (Figure 1). This distance increases to 7.395(2) Å at 0.001 MPa. This change in the S atom positions could be the visible mark of a global but very subtle ordering of the crystal structure, not perceptible with the present results. As a general matter, the possible influence of very small changes of structural properties on SCO features, including the hysteresis width, is well demonstrated [44,51,52]. Another hypothesis is that the modifications of the SCO features observed here, but not explained by the examination of the crystal structures, could be related to another physical scale, namely the microstructural one. The latter refers to coherent-domain features and microstrains, which were not investigated here.
In any case, the pressure effect observed here should motivate more systematic investigations of ex situ pressure effects on SCO materials.
Magnetic Properties
Magnetic measurements were performed on the ten powder samples labelled from 1 to 10. Figure 2 reports the temperature dependence of the HS fraction γHS, which was extracted from equations 1 and 2 for 1-9. This allows the contribution of the paramagnetic manganese(II) ion (d5, S = 5/2) to be removed:

χMT = x (χMT)Fe + (1 − x) (χMT)Mn (1)

As already reported [42] and described above, compound 1 exhibits a complete spin crossover. In the present experimental conditions, the SCO temperatures are T1/2(↓) = 170 K and T1/2(↑) = 245 K, defining a large hysteresis loop, 75 K wide, which is larger than the previously reported values. This underlines the role of crystallinity in spin crossover, which can affect the conversion properties. As mentioned in part 2.1 of the present paper, small differences in crystalline quality are known to affect the SCO temperatures for this sample, which probably comes from the presence of defects [43,45]. The differences in SCO temperatures in comparison with those presented in the above discussion on external pressure probably come from the fact that the latter measurements were performed on samples in the form of single crystals.
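As a numerical illustration of this dilution correction, the sketch below inverts equation (1) for (χMT)Fe and converts it to a HS fraction. The spin-only Mn(II) contribution (about 4.375 cm³ K mol⁻¹ for S = 5/2, g = 2) and the linear normalization used for γHS are assumptions, since equation (2) is not reproduced here, and all numerical values are hypothetical.

```python
# Minimal sketch of the dilution correction in equation (1), assuming a
# spin-only Curie contribution for Mn(II) (S = 5/2, g = 2, i.e. about
# 4.375 cm^3 K mol^-1) and a simple linear normalization for gamma_HS;
# the paper's equation (2) is not reproduced here, so the normalization
# below is an assumption, not the authors' exact expression.
def gamma_hs(chiT_meas, x, chiT_fe_hs, chiT_fe_ls=0.0, chiT_mn=4.375):
    """HS fraction from the measured chi_M*T of a [Fe_x Mn_1-x ...] sample."""
    # Equation (1): chi_M*T = x*(chi_M*T)_Fe + (1 - x)*(chi_M*T)_Mn
    chiT_fe = (chiT_meas - (1.0 - x) * chiT_mn) / x
    # Assumed normalization between the LS and HS limits of the Fe(II) ion.
    return (chiT_fe - chiT_fe_ls) / (chiT_fe_hs - chiT_fe_ls)

# Hypothetical example: x = 0.79, measured chi_M*T of 2.8 cm^3 K mol^-1,
# and an assumed HS limit of 3.4 cm^3 K mol^-1 for the Fe(II) ion.
print(f"gamma_HS = {gamma_hs(2.8, 0.79, chiT_fe_hs=3.4):.2f}")
```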
Here, upon metal dilution from 1 to 9, (i) the hysteresis tends to disappear and (ii) to shift to low temperature, with (iii) a paramagnetic residue appearing at low temperature. Below 50 K, the spin states are frozen but the (χMT)Fe contributions exhibit a typical decrease assigned to the zero-field-splitting effect of HS iron(II) ions. This contribution is hard to quantify and was not removed to plot the γHS fraction. The derived temperature data are listed in Table 4. Note that the clear assignment of the transition temperature for 4-9 becomes uncertain, since the spin conversion is incomplete and very gradual. These values have been estimated at half of the transiting HS fraction. On the basis of this estimation, it appears that the warming branch is more significantly affected by the metal dilution than the cooling one. From 1 to 9, T1/2↑ decreases by 147 K while T1/2↓ only decreases by 72 K. This last shift could be underestimated due to an incomplete SCO. In previous works [39][40][41], we have shown that an increase of the low-temperature residual HS fraction may originate from static and kinetic effects. The static effect comes from the negative internal pressure generated by the Mn(II) ions in the Fe(II) SCO matrix. Indeed, the ionic radius decreases in the following order: r(Mn2+) (83 pm) > r(FeHS2+) (78 pm) > r(FeLS2+) (61 pm) [54]. The negative internal pressure stabilizes the HS state, which has a larger volume than the LS state, and may lead to a stable paramagnetic (HS) residue at low temperature [22][23][24][25][26][27][28][29]. The kinetic effect corresponds to an increase of the lifetime of the metastable state, which may be quenched at low temperature and gives rise to an alteration of the hysteresis loops due to the overlap between the T(LIESST) and hysteresis curves [39][40][41]. This alteration cannot be seen in the 1-9 compounds, either because the loss of hysteresis occurs before the overlap with T(LIESST) or because of the absence of any kinetic effect. Another unusual aspect of these experimental curves is the occurrence of a two-step SCO in the warming branch of the hysteresis in 3-5. The first step is gradual and is superimposed on the cooling branch. Upon further warming, an intermediate plateau appears, opening the hysteresis loop. This particular feature could come from the phase transition occurring in the pure iron compound, which concerns a smaller and smaller fraction of domains in the powder as the dilution increases. Indeed, between 1 and 2 there is a reduction of the hysteresis width from 75 K to 60 K, while from 2 to 5 this width does not change but the HS fraction involved in the hysteresis decreases, and it disappears in 6. One hypothesis to explain these two steps is the presence of two different phases for the LS state, leading to two different SCOs: one which exhibits the orthorhombic to monoclinic phase transition associated with the hysteresis loop, and one which does not present this phase transition and exhibits a gradual spin conversion. This second type leads to a gradual SCO and could correspond to the iron(II) ions located around the manganese(II) ions, which act as defects in the crystal packing.
T(LIESST) and T(TIESST) versus Internal Pressure
The metastable state has been investigated by light irradiation and by fast cooling of the HS phase. The LIESST effect was obtained by irradiating the sample at 10 K at 830 nm, which is the most efficient wavelength for this compound. Once the photostationary point was reached, the light was switched off and the temperature was increased at 0.4 K/min to record the T(LIESST) temperature [34][35][36]. The same procedure was followed after fast cooling of the high-temperature phase, which led to Thermally-Induced Excited Spin-State Trapping (TIESST). The T(LIESST) and T(TIESST) values are obtained from the minimum in the derivative of these curves.
Figure 3 reports the experiments performed on 1-9, and Table 4 reports the T(LIESST) and T(TIESST) temperatures. The first observation is that upon metal dilution the LIESST effect becomes more efficient, from 20% conversion in 1 up to complete photoswitching in 6-9. This increase of efficiency could follow from the loss of cooperativity, and probably the loss of the phase transition, as discussed above. Indeed, in the pure iron compound, cooperativity is so important that the rigidity of the structure could prevent an efficient light conversion by disfavoring the propagation of the inherent volume change upon the LS → HS conversion. Since the effects of metal dilution are first to weaken the cooperativity and second to favor the HS state through the internal pressure applied by the manganese ions, both effects should favor the efficiency of the light excitation. Another origin of this improved photo-conversion could be that the introduction of manganese ions reduces the strong metal-to-ligand charge-transfer band of the compound, allowing light to penetrate into the sample more efficiently and therefore increasing the photo-switching. This observation contrasts with the TIESST effect, whose efficiency increases from 1 (8%) to 3 (30%) but remains at this level upon further metal dilution. This is probably due to the experiment being only a fast cooling from 300 K to 10 K and not an instant quench, which strongly differs from the thermal quenching previously performed on an X-ray diffractometer, showing similar structures for the high-temperature and metastable HS phases [55]. Note that the discrepancy between magnetic and diffraction experiments in terms of quenching-effect investigation has been discussed elsewhere for a similar material [56]. This effect remains however quite surprising, since the efficiency of thermal quenching is expected to increase with the decrease of T1/2.
Furthermore, the T(LIESST) and T(TIESST) values are not affected by the insertion of manganese in the structure. This behavior has already been reported for [FexM1−x(phen)2(NCS)2] and [FexM1−x(bpp)2](NCS)2 [28,37] and has been interpreted as a clear illustration that T(LIESST) is of molecular origin and is almost independent of the collective character of the compound, this assumption being demonstrated elsewhere [44]. Similarly to these previous works, the appearance of the HS residue at low temperature occurs below 75 K, which corresponds to the end of the T(LIESST) and T(TIESST) curves. The stability of this residue has been tested by relaxation kinetics. The magnetic signal remains constant and no relaxation could be observed. This residue is either due solely to the effect of internal pressure following the metal dilution or to the low value of T1/2 leading to a freezing of the HS fraction.
Finally, the most surprising feature of the curves reported in Figure 3 is the T(TIESST) curves of 3-5. During the warming that follows the fast cooling, the HS fraction first decreases and the T(TIESST) value is measured; upon further warming the HS fraction is stable (3) or increases up to a maximum at 100 K (4 and 5). After this maximum, the HS fraction decreases and reaches the warming branch of the hysteresis loop, especially in 5. The fact that this phenomenon is not observed in the T(LIESST) curves is in favor of a phase transition occurring during the warming. The crystal structure of the photo-induced phase is unknown, while the thermally quenched state, HS1*, has the same crystal structure as the HS phase at room temperature, HS1. The photo-induced HS state is obtained from the LS state, labelled LS2, which has a different crystal arrangement from the HS1 and HS1* ones. This so-called HS2* state probably has the same structure as LS2. In the case of the thermally quenched HS state, HS1*, its relaxation with temperature probably competes with the structural P21/c-Pccn phase transition. In other words, these considerations reveal a complex interplay between structural phases and spin states in this compound. A deeper investigation of the metastable state is performed below, with the aim of proposing an overview of the phase diagram.
Focus on the Thermally Quenched Metastable State
The thermally quenched metastable state has been investigated in more depth, especially for compounds 3-5. The behavior of compound 4 was measured again on a freshly synthesized sample. Figure 4 reports the T(TIESST) curve followed by the warming branch of the hysteresis. The maximum observed around 95 K is recovered. Upon cycling, further fast thermal cooling drastically changed the shape of the T(TIESST) curves, the maximum at 95 K being strongly reduced in intensity from 40% to 30% of the HS fraction. This result demonstrates a change associated with the first cooling/heating cycle, without any further change during the next cycles. Based on the different results, we can propose a phase diagram to locate the different phases (Figure 5). At high temperature, the material is in its monoclinic HS1 structure. Slow cooling brings the material into the orthorhombic LS2 state, which is converted back to the HS1 state upon warming to room temperature. However, the presence of a stepped warming branch of the hysteresis of 3-5 indicates that, at low temperature, two kinds of LS state are present. LS2 should be the one exhibiting the crystallographic change responsible for the hysteretic part of the curve. The low-temperature part of the transition, the one which does not exhibit any hysteresis, could follow from the absence of any phase transition and therefore correspond to a LS1 → HS1 conversion.
Figure 5. Tentative summary of the spin-state phase diagram (a) of the compounds [FexMn1−x(PM-PEA)2(NCS)2] (0.70 < x < 0.85) from magnetic and photo-magnetic measurements (b). HS1 and HS1* crystallize in the monoclinic crystal system and LS2 crystallizes in the orthorhombic one. Label 1 refers to the monoclinic phase and label 2 to the orthorhombic phase.
Rapid cooling keeps the material in its HS1 metastable phase (denoted HS1*). Upon slow warming (0.4 K/min) this phase relaxes toward LS1, from which a new metastable HS state, HS*, is reached by further warming (Figure 5). On heating above 90 K, this new HS* state relaxes to LS2 and further to the HS1 state. In the second cycle, a new relaxation path is accessible, shortcutting the appearance of the HS* state. This HS* phase is partially hidden along this second relaxation path, which could be due to a loss of crystallinity upon thermal cycling. One hypothesis is that HS* could result from the LS1 → HS1 conversion that occurs in the 70-90 K range, together with the relaxation of the remaining HS1*, which could have been stabilized by the LS1 → HS1 conversion. Upon warming above 90 K, the remaining HS1* would relax to the LS2 state.
A subsequent experiment was conducted to compare the photo-induced behavior of the HS state reached by irradiation from the LS2 and from the LS1 state. Figure 6 reports the T(LIESST) curve of the HS state obtained from LS2 (green curve). The following T(LIESST) curve recovers the spin-crossover curve and the hysteresis above 60 K. No further bump in the curve was observed. On a fresh sample, we quickly cooled the sample to 10 K to reach HS1*. The temperature was increased up to 71 K (corresponding to the minimum of the T(TIESST) curve) and then quickly decreased to 10 K. The sample was then irradiated and the HS fraction reached almost 80%, indicating a better efficiency than from the LS2 state. Once the photostationary state was reached, the irradiation was switched off and the temperature was increased at a rate of 0.4 K/min (Figure 6). The T(LIESST) curve exhibits a first decrease around 65 K, similarly to the first T(LIESST) from the LS2 state. Upon further warming, a small bump is observed with a maximum at 90 K, corresponding to the HS* state. It seems that, upon irradiating the LS2 state, the HS state reached, HS2*, is different from the HS1* obtained from LS1. While HS2* relaxes directly to the LS2 state, HS1* relaxes first to LS1 and then to LS2, going through the intermediate HS* (Figure 5).
This particular shape of the T(LIESST) curve is rarely reported in the literature. A similar shape of T(LIESST) was however described in 2009 for a binuclear compound. That behavior was explained by a competition between the different states of the binuclear compound, namely HS-HS, HS-LS and LS-LS [56]. An analogy with such conclusions might be drawn here, due to the presence of two different metallic sites within the materials: those of Fe, which can undergo a SCO, and those of Mn, which do not.
Figure 6. Thermal behavior of the HS fraction, γHS, of a fresh sample of 4 (x = 0.79), upon slow cooling and warming, after first irradiation of the slowly cooled phase at 10 K, and after irradiation at 10 K of the quickly cooled state previously relaxed at 70 K.
Physical Measurements
Chemical CHNS elemental analyses were performed on a FlashEA-1112 microanalyzer (ThermoFisher Scientific, Waltham, MA, USA) with a Mettler Toledo MX5 microbalance (Mettler Toledo, Viroflay, France). Powder X-ray diffraction data were recorded using a PANalytical X'Pert MPD diffractometer (PANalytical, Almelo, The Netherlands) with Bragg-Brentano geometry, Cu Kα radiation and a backscattering graphite monochromator. Magnetic susceptibilities were measured in the 5-300 K temperature range, under an applied magnetic field of 1 T, using an MPMS5 SQUID magnetometer (Quantum Design, San Diego, CA, USA). The samples were precisely weighed, and corrections were applied to account for the diamagnetic contributions of the compound and the sample holder. Photomagnetic measurements were performed using an 830 nm photodiode coupled via an optical fiber to the cavity of the SQUID magnetometer. The optical power at the sample surface was adjusted to 5 mW·cm−2, and it was verified that this corresponds to a negligible change in the magnetic response due to heating of the sample. Photomagnetic samples consisted of a thin layer of compound whose weight was obtained by comparison of the thermal SCO curve with that of a more accurately weighed sample of the same material. Our previously published standardized method for obtaining LIESST data [34][35][36] was followed. After slow cooling down to 10 K, the sample was irradiated and the change in magnetism was monitored. Once the saturation point was reached, the laser was switched off and the temperature was raised at 0.4 K·min−1. During the heating ramp, the magnetization was measured every 1 K. T(LIESST) was determined from the minimum of the ∂(χMT)/∂T vs. T plot, as previously published [34][35][36]. In other experiments the sample was rapidly quenched at 10 K by inserting the sample holder (in less than 10 s) from room temperature into the SQUID cavity previously cooled at 10 K. The same procedure as for T(LIESST) was then followed to record T(TIESST).
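The extraction of T(LIESST) from the minimum of ∂(χMT)/∂T can be illustrated with the short sketch below; the warming curve is synthetic and its smooth sigmoidal shape is an assumption for illustration only, not measured data.

```python
# Minimal sketch of the T(LIESST) determination described above: the
# temperature of the minimum of d(chi_M*T)/dT along the 0.4 K/min warming
# ramp. The data below are synthetic, for illustration only.
import numpy as np

T = np.arange(10.0, 121.0, 1.0)             # one point per kelvin, 10-120 K
# Synthetic relaxation curve: chi_M*T plateaus, then falls near ~60 K.
chiT = 3.4 / (1.0 + np.exp((T - 60.0) / 5.0)) + 0.1

dchiT_dT = np.gradient(chiT, T)             # numerical derivative
T_LIESST = T[np.argmin(dchiT_dT)]           # minimum of the slope
print(f"T(LIESST) ~ {T_LIESST:.0f} K")
```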
External Pressure Investigated by Single-Crystal X-ray Diffraction
In situ pressure. The crystal structure at 55 MPa was determined from XRD data collected with a Bruker SMART CCD diffractometer (Bruker AXS, Karlsruhe, Germany), the sample being placed in the laboratory-designed quartz pressure cell previously described [57]. In this case, the crystal is set up in a water-filled capillary. Higher-pressure data were obtained using a laboratory-designed Ahsbahs-type diamond anvil cell (DAC), following a protocol described elsewhere [15,58]. The advantage of such a DAC is its wide aperture angle (342°), allowing almost the full reciprocal space to be collected, which is essential when dealing with low-symmetry crystal systems. The pressure dependence of the unit cell was measured on a Bruker-Nonius κ-CCD diffractometer with Mo-Kα radiation (0.71073 Å) at room temperature. The crystal was seriously damaged above 400 MPa, allowing the unit-cell determination but not a full crystal-structure determination. When the pressure was released, the sample was even more damaged, preventing any further analysis.
Ex situ pressure. Single crystals were placed in a leak-free tank under controlled nitrogen gas pressure, in the 20-200 MPa range, for 15 min each. The pressure was then released and the crystals were directly mounted on a Bruker-Nonius κ-CCD diffractometer with Mo-Kα radiation (0.71073 Å) at 293 K; full data collections were run and the crystal structures refined starting from the reference atomic parameters. The temperature dependence was measured using an Oxford Cryosystems 700 device (Oxford Cryosystems, Oxford, United Kingdom, 2001) installed on the diffractometer, in the range 310-130 K at a cooling/warming rate of 2 K·min−1. Crystals of this compound were very often damaged at the SCO (even with faster or slower cooling rates). For the investigation at 0.001 MPa, the single crystals were submitted to a primary vacuum for a few minutes. The experiment at 20 MPa was reproduced a few times, leading to the same results and allowing the irreversibility of the modifications due to the pressure treatment to be verified after a few weeks.
Conclusions
Firstly, the present results show that playing with pressure allows the SCO features of the [Fe(PM-PEA)2(NCS)2] compound, including the hysteresis width, to be modulated. Secondly, it opens the route to new characteristics, such as the irreversible modification of the SCO temperatures or a more effective photo-conversion rate. Thirdly, it reveals new aspects of this molecular compound, which are interpreted as arising from metastable phases reached by a combination of internal pressure and temperature changes, the latter referring to quenching effects. Finally, it opens new questions, such as the discrepancy between the SCO-feature modifications induced by internal and external pressure: internal pressure decreases the hysteresis width, while external pressure increases it.
This study probably marks the title compound as a peculiar one in the field of molecular iron SCO complexes, but it also reinforces the need to examine SCO materials under pressure. If the latter is not restricted to in situ application at high values but, on the contrary, considered in all its aspects, it may be used to enhance SCO features and to obtain new structural phases, which can be envisaged as an alternative to the search for new SCO compounds.
Figure 1. (a) Temperature dependence of the unit-cell parameter a for [Fe(PM-PEA)2(NCS)2] for an as-synthesized crystal (empty blue squares) and after the crystal was submitted to a pressure of 20 MPa (filled red circles), and (b) views of the superposition of the molecular structure before (blue) and after (red) application of a pressure of 20 MPa.
Table 2. Selected experimental and structural parameters from the full X-ray diffraction data collections at room temperature for single crystals of [Fe(PM-PEA)2(NCS)2] under diverse pressure conditions. The Root Mean Square Deviations (RMSD) with respect to the reference crystal structure of the compound are given.
Table 3. Spin-crossover features for [Fe(PM-PEA)2(NCS)2] as a function of the pressure applied ex situ. The SCO temperatures (T1/2↓ and T1/2↑) and the hysteresis width (∆T) come from single-crystal XRD measurements at variable temperature, by analogy with the results presented in Figure 1. The experimental temperature accuracy is estimated at 2 K.
Table 4. Summary of the experimental data from the magnetic study of [FexMn1−x(PM-PEA)2(NCS)2]. | 2016-03-14T22:51:50.573Z | 2016-03-04T00:00:00.000 | {
"year": 2016,
"sha1": "750830c02a80ed9e37cb85f4e25ed3ffe58feca4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2312-7481/2/1/15/pdf?version=1457087965",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "750830c02a80ed9e37cb85f4e25ed3ffe58feca4",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
88540323 | pes2o/s2orc | v3-fos-license | Studies of Cystoseira assemblages in Northern Atlantic Iberia
García-Fernández, A. & Bárbara, I. 2016. Studies of Cystoseira assemblages in Northern Atlantic Iberia. Anales Jard. Bot. Madrid 73(1): e035. The Iberian Peninsula contains 24 specific and infraspecific taxa of the genus Cystoseira, but only 6 inhabit Northern Iberia: C. baccata, C. foeniculacea, C. humilis var. myriophylloides, C. nodicaulis, C. tamariscifolia, and C. usneoides. The Cystoseira assemblages exhibit a complex structure and stratification that allows the presence of a large associated biota and a rich epiphytic flora. Although several species have been analyzed in depth in the Mediterranean Sea, the Atlantic ones are less studied. A revision of the literature (1931-2014) and of grey literature was made to assess the diversity of the North Atlantic Iberian Cystoseira assemblages. The community of C. baccata harbors the largest number of species (215), followed by C. tamariscifolia (162) and C. usneoides (126), whereas the community with the fewest species was that of C. foeniculacea (34). More than 70 species were present in the majority of the Cystoseira assemblages. This article also reviews environmental issues affecting the Cystoseira assemblages, such as pollution and anthropogenic pressures or disturbances that cause regression of their communities, and the effects of biological invasions by non-native species. In conclusion, it will be necessary to study the Cystoseira assemblages in depth, starting with research on C. baccata along Northern Iberia, as it is an exclusively Atlantic and widely distributed species with very scarce information concerning its role in structuring the communities.
DIVERSITY AND DISTRIBUTION OF THE GENUS CYSTOSEIRA
The genus Cystoseira C. Agardh was described in 1820, including 37 species, although its taxonomy and nomenclature have undergone many changes since then, because variability within the genus occurs not only among species but also among individuals of a single species and, seasonally, within a single individual. Moreover, in some species no holotype was designated in the species description, and lectotypes have yet to be chosen (Furnari & al., 1999). To complete the knowledge of Cystoseira (taxonomy and evolutionary origin), Draisma & al. (2010) performed a phylogenetic analysis of the Sargassaceae and found that Bifurcaria, Cystoseira, Halidrys, and Sargassum (as currently recognized) are polyphyletic and should each be split into two or more genera. The genus Cystoseira originated in the Tethys Sea during the Mesozoic; afterwards, some species stayed in the Indo-Pacific Ocean and others should have entered the Mediterranean Sea from the Atlantic Ocean during the Cenozoic, starting a speciation process that continues nowadays (Oliveras Plá & Gómez Garreta, 1989).
According to the literature (Gómez Garreta & al., 2000; Cormaci & al., 2012), Cystoseira species are plants about 1 meter high with a single primary axis or several primary axes in caespitose thalli, attached to the substratum by a conical disc or haptera. The apex is smooth or spinous and branching is abundant, radial or distichous, sometimes with small spine-like or filiform appendages. The branches may exhibit a characteristic greenish-blue iridescence. Some species present conical or ovoid tophules, arranged along the axis or grouped in the apical zone, and aerocysts, isolated or arranged in chains at the apices of the terminal branchlets. Receptacles usually develop at the upper parts of higher-order branchlets, but they are variable in shape, sometimes bifurcate or branched and with spine-like appendages. Conceptacles are generally hermaphrodite, although they can be unisexual at least during some periods of the year. Cryptostomata are present in most species, normally sunk into the branchlets and only occasionally pedicellate.
Among the 51 specific and infraspecific taxa of Cystoseira (Guiry & Guiry, 2014; Thibaut & al., 2014), 36 are present in the Mediterranean Sea, and 30 are endemic to this sea. The Iberian Peninsula contains 24 species (31 taxa, table 1); 14 taxa are exclusive to the Mediterranean Sea, 1 taxon to the Atlantic Ocean, and 9 taxa are present in both the Mediterranean Sea and the Atlantic Ocean. Six specific and infraspecific taxa inhabit the Northern Iberian coasts (table 2, figs. 1-2): C. baccata, C. foeniculacea, C. humilis var. myriophylloides, C. nodicaulis, C. tamariscifolia, and C. usneoides. Knowledge of the diversity of the genus Cystoseira is relevant and necessary for the protection and management of its populations, but at present it has been studied unevenly across regions and topics. Thus, although several species have been analyzed in depth in the Mediterranean Sea (morphology, taxonomy, diversity, assemblages, etc.), the Atlantic ones are less studied, especially in the Northern Iberian Peninsula.
Although there is basic knowledge on the habitat preferences of Cystoseira species, there are not many studies on the environmental factors affecting their distribution in the Mediterranean. Along these lines, Sales & Ballesteros (2009) obtained values of 14 environmental parameters in 103 coves surveyed on Menorca Island, which were added sequentially to a model in order to predict the composition of Cystoseira assemblages. They detected significant relationships between most of the factors and Cystoseira spp. composition and abundance, which shows that Cystoseira distribution is highly predictable from environmental variables.
The Atlantic Iberian Cystoseira species typically inhabit the subtidal, forming the canopy of the community from wave-exposed to sheltered areas. Some common subtidal species are C. baccata and C. usneoides, while others, such as C. humilis, inhabit upper to middle intertidal rock pools (Gómez Garreta & al., 2000). In this region, according to Templado & al. (2012), the Cystoseira species play an escort role and can become dominant when the other species are not present. Below the C. tamariscifolia band, other species are present, such as C. mauritanica, C. nodicaulis and, deeper, C. usneoides. On the Cantabrian coasts there is a characteristic community dominated by Gelidium corneum on exposed rocks, which can be accompanied by C. baccata and other species such as Mesophyllum lichenoides, Zanardinia typus, Pterosiphonia complanata, Corallina officinalis, Rhodymenia pseudopalmata, and Cryptopleura ramosa (Gorostiaga & al., 1998; Templado & al., 2012). Bermejo (2014) studied the genetics of C. amentacea, C. tamariscifolia, and C. mediterranea in the south of the Iberian Peninsula and found that individuals previously identified as C. amentacea in the Alboran Sea are more closely related to C. tamariscifolia from the Atlantic Ocean than to Mediterranean specimens of C. mediterranea or C. amentacea. Furthermore, the genetic patterns along the southern Iberian Peninsula show an important genetic flux between Atlantic and Mediterranean populations in the western and central Alboran. The results therefore suggest that all specimens of these three species found along the Alboran Sea can be considered one specific entity, probably C. tamariscifolia, so the morphological differences observed between C. tamariscifolia and C. amentacea from the southern Iberian Peninsula lack a genetic basis. Moreover, the results of Bermejo (2014) revealed that the greatest distances occur between sites rather than between groups of populations. The study of the genetic structure of threatened species with reduced dispersion, such as C. tamariscifolia, which play an important role in maintaining the biodiversity and ecosystem functioning of littoral communities of the Mediterranean and the nearby coasts of the Lusitanian provinces, could yield important information to favor the resilience of littoral communities or to develop suitable restoration.
In the sublittoral seaweed vegetation of the Basque coast (Gorostiaga, 1995; Díez, 1997; Gorostiaga & al., 1998; Díez & al., 1999; Santolaria, 2014), C. baccata is a very common species that inhabits a wide range of depth, exposure and sedimentation conditions. Gorostiaga (1995) compared this vegetation with that of the shallow zone of the French Basque coast, which is very similar, although with a greater abundance of Gelidium corneum and C. tamariscifolia. Gorostiaga & al. (1998) explain that, although the macroalgal cover was floristically very homogeneous, Plocamium cartilagineum, Pterosiphonia complanata, Asparagopsis armata, C. baccata, Halopitys incurvus, and Corallina officinalis were the most abundant macrophytes. However, with increasing sedimentation, Gelidium corneum cover decreased while the macrophytes C. baccata and Zanardinia typus became more abundant. Sedimentation was a determining factor in seaweed distribution, and the main trends were: (i) the maximum algal cover corresponded to Gelidium corneum beds; as sediment increased to moderate levels, the first change detected was the reduction of the crustose and epiphytic layers, due to the decrease of Mesophyllum lichenoides, Plocamium cartilagineum, and Dictyota dichotoma; (ii) the most abundant species along the vegetation gradient presented distribution patterns associated with sedimentation. Pterosiphonia complanata and C. baccata were well adapted to sedimentation, showing optimum development at moderate to high levels. In habitats highly exposed to wave action without sediment, Pterosiphonia complanata is displaced by Gelidium corneum. In contrast, C. baccata does not tolerate strong hydrodynamic conditions and only competes with Gelidium corneum in semi-exposed conditions.
The species of Cystoseira generally support a considerable epiphytic flora (Belegratis & al., 1999). The epiphytes of two Mediterranean species (C. compressa and C. spinosa) were studied by Belegratis & al. (1999) by transplanting plants to different sites. Epiphytic seasonality was generally observed at all sites, which suggests the absence of host-specific epiphytes. Moreover, a distinct zonation pattern of epiphytes covering only certain host areas was not observed. Most floristic and vegetation studies carried out in Northwestern Spain list epiphytic species, but these are not used to characterize differences among communities, as few studies have focused on the Cystoseira epiphytes. According to Rull Lluch & Gómez Garreta (1989), Morales-Ayala & Viera-Rodríguez (1989), Arrontes (1990), and Otero-Schmitt & Pérez-Cirera (1996), an epiphytic stratification with three strata can be considered: (i) attaching discs, (ii) main axes and branches, and (iii) branchlets and phylloids. However, the host plants occur in different vegetation belts and wave exposures, and these factors may be more important in characterizing epiphytism on Cystoseira than the structure of the host itself. In addition, the fall of phylloids and branchlets usually occurs in winter, causing important variations in the epiphytic species that can be found on some parts of the hosts, so the perennial axes support a more stable flora.
Otero-Schmitt & Pérez-Cirera (1996) studied the epiphytism on four species of Cystoseira (C. baccata, C. tamariscifolia, C. humilis var. myriophylloides, and C. usneoides) that develop large and differentiated communities on the Galician coast. According to these authors, the generic specificity is small: of 125 epiphytic species, nearly half were only found on a single Cystoseira species. Rhodophyta were the most abundant epiphytic group and Cyanophyta were the scarcest. Most epiphytic species were ephemerophytes or hypnophytes (Otero-Schmitt & Pérez-Cirera, 1996). The cover of epiphytic species was maximal on C. tamariscifolia and C. humilis var. myriophylloides, whereas on C. usneoides it was much lower. The cover on C. baccata was also quite high, but less than on C. tamariscifolia. The greater number of epiphytic species on C. tamariscifolia could be partly explained by its position in the littoral zone. On the other hand, the mechanical action of sand grains among the fronds, mainly in winter, results in a lower abundance of epiphytes on C. humilis var. myriophylloides (Otero-Schmitt & Pérez-Cirera, 1996). The presence of epiphytes is more or less regular in C. tamariscifolia, with a higher abundance in spring and summer, except in C. humilis var. myriophylloides. By contrast, C. baccata and C. usneoides presented the lowest variations, probably because of their optimal development in the subtidal, with a maximum in summer and a minimum at the end of autumn (Morales-Ayala & Viera-Rodríguez, 1989; Otero-Schmitt, 1993; Otero-Schmitt & Pérez-Cirera, 1996).
DISTURBANCES IN THE CYSTOSEIRA ASSEMBLAGES
In the literature (Belegratis & al., 1999; Sales & al., 2011; Sales & Ballesteros, 2012; Templado & al., 2012) it is reported that assemblages of Cystoseira have regressed considerably during the last decades in several Mediterranean localities, a fact attributed mainly to the negative impact of pollution and other anthropogenic pressures on most species of the genus Cystoseira. Moreover, five Cystoseira taxa (C. amentacea, C. mediterranea, C. sedoides, C. spinosa, and C. zosteroides) are currently listed as strictly protected species under the Berne Convention (Annex I, 1979), and all the Mediterranean species of the genus Cystoseira, except C. compressa, have been listed under Annex II of the Barcelona Convention (2010). Moreover, all the Mediterranean Cystoseira species are under surveillance by international organizations such as the IUCN, the RAC/ASP and MedPan (Thibaut & al., 2014). Monitoring studies generally suggest pollution as the main factor driving the disappearance of Cystoseira spp.; however, there are few studies providing experimental evidence for the disappearance of Cystoseira species in relation to pollution. Belegratis & al. (1999) pointed out that one of the most negative effects is eutrophication, as high nutrient levels trigger the growth of epiphytes and phytoplankton, which concurrently inhibit host growth through shading; as a consequence, host-epiphyte complexes ultimately decline and are replaced by phytoplankton-dominated systems. However, other factors such as inorganic chemical pollution, increased turbidity levels, overgrazing and climate change could be other possible causes (Sales & Ballesteros, 2009). In addition, their data show a positive relationship of rich and well-developed Cystoseira assemblages with distance from urbanization and low levels of nutrient concentration (Sales & Ballesteros, 2009), and the results of the study of Sales & al. (2011) suggest that heavy metal pollution could be negatively affecting survival and growth of Cystoseira species, with species-specific responses. In their study, individuals of three Cystoseira species were transplanted from non-polluted to slightly polluted and heavily polluted areas, in places known to have had Cystoseira spp. populations before pollution increased one century ago. Effects of pollution were species-specific: negative effects on survival of C. barbata and growth of C. crinita were detected in specimens transplanted to the heavily polluted area. Pollution could have been the cause that led to the disappearance of Cystoseira species in the past; however, neither survival nor growth of any of the Cystoseira species was negatively affected at the slightly polluted area, and growth of C. barbata was even favored (Sales & al., 2011).
Although great efforts are directed in the EU to improve water quality through the implementation of the Water Framework Directive, and Cystoseira species are used as indicators of good water quality, no recovery of Cystoseira populations after improvement of water quality has been detected. Therefore, some authors (Belegratis & al., 1999; Sales & al., 2011; Bermejo & al., 2012; Sales & Ballesteros, 2012; Templado & al., 2012) call for alternative management measures that facilitate the re-establishment of Cystoseira populations in areas where water quality has improved. In the Balearic Islands, Sales & Ballesteros (2007) found nine taxa of Cystoseira, some of them widely distributed around the island but others scarcely spread. Although these differences are probably due to physical causes rather than to pollution or anthropogenic disturbances, as sheltered Cystoseira assemblages are strongly determined by the geomorphological features of the coast, Sales & Ballesteros (2007) proposed to use Cystoseira assemblages as ecological indicators in biological monitoring for water quality assessment under the EEC Water Framework Directive, since they are very good indicators.
Because of the sedentary condition of attached macroalgae, which integrate the effects of long-term exposure to nutrients and/or other pollutants, the use of these benthic organisms as bioindicators to assess pollution levels in the marine environment has proved successful in many ecological studies (Gorostiaga & Díez, 1996; Díez & al., 1999; Bermejo & al., 2012; Santolaria, 2014). As macroalgal communities provide habitat and harbor for a wide variety of organisms, changes in these communities will have significant effects on shore ecosystems (Bermejo & al., 2012). Hernández & al. (2011) studied the vegetation in the intertidal zone of the port of Tarifa (southern Iberia). Some of the species found are listed in the catalogue of endangered species, can be used as bioindicators, and deserve special attention; accordingly, the Cystoseira species have recently been included in the list of endangered species of the Mediterranean (Hernández & al., 2011).
Regression of the Cystoseira assemblages has also been detected in Northeastern Atlantic Iberia (Gorostiaga & Díez, 1996; Díez, 1997; Díez & al., 2009; Santolaria, 2014); these studies pointed out that the Cystoseira species are sensitive to contamination, as C. baccata and C. tamariscifolia were not present in polluted areas. Gorostiaga & Díez (1996) found that in these unstable environments the community responds by simplifying its structure: reducing the number of layers, reducing vegetal cover and allowing a proliferation of opportunistic species with simple morphology, especially ceramiaceous algae. There is also a proliferation of sciaphilous and sedimentation-resistant species. The crustose layer, made up of species having these characteristics, shows strong development in polluted environments. Díez & al. (2009) found that only the most degraded assemblages experienced a significant increase in algal cover, revealing that this structural community parameter is not relevant in distinguishing between moderately degraded and unaltered vegetation. These results suggest that a significant reduction in algal cover takes place when a threshold of pollution intensity is exceeded. Likewise, the degree of water motion, depth, salinity and the nature of the pollution discharged seem to play major roles in the algal cover response. The conclusion of this study is that, following pollution abatement, there was a partial recovery of the intertidal phytobenthic assemblages. Intertidal vegetation at the degraded sites has become progressively more similar to that of the reference site, characterizing five succession stages. The Cystoseira species only appear in the last recovery stage, the reference stage, so the first sign of degradation of natural communities is the loss of large perennial macrophytes such as Cystoseira. In this line, Santolaria (2014) pointed out that where contamination was worst, macrophytes such as Cystoseira spp. were absent and replaced by caespitose algae such as Gelidium pusillum and Caulacanthus ustulatus; however, with the progressive recovery of water quality, the Cystoseira species appear again, which would indicate the full biological recovery of the station.
Biological invasions are another disturbance of the Cystoseira assemblages; in marine ecosystems they have been increasing all around the world, mainly due to human activities such as international shipping, aquaculture and the aquarium trade. The brown macroalga Sargassum muticum, native to East Asia, is considered an invasive species around the world, being distributed mainly on sheltered or semi-exposed rocky shores, and regularly invades the habitats of algal species of the genus Cystoseira (Vaz-Pinto & al., 2014). Previous studies (Sánchez & Fernández, 2005; Olabarria & al., 2009) showed the impact of the invasive Sargassum muticum on native assemblages, with a limited impact on native assemblages in northern Spain. Native species of Cystoseira can be displaced by Sargassum muticum (Critchley & al., 1986; Viejo, 1997; Engelen & Santos, 2009), and it causes changes in the structure of the native communities (Britton-Simmons, 2004). This could be explained because, although Sargassum muticum has a small basal disc, its larger branches shade the basal strata and compete for light and nutrients (Critchley & al., 1986; Viejo, 1997; Britton-Simmons, 2004; Sánchez & Fernández, 2005). Furthermore, the normal growth of Sargassum results in higher growth rates during shorter periods of time than those of Cystoseira (Rico & Fernández, 1997), and its productivity is higher than in native species such as C. baccata or Saccorhiza polyschides (Fernández & al., 1990). In southwestern Portugal, Engelen & Santos (2009) found that the progression of the Sargassum muticum invasion modulates the environment to its own requirements, and the combination of K-selected traits and an increase in population growth rate when Sargassum muticum became more dominant suggested that competition with the native species C. humilis was an important biotic filter for the establishment phase of the Sargassum muticum invasion. However, Arenas & al. (1995) suggested that reproductive investment was higher in C. nodicaulis, so the successful colonization of Sargassum muticum on northern Spain shores is likely due to its large production of embryos. In addition, Vaz-Pinto & al. (2014) suggested a better nutritional strategy of C. humilis than of Sargassum muticum to cope with the limiting nutrient conditions of intertidal rocky pools, contrary to expectations. In conclusion, Sargassum muticum has little effect on native communities that are poorly invaded (Viejo, 1997; Sánchez & Fernández, 2005) but exhibits an important effect under high density and size of the non-native species (Britton-Simmons, 2004).
Undaria pinnatifida, another non-native species, quickly colonizes the substrata, and in some geographic areas it is the dominant species, triggering decreases in the abundance of native species. It is an opportunistic species with a high capacity to colonize new substrata but, as it appears in empty spaces, it is not very competitive under natural and stable conditions (Eno & al., 1997). In Galicia it appears in the C. baccata assemblage, having no important impact on the community (Cremades & al., 2006).
The non-native species Codium fragile affects native species such as Codium tomentosum because it is very competitive and aggressive. Empty space on the substrata, caused by exploitation or habitat damage, makes it easier for Codium fragile to colonize, causing changes in the benthic communities and affecting sedimentation processes (Harris & Tyrrel, 2001; Levin & al., 2002).
FUTURE RESEARCH ON THE CYSTOSEIRA ASSEMBLAGES IN NORTH ATLANTIC IBERIA
Taking into account how little is known about the Cystoseira communities of the Atlantic Iberian coast, especially the northern coasts, it will be necessary to study their assemblages in depth with regard to habitat, structure, diversity, seasonal changes, disturbance effects, non-native and invasive species, long-term changes, protected areas, etc. At present, the most urgent task is an extensive study of C. baccata in Northern Iberia: it is an exclusively Atlantic species that is widely distributed along the coast, yet very little is known about how its communities function, although it plays a key role in structuring them. Furthermore, it is accompanied by the highest number of species and hosts the highest number of epiphytes. In addition, C. baccata lives together with four other Cystoseira species, so studying the C. baccata communities will also yield information on more species of Cystoseira. The need to study the Cystoseira communities of the North Atlantic Iberian coasts becomes apparent when reading Templado & al. (2012), since they state that, in general, the Cystoseira species play an accompanying role, whereas in the literature the Cystoseira assemblages exhibit an important role in the communities of the North Atlantic Iberian coasts, beyond that of mere escort species. What is more, Templado & al. (2012) only mentioned C. baccata in a single paragraph, as a species that sometimes appears in the Gelidium corneum communities. However, C. baccata is one of the most important and widely distributed species on the North Atlantic Iberian coasts, developing its own communities, which have the highest diversity among the Cystoseira communities of these coasts. In addition, there are several biological invasions that disturb the habitat by occupying the substrata and shading the Cystoseira canopy (Arenas & al., 1995; Sánchez & Fernández, 2005), so it is important to know the distribution of non-native species and their impacts, especially in Galicia, as some rías are important hotspots of introduced marine species (Bárbara & al., 2008).
Fig. 2. Northern Atlantic Iberian species of Cystoseira: a, C. nodicaulis with Sargassum muticum on sand-covered subtidal rocks; b, c, detail of C. nodicaulis with aerocysts (b) and basal tophules (c); d, shallow subtidal assemblage of C. tamariscifolia with C. baccata; e, detail of C. tamariscifolia and its aerocysts; f, g, lower intertidal community of C. tamariscifolia; h, subtidal community of C. usneoides; i, j, detail of C. usneoides with tophules (i) and chains of small aerocysts (j); k, big subtidal thallus of C. usneoides.
Table 2. North Atlantic Iberian species of Cystoseira and their features.
Table 3. Associate flora of Cystoseira assemblages in the Northern Atlantic Iberian Peninsula.
Table 4. Summary of the associate flora of Cystoseira assemblages in the Northern Atlantic Iberian Peninsula (columns: C. baccata, C. foeniculacea, C. humilis var. myriophylloides, C. nodicaulis, C. tamariscifolia, C. usneoides).
The number of exclusive species varies between Cystoseira assemblages. No species were exclusive to the C. foeniculacea or the C. nodicaulis communities, whereas the C. humilis var. myriophylloides and the C. tamariscifolia communities exhibit Amphiroa vanbosseae and Jania longifurca, respectively, as characteristic species. The C. usneoides community contained two species (Erythroglossum laciniatum and Brongniartella byssoides) not present in other Cystoseira assemblages. The Cystoseira baccata community comprises a great number of species (Table 3) that are absent or scarce in other Cystoseira assemblages, such as Phyllariopsis brevipes subsp.
"year": 2016,
"sha1": "395ac28b3d89f736f91a2450786916f321880f78",
"oa_license": "CCBY",
"oa_url": "http://rjb.revistas.csic.es/index.php/rjb/article/download/444/447",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "395ac28b3d89f736f91a2450786916f321880f78",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
18198515 | pes2o/s2orc | v3-fos-license | Sustained efficacy, immunogenicity, and safety of the HPV-16/18 AS04-adjuvanted vaccine
HPV-023 (NCT00518336; ClinicalTrials.gov) is a long-term follow-up of an initial double-blind, randomized (1:1), placebo-controlled study (HPV-001, NCT00689741) evaluating the efficacy against human papillomavirus (HPV)-16/18 infection and associated cyto-histopathological abnormalities, persistence of immunogenicity, and safety of the HPV-16/18 AS04-adjuvanted vaccine. Among the women, aged 15-25 years, enrolled in HPV-001 and who participated in the follow-up study HPV-007 (NCT00120848), a subset of 437 women from five Brazilian centers participated in this 36-month long-term follow-up (HPV-023), for a total of 113 months (9.4 years). During HPV-023, anti-HPV-16/18 antibodies were measured annually by enzyme-linked immunosorbent assay (ELISA) and pseudovirion-based neutralisation assay (PBNA). Cervical samples were tested for HPV DNA every 6 months, and cyto-pathological examinations were performed annually. During HPV-023, no new HPV-16/18-associated infections or cyto-histopathological abnormalities occurred in the vaccine group. Vaccine efficacy (VE) against HPV-16/18 incident infection was 100% (95% CI: 66.1, 100). Over the 113 months (9.4 years), VE was 95.6% (86.2, 99.1; 3/50 cases in vaccine and placebo groups, respectively) against incident infection; 100% (84.1, 100; 0/21) against 6-month persistent infection (PI); 100% (61.4, 100; 0/10) against 12-month PI; 97.1% (82.5, 99.9; 1/30) against ≥ASC-US; 95.0% (68.0, 99.9; 1/18) against ≥LSIL; 100% (45.2, 100; 0/8) against CIN1+; and 100% (-128.1, 100; 0/3) against CIN2+ associated with HPV-16/18. All vaccinees remained seropositive to HPV-16/18, with antibody titers remaining several folds above natural infection levels, as measured by ELISA and PBNA. There were no safety concerns. To date, these data represent the longest follow-up reported for a licensed HPV vaccine.
The primary objective of HPV-023 was to evaluate long-term VE against incident cervical infection with HPV-16 and/or HPV-18 in young women who were previously uninfected with HPV-16 or HPV-18. Secondary objectives were to evaluate long-term VE against persistent infection (6- and 12-mo definitions) and cyto-histopathological abnormalities associated with HPV-16 and/or HPV-18, as well as VE against incident and persistent infection and cyto-histopathological abnormalities associated with non-vaccine oncogenic types. Other objectives included the evaluation of long-term vaccine immunogenicity and safety. Further, and specific to this paper, mathematical modeling was developed in order to predict the long-term vaccine-induced HPV-16 and HPV-18 antibody titers up to 20 y post-vaccination.
*Cervarix ® is a registered trademark of the GlaxoSmithKline group of companies.
Results
Of the 1113 women enrolled in HPV-001 (including 506 in Brazil), 776 continued into HPV-007 (448 in Brazil). Of the women from the Brazilian centers who were invited to participate in HPV-023, 437 agreed to continue, and 431 (98.6%) completed the study. A total of 399 women were included in the according-to-protocol (ATP) efficacy cohort and 304 in the ATP immunogenicity cohort. In summary, 85.2% of Brazilian women enrolled in HPV-001 completed HPV-023 (Fig. 1).
Demographic characteristics were similar between the ATP cohorts and the total vaccinated cohort (TVC), between both study groups in HPV-023, and between the Brazilian women enrolled in HPV-001 and those enrolled in HPV-023 (Table S1). 11 Mean age at HPV-023 study entry was 26.5 y (standard deviation [SD]: 3.1), and mean age at entry into HPV-001 was 19.9 y (3.0) for the Brazilian women entering HPV-023. The study population of HPV-023 was racially diverse, with 57.7% being Caucasian. The mean follow-up time since first vaccination in HPV-001 was 107 mo (8.9 y [SD: 0.4]), with a maximum duration of 113 mo (9.4 y).

Table 1 footnotes: # combined analysis of the initial study (HPV-001), first follow-up study (HPV-007) and current study (HPV-023) in the sub-population of women who were enrolled at Brazilian centers in the current study; * ATP efficacy cohort: women who met all eligibility criteria, complied with study procedures in preceding and current studies and had data available for efficacy measures; ** TVC-efficacy: women who were enrolled in the current study, had received all 3 doses of study vaccine or placebo in the initial study (as determined by the inclusion criteria of HPV-007) and for whom endpoint measures were available; † one subject reported a case of persistent infection (12-mo definition) associated with HPV-18 which was already taken into account for the HPV-16/18 analysis in HPV-001/007 and was therefore not counted twice in this study. Vaccine = HPV-16/18 vaccine study group; Placebo = placebo group; N = total number of women; n = number of women reporting ≥1 event; CI = confidence interval; ≥ASC-US = atypical squamous cells of undetermined significance or greater; ≥LSIL = low-grade squamous intraepithelial lesion or greater; CIN1+ = cervical intraepithelial neoplasia grade 1 or greater; CIN2+ = cervical intraepithelial neoplasia grade 2 or greater; T(year) = sum of follow-up periods expressed in years, censored at the first occurrence of the event in each group.
Efficacy against incident and persistent infection

Primary endpoint
During the entire 36-mo period of HPV-023, no incident HPV-16/18 infection occurred in the vaccine group, whereas 9 cases occurred in the placebo group, resulting in 100% VE (95% CI: 66.1 to 100). Sustained VE against HPV-16/18 incident infection was also observed in the combined analysis (Table 1 and Fig. 2; Fig. S1).
Secondary endpoints
There were no cases of either 6- or 12-mo HPV-16/18 persistent infection in the vaccine group vs. 4 cases and 1 case, respectively, in the placebo group during the 36-mo follow-up. In the combined analysis, sustained VE was observed for both the 6-mo (Fig. 2; Fig. S1) and 12-mo definitions of persistent infection with HPV-16/18 (Table 1).
VE against incident or persistent infection (6- and 12-mo definitions) associated with 'any oncogenic' HPV type could not be demonstrated during the 36-mo follow-up of HPV-023 or over the 113 mo of follow-up (Table 2).
Results from the TVC were consistent with the results obtained from the ATP cohort.
Sustained VE (combined analysis) was observed for ≥ASC-US.

Table 2 footnotes: # combined analysis of the initial study (HPV-001), first follow-up study (HPV-007) and current study (HPV-023) in the sub-population of women who were enrolled at Brazilian centers in the current study; § oncogenic types HPV-16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 66, and 68 combined; * ATP efficacy cohort = women who met all eligibility criteria, complied with study procedures in preceding and current studies and had data available for efficacy measures; ** TVC-efficacy = women who were enrolled in the current study, had received all 3 doses of study vaccine or placebo in the initial study (as determined by the inclusion criteria of HPV-007) and for whom endpoint measures were available. Vaccine = HPV-16/18 vaccine study group; Placebo = placebo group; N = total number of women; n = number of women reporting ≥1 event; CI = confidence interval; ≥ASC-US = atypical squamous cells of undetermined significance or greater; ≥LSIL = low-grade squamous intraepithelial lesion or greater; CIN1+ = cervical intraepithelial neoplasia grade 1 or greater; CIN2+ = cervical intraepithelial neoplasia grade 2 or greater; T(year) = sum of follow-up periods expressed in years, censored at the first occurrence of the event in each group.
remaining stable thereafter (Fig. 3). Compared with levels following natural infection, IgG levels in the vaccine group were 10.8-fold and 10.0-fold higher for HPV-16 and HPV-18, respectively. In a subset of women from the vaccine group (n = 55), 100% were positive for neutralising antibodies against both HPV-16 and HPV-18 up to 113 mo post-vaccination, with neutralising antibody kinetic patterns similar to those of the total IgG antibodies. Neutralising antibody levels in the vaccine group were 7.7-fold and 4.0-fold higher for HPV-16 and HPV-18, respectively, compared with levels elicited by natural infection (Fig. 4).
Predicted long-term persistence of antibody responses
Using the data from up to 113 mo of follow-up, both the piece-wise and modified power-law models predicted that geometric mean titers (GMTs) remain well above natural infection levels when the data are extrapolated to the 20-y time-point (Figs. 5 and 6 and Table 3). The piece-wise model, which takes into account a decrease of antibody levels over time, predicts antibody GMTs to remain above natural infection levels for 32.3 y and 20.5 y for anti-HPV-16 and anti-HPV-18 antibodies, respectively (Table 3). Unlike those of the modified power-law model, the GMTs predicted by the piece-wise model tend to increase with an increasing duration of follow-up and an increasing amount of measured data, narrowing the gap between the piece-wise and modified power-law predictions.
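As a rough illustration of how a "years above the natural-infection benchmark" figure can be derived from a log-linear final decay phase, consider the back-of-envelope sketch below. The starting GMT and the slope are hypothetical placeholders, not the fitted study parameters; only the 29.8 EL.U/mL HPV-16 benchmark is taken from the natural-infection values reported for study HPV-008.

```python
import math

gmt_now = 300.0   # hypothetical GMT (EL.U/mL) at ~9.4 years post-vaccination
natural = 29.8    # natural-infection benchmark for HPV-16 (EL.U/mL, ELISA)
slope = -0.04     # hypothetical decline in log10(GMT) per year (final phase)

# time until the log-linear decay reaches the benchmark level
years_above = (math.log10(natural) - math.log10(gmt_now)) / slope
print(f"GMT predicted to stay above the benchmark for ~{years_above:.1f} more years")
```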
Safety
The occurrences of medically significant adverse events (AEs) (listing those AEs with more than 1 case report in either the vaccine or placebo arm), serious adverse events (SAEs) (listing those SAEs with more than 1 case report in either the vaccine or placebo arm), new onset chronic diseases (NOCD), and new onset autoimmune diseases (NOAD) are described in Table 4. The most common medically significant AEs were reported in the vaccine group, with 5 (2.2%) cases each of gastritis, incomplete spontaneous abortion, depression, and hypertension, out of a total of 224 women. The most common medically significant AE reported in the placebo group was genital herpes (3 [1.4%] cases). None of the women in the vaccine group reported genital herpes. There were no withdrawals due to AEs, and no AE was considered to be possibly related to the study vaccine or placebo.
Nine women reported at least one NOCD during HPV-023, 6 of them in the vaccine group (Table 4). Of the 6 events in the vaccine group, 4 were identified as a NOAD (hypothyroidism, rheumatoid arthritis, and vitiligo).
A total of 103 pregnancies were reported by 94 women during the 36-mo follow-up period of HPV-023 (Table 5). Eight women in the vaccine group reported a spontaneous abortion compared with 4 women in the placebo group during HPV-023.
Principal findings
No breakthrough cases of infection or cyto-histopathological abnormalities associated with HPV-16/18 occurred in the vaccine group over the 36-mo period of HPV-023. This was associated with high and sustained levels of IgG (measured by enzyme-linked immunosorbent assay [ELISA]) and neutralising antibodies (measured by pseudovirion-based neutralisation assay [PBNA]) against HPV-16 and HPV-18, with vaccine-induced GMTs being well above titers associated with clearance of natural infection in other studies. 16,24 Neutralising antibodies against vaccine types are likely to be a major basis of protection against HPV infection, and the correlation between the antibodies measured by ELISA and PBNA has been previously demonstrated. 25 Further, there was some indication in the results that the vaccine offered protection against other oncogenic types beyond HPV-16/18. However, few cases of incident or persistent infection or cyto-histopathological endpoints associated with HPV-31 and HPV-45, the 2 types most closely related to HPV-16 and HPV-18, respectively, and the most frequent types associated with cervical cancer after HPV-16 and HPV-18, were observed. 3 Finally, the safety profile was also clinically acceptable. All reported SAEs and pregnancy outcomes in this study were considered as unrelated to the vaccine.
These data add an additional 36 mo to those already collected from extended follow-up studies of the HPV-16/18 vaccine in a sub-set of participants assessed at 48 (4.5 y) and 77 (6.4 y) mo post initial vaccination. 22,23 This now provides a total of 113 mo (9.4 y) of follow-up evaluating the durability of the immune response elicited by the primary vaccination.
In addition, the predicted duration of anti-HPV-16 and anti-HPV-18 antibody responses following vaccination was explored by mathematical modeling. Factors that can influence long-term immunity include the peak level of antibody response 1 mo after the last vaccine dose, rates of B-cell decay and proliferation, B-cell immunologic memory, cell-mediated immunity, and individual variability. 26,27 Based on data from the initial and follow-up studies, the results of the modeling predict that anti-HPV-16 and anti-HPV-18 antibody levels will decrease but will remain several folds higher than those associated with natural infection for at least 20 y post-vaccination. These results provide circumstantial evidence that, should a booster be needed, this need will not occur before a substantial amount of time has elapsed after vaccination, which is consistent with previous modeling results. 26 Note that the study population in HPV-023 is different from that in David et al. 26 with respect to size and geographic origin, as the modeling of the HPV-023 data was based solely on the Brazilian cohort. The limitations of mathematical evaluations have been discussed. 26 While the clinical relevance of long-term antibody persistence is being investigated, modeling of predicted GMTs is informative for clinicians and policy makers until these long-term observational data are available. 26

Figure 3. Seropositivity rates and geometric mean titers for anti-HPV-16 (A) and anti-HPV-18 (B) antibodies, measured by ELISA (ATP immunogenicity cohort). ATP immunogenicity cohort = women who met all eligibility criteria (all had received 3 doses of vaccine or placebo), complied with study procedures in the current and preceding studies, and had data available for at least one vaccine antibody blood sample. Data are shown for the women enrolled in the Brazilian centers for the initial, first follow-up, and current studies. Histogram bars show the GMT and corresponding 95% confidence intervals (CI). ELISA = enzyme-linked immunosorbent assay; HPV = HPV-16/18 vaccine group; Placebo = placebo group; PRE = pre-vaccination; PII = post dose II; PIII = post dose III; M = month; EL.U/mL = ELISA units/mL. Figures above the bars are the seropositivity rates for the corresponding timepoint. The horizontal line represents the IgG antibody level in women from a phase III efficacy study (HPV-008, NCT00122681) who had cleared a natural infection before enrolment. IgG GMTs corresponding to natural infection in study HPV-008 were 29.8 EL.U/mL (95% CI: 28.5 to 31.0) for HPV-16 and 22.6 EL.U/mL (95% CI: 21.6 to 23.6) for HPV-18, measured by ELISA. 16

Strengths of study

To ensure consistency in the collection of efficacy, immunogenicity and safety data, the study design of HPV-023 closely followed the design of HPV-001 and HPV-007. Treatment allocation remained double-blinded throughout all studies. Further, we considered all endpoints, irrespective of causal HPV type. Histological diagnoses were determined by a panel of expert gynecological pathologists who were unaware of the women's treatment or history of cervical disease. Women who reached an endpoint (virological, cyto-histopathological) for a specific HPV type in a previous analysis of the preceding studies were censored from the analyses related to the same endpoint in HPV-023. As a result of the high VE, this occurred more often for women in the placebo group than in the vaccine group. Importantly, the combined analysis presented here was not affected by censoring, and therefore provides valuable information.
Finally, this study represents the longest follow-up of a clinical trial evaluating the efficacy, immunogenicity and safety of a licensed HPV vaccine to date, gathering data over 113 mo post-initial vaccination.
Comparison with other studies

These data are very much in line with previous follow-up studies of the HPV-16/18 vaccine, which indicate that the vaccine provides long-term immunogenicity and prevents HPV-16/18 infection and associated disease in vaccinated women. 8,11,28,29 Another study has shown that the HPV-16/18 vaccine generates an anamnestic response (renewed rapid production of an antibody on a subsequent encounter with the same antigen) after a fourth dose of vaccine in seropositive women from a similar cohort. 10 A larger sample size than that recruited in this study is needed for a full assessment of type-specific cross-protection; however, other larger studies of the HPV-16/18 vaccine have shown that it offers substantial cross-protection against cervical infection and cyto-histopathological endpoints associated with various combinations of non-vaccine oncogenic types, including HPV-31, HPV-33, HPV-45, and HPV-51 individually. 6,17,28 Regarding the assessment of safety, pooled safety analyses of the HPV-16/18 vaccine have shown this vaccine to be generally well tolerated in women of all ages. 20,30,31

Limitations of study

This study presents results showing significant VE in the cumulative analysis across HPV-001/007/023 and confirms a high rate of VE against HPV-16/18 infection. It did not have the power to show efficacy against more stringent outcomes (persistent infection, cytology and histology including CIN2+) during the immediate study (HPV-023) period, due to the small number of events in the control arm. While the clear explanation for this is the small sample size, the limited length of follow-up (36 mo) and, with this, the limited number of events, it could also relate to the aging of the study population, given that the incidence of HPV infection is highest right after sexual debut and declines thereafter.
Therefore, we undertook a combined analysis, as it provides a global overview of vaccine protection up to 113 mo post-vaccination, with high VE for all HPV-16/18 virological and most HPV-16/18-associated cyto-histopathological endpoints. Even for this combined analysis, the limited power prevented any conclusions from being drawn on the long-term durability of protection elicited by the vaccine against non-vaccine types. As another limitation, no randomization occurred at the start of the follow-up study HPV-023. This might have an influence on the HPV-023 study analysis, as opposed to the combined analysis over the entire follow-up period up to 113 mo post-vaccination.
The safety data may suggest a non-significant signal toward more AEs occurring in the vaccine group, although there is no clinical pattern of increased incidence of (medically significant) AEs due to the vaccine, since most reported events were single events. However, again, as the power is low, these results may also be consistent with chance. A pooled analysis of safety data, including 57,580 subjects and 96,704 doses of HPV-16/18 vaccine, collected from 42 completed and on-going clinical studies, showed that the incidences and distribution of AEs were similar across HPV-16/18 vaccinees and controls, with no new safety signals identified. 30 Further, HPV-023 was not powered to assess pregnancy outcomes. However, we present a descriptive table that summarizes the findings from the 36-mo follow-up in women vaccinated 9 y earlier. The results do call attention to the occurrence of twice as many spontaneous abortions in the HPV-16/18 vaccine arm compared with the control arm. This again could be a result of chance. A large pooled analysis of miscarriage outcomes in 3599 pregnant women vaccinated with HPV-16/18 vaccine or placebo from 2 multicenter phase III randomized trials, conducted by the National Cancer Institute, concluded that there was no overall effect of the HPV-16/18 vaccine on the risk of miscarriage (estimated risk of miscarriage being 11.5% and 10.2% in the HPV-16/18 group and control group, respectively). 32 Pregnancy outcomes monitored in post-marketing settings in women who inadvertently received the HPV-16/18 vaccine during pregnancy are in line with published reports for similar populations. 31

Figure 4. Seropositivity rates and geometric mean titers for (A) anti-HPV-16 and (B) anti-HPV-18 antibodies, measured by PBNA (ATP immunogenicity cohort). ATP cohort for immunogenicity = women who met all eligibility criteria (all had received 3 doses of vaccine or placebo), complied with study procedures in the current and preceding studies, and had data available for at least one vaccine antibody blood sample. Data are shown for a subset of the women enrolled in the Brazilian centers for the initial, first follow-up, and current studies. Histogram bars show the GMT and corresponding 95% confidence intervals (CI). PBNA = pseudovirion-based neutralisation assay; PRE = pre-vaccination; PII = post dose II; PIII = post dose III; M = month. Figures above the bars are the seropositivity rates for the corresponding timepoint. The horizontal line represents the IgG antibody level in women from a phase III efficacy study (HPV-010, NCT00423046) who had cleared a natural infection before enrolment. IgG GMTs corresponding to natural infection in study HPV-010 were 180.1 ED50 (95% CI: 153.3 to 211.4) for HPV-16 and 137.3 ED50 (95% CI: 112.2 to 168.0) for HPV-18, measured by PBNA. 24
Conclusion
No breakthrough cases of HPV-16/18 infection or related cervical lesions occurred in the vaccinated cohort over the 36-mo study period. The HPV-16/18 AS04-adjuvanted vaccine also continued to provide high and sustained levels of IgG and neutralising antibodies against HPV-16 and HPV-18, with antibody titers remaining several folds above natural infection levels, up to 113 mo post-vaccination. This study represents the longest follow-up in a clinical trial setting of a licensed vaccine containing the 2 most frequently observed oncogenic types, HPV-16 and HPV-18, confirming the previous estimations by mathematical models. 26 These results should provide confidence in the duration of protection offered by HPV mass vaccination programs existing in a number of countries around the world.
Study design and participants
In 2001, healthy women aged 15-25 y were recruited from North America (USA, Canada) and Brazil into an initial double-blind, randomized, multi-center vaccination study (HPV-001; NCT00689741). 21 Eligible study participants were HPV-16 and HPV-18 seronegative by enzyme-linked immunosorbent assay (ELISA), HPV DNA-negative in the cervix by polymerase chain reaction (PCR) for 14 oncogenic types (HPV-16, -18, -31, -33, -35, -39, -45, -51, -52, -56, -58, -59, -66, -68), and had normal cervical cytology at baseline. Women were randomized (1:1) to receive 3 doses of either the HPV-16/18 vaccine or placebo (Al[OH]3) at 0, 1, and 6 mo, as previously described. 21,22 Women who had received all 3 doses of vaccine or placebo and whose treatment allocation had remained blinded were invited to take part in HPV-007 (NCT00120848). 23 Due to the high retention rate of subjects in the HPV-007 Brazilian cohort, women from 5 hospital-based Brazilian centers who had received all 3 doses of HPV-16/18 vaccine or placebo (Al[OH]3) at 0, 1, and 6 mo, as previously described, 21,22 and whose treatment allocation had remained blinded in both HPV-001 and HPV-007, were invited to take part in this long-term follow-up study, HPV-023. 11 HPV-023 started in November 2007 and lasted for 3 y. Results of the initial and follow-up studies have been previously reported. 21-23

Intervention

No vaccine or placebo was administered in HPV-023. To ensure consistency in the collection of efficacy and safety data, the study design of HPV-023 closely followed the design of HPV-001 and HPV-007. Detailed methodologies of the initial and follow-up studies, including interim analyses of HPV-023, have been reported previously. 8,11,21,23 Treatment allocation remained double-blinded throughout all studies.
Similar to HPV-001 and HPV-007, women continued to receive gynecological care according to Brazilian standards in HPV-023.
Women who received the placebo in HPV-001 and who had remained blinded throughout the studies were offered a cross-over vaccination course (0, 1, and 6 mo) with the HPV-16/18 vaccine at the completion of HPV-023. At the time HPV-023 was conducted, HPV vaccination was not routinely offered to women in the Brazilian Health System.

Figure 6. Anti-HPV-18 antibody responses predicted by the modified power-law (A), piece-wise model (B), and their comparison (C), up to 20 y. GMT = geometric mean titer; EL.U/mL = ELISA units/mL; natural infection = mean antibody titers associated with natural infection, obtained from women enrolled in a phase III efficacy study (HPV-008, NCT00122681). 16

The follow-up evaluations performed in HPV-023 were conducted in accordance with the Declaration of Helsinki and the International Conference on Harmonisation Good Clinical Practice Guidelines. The protocol and other materials were approved by the Independent Ethics Committee or Institutional Review Board of each study center and the National Committee of Ethics and Research. Written informed consent was obtained from all women before any study procedure was performed.
Main outcome measures
Virology and cyto-histopathology

Gynaecological examinations were performed, cervical swabs were collected every 6 mo for HPV-DNA typing, and cytology specimens were collected every 12 mo (or at 6-mo intervals, if driven by the clinical management algorithm). Methods were described previously. 21,22 A broad-spectrum PCR system (SPF10-DEIA-LiPA25) was used to test cervical samples and biopsy material for 25 HPV types, including the well-defined 14 oncogenic types. 33 SPF10-DEIA-positive specimens were tested by line probe assay and type-specific HPV-16 and HPV-18 PCR-DEIA, as previously described. 21,22 VE was calculated against the following clinical endpoints associated with oncogenic HPV types: incident infection, 6-mo and 12-mo persistent infection, cytological abnormalities (≥ASC-US), and histopathological lesions (CIN1+ and CIN2+). Definitions of these endpoints were described previously. 8,11

Immunogenicity

Blood samples were collected at months 0, 7, 12, and 18 during HPV-001 and on a yearly basis during the follow-up studies (HPV-007 and HPV-023). As women were enrolled into the follow-up studies independently of the date of first vaccination, results from these studies are presented according to 6-mo intervals relative to the time of first vaccination for each woman.
Antibody titers to HPV-16 and HPV-18 (total IgG) were measured by ELISA in all women as described previously. 21,22 Neutralising antibody titers to HPV-16 and HPV-18 were assessed using the PBNA in a subset of women. 24,34 Results are presented along with the GMTs of 'cleared' natural infection (i.e., women DNA-negative and seropositive at enrolment) obtained from Phase III studies HPV-008 and HPV-010 (NCT00423046) as benchmarks. 16,24 The ratios between GMTs and natural infection levels were calculated for both HPV-16 and HPV-18 antibodies when measured either by ELISA or PBNA. For each calculation, the timepoint considered was the timepoint with the lowest GMT value between the timepoints M101-106 and M107-113.
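For readers unfamiliar with GMTs, the sketch below shows the calculation of a geometric mean titer and its fold-ratio to a natural-infection benchmark. The individual titer values are hypothetical placeholders; only the 29.8 EL.U/mL benchmark is the HPV-16 ELISA natural-infection value reported for study HPV-008.

```python
import math

def gmt(titers):
    """Geometric mean titer: exponential of the mean of the log titers."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

vaccine_titers = [250.0, 410.0, 320.0, 290.0, 380.0]  # hypothetical EL.U/mL values
natural_infection_gmt = 29.8                           # HPV-16 benchmark (ELISA, HPV-008)

fold = gmt(vaccine_titers) / natural_infection_gmt
print(f"GMT = {gmt(vaccine_titers):.1f} EL.U/mL, "
      f"{fold:.1f}-fold above the natural-infection level")
```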
Safety

SAEs, medically significant AEs (i.e., AEs or SAEs prompting emergency room or physician visits that were not related to common diseases) and NOCDs (e.g., NOADs, asthma, type I diabetes) were recorded. Pregnancies and their outcomes were also recorded.
Statistical methods
Two interim analyses were performed after 1 and 2 y of HPV-023 follow-up. 8,11 The overall α value for all analyses was 0.05 (two-sided test). Alpha values were adjusted for the 2 interim analyses: for the first and second interim analyses, α = 0.001 (two-sided), and for the final analysis, α = 0.049 (two-sided). Based on an estimated 6% cervical infection rate, a minimum 80% VE, and a 10% discontinuation rate per year for women enrolled in the trial, the power at the end of the trial was estimated at 83%. Combined analyses (HPV-001/007/023) were descriptive.
VE was calculated against each of the clinical endpoints associated with HPV-16/18 and also considered all oncogenic HPV types, for the time period of HPV-023. The conditional exact method was used to estimate VE and exact 95% confidence intervals (CIs) around the rate ratio (ratio of the event rates in the vaccine group vs. placebo group). The calculation took into account the follow-up time of the women within each group (T[year]). VE was defined as one minus the rate ratio.
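A hedged sketch of this conditional exact approach is given below: conditional on the total number of cases, the vaccine-group count is binomial with a probability determined by the rate ratio and the group person-times, so inverting a Clopper-Pearson interval for that probability yields an exact interval for the rate ratio and hence for VE. The case counts echo the 3 vs. 50 incident infections reported in the abstract, but the person-time values are invented for illustration, and the exact study computation may differ in detail.

```python
from scipy.stats import beta

def ve_conditional_exact(cases_v, cases_p, t_v, t_p, alpha=0.05):
    """VE = 1 - rate ratio, with an exact CI from the conditional binomial."""
    k = cases_v + cases_p
    # Clopper-Pearson limits for pi = P(a case falls in the vaccine group)
    lo = beta.ppf(alpha / 2, cases_v, cases_p + 1) if cases_v > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, cases_v + 1, cases_p) if cases_v < k else 1.0
    # invert pi = RR*t_v / (RR*t_v + t_p)  =>  RR = pi*t_p / ((1 - pi)*t_v)
    rr_lo = lo * t_p / ((1 - lo) * t_v)
    rr_hi = hi * t_p / ((1 - hi) * t_v) if hi < 1 else float("inf")
    point = (cases_v / t_v) / (cases_p / t_p) if cases_p else float("nan")
    return 1 - point, 1 - rr_hi, 1 - rr_lo  # VE with lower and upper bounds

ve, ve_lo, ve_hi = ve_conditional_exact(cases_v=3, cases_p=50, t_v=1500.0, t_p=1450.0)
print(f"VE = {ve:.1%} (exact 95% CI: {ve_lo:.1%} to {ve_hi:.1%})")
```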
Primary analyses of efficacy were performed on the ATP efficacy cohort for virological endpoints (incident and persistent infection). Because of the more limited number of events, primary efficacy analyses related to cyto-histopathological endpoints (≥ASC-US, ≥LSIL, CIN1+, and CIN2+) were performed on the TVC with efficacy results available (TVC-efficacy). The analyses of immunogenicity were performed on the ATP immunogenicity cohort. The primary safety analysis and the mathematical modeling analysis were performed on the TVC. The ATP immunogenicity and efficacy cohorts included women who met all eligibility criteria, complied with study procedures in the current and preceding studies, and had data available for at least one vaccine antibody blood sample (ATP immunogenicity cohort) or for the efficacy measure considered (ATP efficacy cohort). The TVC included women who were enrolled in the current study, had received at least one dose of study vaccine or placebo in the initial study, and for whom endpoint measures were available. All women enrolled in the initial study were randomized and received at least one dose of vaccine or placebo in accordance with the randomization schedule. 21 Women were not included in immunogenicity assessments if HPV infection was detected for the type under consideration during the study periods, in order to exclude any influence of a natural infection on the immune response. Women were censored from efficacy assessments once a specific endpoint was met in the current or preceding studies.
Incidence rates were compared between groups using Fisher exact test. The null hypothesis was that the expected incidence rate during the considered period was similar in both groups.
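The sketch below shows a between-group comparison of incidence using Fisher's exact test on a 2x2 case/non-case table. The counts are illustrative (0 vs. 9 incident infections, with denominators loosely based on the group sizes mentioned in the safety section), not the actual analysis dataset, which used person-time-based rates.

```python
from scipy.stats import fisher_exact

cases_vaccine, n_vaccine = 0, 224   # illustrative counts
cases_placebo, n_placebo = 9, 213

# 2x2 table: rows = groups, columns = cases / non-cases
table = [[cases_vaccine, n_vaccine - cases_vaccine],
         [cases_placebo, n_placebo - cases_placebo]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact p = {p_value:.4f}")
```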
In addition, a descriptive combined analysis was performed (for each virological and cyto-histopathological endpoint, VE estimates were calculated and 95% CIs provided) for the total follow-up period, up to 113 mo post-vaccination. Time-to-event curves for HPV-16/18 incident infection and HPV-16/18 6-mo persistent infection, which include VE data from HPV-001/007/023, were generated for the ATP efficacy cohort and the TVC.
Furthermore, an exploratory analysis was performed for each cyto-histopathological endpoint, irrespective of HPV DNA association for the total follow-up period, up to 113 mo post-vaccination.
Safety data are presented for all 3 y of HPV-023, with data collected from the end of HPV-007 up to the final visit (month 36) in HPV-023.

Table 4 footnotes: the total vaccinated cohort (TVC) included all subjects who came to the first visit and received at least one dose of vaccine or placebo; Vaccine = HPV-16/18 vaccine study group; Placebo = placebo group; N = number of subjects; n (%) = number (percentage) of subjects reporting at least one symptom; CI = confidence interval; NOCD (GlaxoSmithKline assessment) = new onset chronic disease; NOAD = new onset autoimmune disease; * listing those events with more than 1 case report in either the vaccine or placebo arm.
Mathematical modeling
To assess the persistence of HPV-16 and HPV-18 vaccine-induced antibody responses, the individual antibody levels of each woman at each timepoint in HPV-023 (i.e., up to 7.3 y [89 mo], up to 8.4 y [101 mo] and up to 9.4 y [113 mo]) were retrospectively fitted to 2 different statistical mixed-effects models, the modified power-law and piece-wise models, separately for both total anti-HPV-16 and anti-HPV-18 antibodies, as previously described. 26,27 The modified power-law model aims to describe and predict antibody titers (in logarithm) over time as a logarithmic function based on parameters such as the biological dynamics of B-cell turnover and the proportion of antibodies produced by memory B-cells. The piece-wise model describes and predicts the antibody titers (in logarithm) over time according to 3 linear functions over 3 time intervals (month 7-month 12, month 12-month 21, and month 21 onwards), and is also based on parameters reflecting the dynamics of B-cell turnover. 26
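The fixed-effects skeleton of such a piece-wise model can be sketched as below: log titers are modeled as three connected linear segments with breakpoints at months 12 and 21, fitted by least squares on a continuous piece-wise basis. The simulated titers are placeholders; the published models were mixed-effects models fitted to individual-level data, so this is only an illustration of the functional form, not the study's implementation.

```python
import numpy as np

# simulated population-level log10 titers at the study's visit months
months = np.array([7, 12, 18, 24, 36, 48, 60, 77, 89, 101, 113], dtype=float)
log_titer = np.array([3.9, 3.3, 3.0, 2.9, 2.8, 2.75, 2.7, 2.65, 2.62, 2.6, 2.58])

b1, b2 = 12.0, 21.0  # breakpoints (months) between the three linear segments
X = np.column_stack([
    np.ones_like(months),           # intercept
    months,                         # slope of the first segment
    np.clip(months - b1, 0, None),  # slope change after month 12
    np.clip(months - b2, 0, None),  # slope change after month 21
])
coef, *_ = np.linalg.lstsq(X, log_titer, rcond=None)

def predict(m):
    # extrapolation beyond the data continues the final segment linearly
    x = np.array([1.0, m, max(m - b1, 0.0), max(m - b2, 0.0)])
    return x @ coef

print(f"predicted log10 GMT at 20 years (month 240): {predict(240.0):.2f}")
```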
Disclosure of Potential Conflicts of Interest
All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that the Institutions of P.
Authors and Sponsor Contributions
The trial was funded by GlaxoSmithKline Biologicals SA, who designed the study in collaboration with investigators, and coordinated collection, analysis, and interpretation of data. Investigators collected data for the trial and cared for the participants. All authors had full access to the trial data. All authors contributed to the study design, and/or analysis and/or interpretation of data, reviewed manuscript drafts, approved the final manuscript as submitted, and had final responsibility for the decision to submit for publication. All authors take responsibility for the overall content of the manuscript. GlaxoSmithKline Biologicals SA covered all costs associated with the development and publishing of the present publication. The manuscript was developed and coordinated by the authors in collaboration with an independent medical writer and a publication manager working on behalf of GlaxoSmithKline Vaccines.
Ethical Approval
Studies were conducted in conformity with country or local requirements regarding ethics committee review, informed consent, and other statutes or regulations regarding the rights and welfare of human subjects participating in biomedical research. The protocol and other materials were approved by the Independent Ethics Committee or Institutional Review Board of each study center and the National Committee of Ethics and Research. Written informed consent was obtained from all participants before any study procedure was performed. The data presented in this manuscript do not contain any personally identifiable information.

Table 5 footnotes: TVC = total vaccinated cohort, which included all subjects who came to the first visit and received at least one dose of vaccine or placebo; Vaccine = HPV-16/18 vaccine study group; Placebo = placebo group; n = number of pregnancies; n (%) = number (percentage) of pregnancies in a given category.
"year": 2014,
"sha1": "762f0ff850bdd742fb8fc1270ea09da091e226b9",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/hv.29532?needAccess=true",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "762f0ff850bdd742fb8fc1270ea09da091e226b9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13040186 | pes2o/s2orc | v3-fos-license | Quality management systems for your in vitro fertilization clinic’s laboratory: Why bother?
ABSTRACT

Several countries have in recent years introduced prescribed requirements for treatment and monitoring of outcomes, as well as a licensing or accreditation requirement for in vitro fertilization (IVF) clinics and their laboratories. It is commonplace for Assisted Reproductive Technology (ART) laboratories to be required to have a quality control system. However, more effective Total Quality Management systems are now being implemented by an increasing number of ART clinics. In India, it is now a requirement to have a quality management system in order to be accredited and to help meet customer demand for improved delivery of ART services. This review contains the proceedings of a quality management session at the Indian Fertility Experts Meet (IFEM) 2010 and focuses on the creation of a patient-oriented best-in-class IVF laboratory.
INTRODUCTION
In reproductive medicine, a new frontier was opened and new hope was given to infertile couples when the first baby conceived in vitro was born in 1978. [1] According to the World Health Organization, 80 million couples worldwide are infertile, with 15% of them residing in India. [2,3] On a conservative basis, the market for infertility treatment could be estimated at over Rs. 25,000 crores per year. [2,3] The success rate of ART procedures is <30% per started cycle, [4] and the whole exercise can be both expensive and emotionally draining, given the personal and societal pressures on the individuals involved. Nowadays, in vitro fertilization (IVF) has become a routine and commonly accepted treatment option for infertility. New IVF-based techniques, such as cryopreservation using vitrification, micromanipulation of gametes and embryos, and pre-implantation genetic diagnosis (PGD), have provided prospective parents with a wide range of new reproductive options. Today, ART refers not only to IVF but also to several variations tailored to patients' unique conditions. The transfer of this technology has resulted in advanced ART practices being offered in almost all countries.
THE CONCEPT OF QUALITY
Quality of care is a multi-dimensional concept, encompassing treatment efficacy and impact on health and welfare of both patients and offspring. In addition, the concept of quality includes the cost in financial and human terms of achieving the desired outcome.
To optimize quality using the Total Quality Management approach, it is necessary to map all processes, to thoroughly describe all procedures involved in the processes, and to define performance targets for each procedure. There should also be an assessment of how the procedures might fail; the impact of the failure and the possible cause(s) of such failure. It is then necessary to ensure that the clinic and its staff have the requisite skills, knowledge and equipment to achieve the performance targets. Finally, the performance must be monitored, both with regard to absolute measures as well as trends. In all instances where performance falls outside the set limits or is trending towards a non-conformance with targets, corrective actions must be taken and documented. As a first step in the corrective action it must be clearly established what has occurred. Then there must be an analysis of possible causes of the failure with a view to identifying deficiencies in the system that allowed the failure to occur [ Figure 1]. Quality management is a larger concept than quality assurance and quality control, which are subsets of quality management. [6] There is growing recognition that quality management not only ensures improvement of the clinical aspects of the operations of a clinic, but also leads to improved financial performance and increased staff satisfaction. [7]
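To make the monitoring and corrective-action loop described above concrete, the short sketch below flags a quarterly performance indicator that either falls outside set limits or trends toward non-conformance. It is a minimal illustration only: the limits, window size, and example values are hypothetical stand-ins, not clinical targets from this review.

```python
# Minimal sketch of indicator monitoring: flag values outside set limits,
# and flag a sustained trend toward non-conformance. The limits and the
# trend threshold below are hypothetical illustrations, not clinical targets.
from statistics import mean

def check_indicator(values, lower=0.25, upper=0.45, trend_window=4):
    """Return a list of alerts for a quarterly indicator series."""
    alerts = []
    for i, v in enumerate(values):
        if not (lower <= v <= upper):
            alerts.append(f"Quarter {i + 1}: value {v:.2f} outside limits")
    # Simple trend check: compare the mean of the most recent window
    # to the mean of the window before it.
    if len(values) >= 2 * trend_window:
        recent = mean(values[-trend_window:])
        prior = mean(values[-2 * trend_window:-trend_window])
        if recent < prior and min(values[-trend_window:]) - lower < 0.03:
            alerts.append("Downward trend approaching the lower limit")
    return alerts

# Example: quarterly clinical pregnancy rates (illustrative numbers only).
print(check_indicator([0.34, 0.33, 0.31, 0.29, 0.28, 0.27, 0.27, 0.26]))
```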
CREATING A PATIENT-ORIENTED BEST-IN-CLASS IN VITRO FERTILIZATION LABORATORY
Setting up an ideal IVF laboratory is vital to success in ART, and the critical determinants of the function of the IVF laboratory are people, procedures, equipment, and laboratory design.
People, their characteristics and management
An embryologist using his or her personality, knowledge and skills must be capable of taking initiatives and improving the laboratory. Skill is an absolute requirement along with the ability to collate and structure information from the laboratory and literature, so that one can analyze, foresee and solve problems that occur in the daily practice. Also, communication skills are important to connect with staff, patients and society. The embryologist must also have good writing skills in order to author quality management documents. Furthermore, in any organization, relationship skills are important to enable honest and open discussion in the workplace. The knowledge of cell and reproductive biology must be of sufficient depth to allow the embryologist to analyze and solve problems from basic physiological principles. It is important that the embryologist is imbued with a culture of intellectual and scientific rigor that allows independent search for knowledge and solutions instead of unquestioning references to statements of authorities in the field. Trust and integrity are fundamental in any workplace and it is crucial that there is comprehensive and honest reporting of all data. Good working conditions and good equipment should be provided for all staff to help them do a good job. Maintaining openness drives accountability, which in turn drives performance.
Procedures, documentation and data
All processes should be mapped, using appropriate flow chart methodology. The process map then forms the basis of descriptions of procedures, how they should be performed, and what outcome is to be expected (performance indicators). These descriptions are often called standard operating procedures (SOPs). The SOPs should be structured in a standardized format and their distribution must be controlled. Finally, they should be based on documented scientific evidence and require regular updating. The performance indicators should be collected in a computer database. Many comprehensive systems are available commercially and provide administrative as well as medical functionalities and allow easy report generation. Some clinics develop their own IT solutions, sometimes based on spreadsheets (e.g., MS Excel) and sometimes built in relational database applications (e.g., MS Access), the latter being preferable. Ideally, the database should provide information on demographics, medical history, investigations, treatments, observations and outcomes. Data should be analyzed regularly, with regard to both absolute levels and trends in the data.
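As a concrete illustration of the relational approach favored above, the sketch below stores per-cycle indicators in a single table using Python's built-in sqlite3 module and derives one output-quality indicator from it. The table name, columns, and rows are illustrative assumptions, not a schema prescribed by this review; a real system would model demographics, history, investigations, treatments, observations, and outcomes in separate, linked tables.

```python
# Minimal sketch of a relational store for performance indicators, assuming
# hypothetical table and column names.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE cycle (
        cycle_id INTEGER PRIMARY KEY,
        patient_id INTEGER NOT NULL,
        started DATE NOT NULL,
        embryos_transferred INTEGER,
        clinical_pregnancy INTEGER  -- 1 = yes, 0 = no
    )
""")
con.executemany(
    "INSERT INTO cycle VALUES (?, ?, ?, ?, ?)",
    [(1, 101, "2013-01-10", 2, 1), (2, 102, "2013-01-15", 1, 0)],
)
# Clinical pregnancy rate per started cycle -- one of the output-quality
# indicators discussed in the text.
rate = con.execute("SELECT AVG(clinical_pregnancy) FROM cycle").fetchone()[0]
print(f"Clinical pregnancy rate: {rate:.0%}")
```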
Procedures should maximize the chance of success and minimize risk. Prior to the implementation of any new method, it needs to be validated and monitored in the current setting. Importantly, the laboratory staff members need to undergo training and prove competence for each procedure performed.
The data on the performance of the clinic as a whole, but also of the individuals should be collected and analyzed regularly. Data should be audited, assessed and structured to discern the input quality, the process quality and the output quality as appropriate. The list of data that is required for collecting and auditing is presented in Table 1. In addition, data on the functioning of equipment and technical systems, e.g., air quality and level of microbial contamination, must be collected and regularly audited.
For assessment of laboratory functioning (process quality), the proportion of embryos reaching developmental milestones at pre-defined time points of embryo culture is often used. The most sensitive indicator of the performance of the culture system is the average cell number at, e.g., 42-44 h post insemination. The most significant measure of output quality is the implantation rate rather than the clinical pregnancy rate. Obviously, detailed records must be maintained for proper risk assessment in the clinic.
Equipment and consumables and supporting physiological processes
All consumables that come in contact with gametes and embryos must be validated as non-toxic to embryos and supportive of normal embryo development. All new equipment must be validated before use; it should be reliable in function and tolerant of non-ideal conditions. Equipment must be acquired on the basis of its performance characteristics, not its price. The four key environmental variables (temperature, pH, osmolarity and contamination) must be controlled and carefully monitored from the "Tip-to-Tip" of the culture system (tip of the oocyte aspiration needle to the tip of the embryo transfer catheter). It is critical that the variables are measured at the point where the embryos are handled. For example, it must be ensured that the temperature of the culture medium in the dish is 37°C. The temperature of the heated stage then needs to be set at whatever value is needed to maintain the temperature of the medium in the dish at the desired level.
Work stations and laboratory design
Work stations should provide filtered air and a heated surface to maintain optimum temperature without any contamination. Ergonomically designed equipment is preferred to alleviate unnecessary strain on the embryologist. The laboratory design should be based on clean room technology, using low emission materials with a clearly defined air quality. Work-flow should be planned to ensure a minimal distance between the incubator, the work station and the microscope. This will also minimize the risk of collisions between laboratory staff members. Before starting operations, the reliability of the laboratory equipment and processes must be checked in dry runs and all problems should be corrected before treatment of patients is commenced.
Quality management standards
There are several quality management standards, such as International Organization for Standardization (ISO) 9001:2008 for certification of a QMS (ascertaining that conditions will allow quality targets to be achieved) and ISO 15189 for accreditation of clinical laboratories (ensuring that the lab is doing what it says it does). Data acquired between 2006 and 2010 from Nurture IVF, UK [Figure 2] show that, over time, the variations in clinical pregnancy rate were moderate (5%, with an upward trend) and the rate of improvement was 0.2% per quarter or 0.9% per year. However, the average improvement rates in the UK and in Sweden were between 0.3% and 0.4% per year, and that is the benchmark against which improvement initiatives should be measured. The American Society for Reproductive Medicine (ASRM) and the European Society of Human Reproduction and Embryology (ESHRE) provide guidelines for good IVF laboratory practice. [8,9] These guidelines offer support and guidance to laboratory staff and deal with all aspects required to provide a safe working system for people in an IVF laboratory.
Future developments in embryology
The future of IVF is postulated to involve the use of microfluidics for culture systems, embryo selection using 'Omics' and automatic acquisition of information, including video time-lapse recordings of cultures into electronically stored databases. Microfluidics is an emerging concept, embedded on a chip to provide a stable environment. It allows for better control of conditions, continuous supply of nutrients and removal of waste products. To facilitate the identification of top quality embryos, time-lapse video, metabolomics and other 'Omics' might play an important role in future. A broader use of databases will also help in the development of the IVF laboratory.
In vitro fertilization scenario in India and points to consider before starting an IVF clinic in India
Issues surrounding pregnancy, childbirth and motherhood are complex in all societies, but particularly so in patriarchal settings such as India, where infertility is a life crisis and its consequences can be manifold. [10] The Indian public health system does not provide access to adequate preventive and curative services or counseling for infertility. A postal survey of 6000 gynecologists in India revealed that none of the public sector providers offered ARTs like IVF, and only 36% offered intrauterine insemination (IUI). Hence services are available predominantly in the private sector, but the quality and the costs of these services vary considerably. [11] A major concern in the present context of ART services in India is the quality of care. Services are not regulated, and the quality of treatment is variable, ranging from clinics that offer professional, high-quality services to those that are run by unqualified practitioners. [11] India has been known to play host to visiting foreign IVF practitioners carrying out procedures that have been banned in the home country of the patients and/or the visiting practitioners. [12] The Indian Council of Medical Research (ICMR) reports an average take-home baby rate of 20-30% per IVF cycle, [13] while leading clinics in India claim success rates varying from 40% to even 75%. [14] The ICMR recently finalized national guidelines for the regulation of ART clinics. [15] According to the ICMR guidelines, infertility clinics have been categorized into three levels based on the complexity of the ART services available. The guidelines provide minimum requirements regarding staff in infertility clinics as well as physical requirements for an ART clinic [Table 2].
It remains to be seen how these will be implemented in India. Implementing and following any ethical guidelines would mean that doctors would have to look critically at issues of informed consent, screening of donors, and legal and ethical issues including the misuse of sex-pre-selection technologies like pre-implantation diagnosis. [11]
[Table 2: ICMR categorization of infertility clinics. Level 2: these clinics will require registration under the act; they shall have facilities for artificial insemination using husband's semen, artificial insemination using donor semen, and intrauterine insemination using husband's or donor semen, and may have infrastructure for further in-depth investigation and extended treatment of infertility, except where oocytes are handled outside the body. Level 3: these clinics will require registration and will have three functions to perform, viz., diagnostic and therapeutic at the highest level of specialization and with the best of facilities, and research (excepting on human embryos).]
ESSENTIAL REQUIREMENTS FOR AN ASSISTED REPRODUCTIVE TECHNOLOGY CLINIC
As described in the ICMR Guidelines, [16] a well-designed ART clinic should have a non-sterile and a strictly sterile area. The non-sterile area must have a reception area, waiting room(s) for patients, medical procedure rooms (e.g., for blood sampling, injections), doctors' offices and examination rooms with ultrasound equipment, a general-purpose clinical laboratory, storage rooms for equipment, utensils and pharmaceuticals, nursing quarters, postoperative recovery rooms, records storage and/or an IT server room, an autoclave room, a semen collection room, etc. In addition, the availability of other in-house clinical facilities (e.g., staff meeting rooms, private counseling rooms) and the location of the hospital in the city must also be carefully considered. Adequate steps must be taken for vermin proofing and disinfection. The sterile area shall house the operation theatre, a room for intrauterine transfer of sperm or embryos, and an adjoining embryology and sperm processing laboratory. Entry to the sterile area must be strictly controlled by an anteroom for changing footwear, in addition to an area for changing into sterile garments and a scrub station.
The embryology laboratory must have facilities for the control of temperature and humidity and must have filtered air. The infertility clinic need not have in-house facilities to perform all the procedures necessary to diagnose infertility, such as those for complete hormone and other assays. These can be subcontracted to specialty laboratories specializing in delivering such services, as long as they are located in close proximity with short transportation times. Each clinic should maintain, in writing, standard operating manuals for all the different procedures carried out in the clinic. Consumables used in the laboratory must be procured from reliable sources, after ensuring that they are non-toxic to embryos. Special measures and equipment should be installed to secure uninterruptible power supply to critical areas of the clinic, e.g., the operation room and vital equipment in the laboratory such as the incubators, as well as to other essential services in the clinic.
Essential qualifications of an ART team
The practice of ART requires well-orchestrated teamwork among the five functional areas of an IVF clinic: clinical, nursing, embryology, counseling, and administration. The staff should have formal qualifications in their area of responsibility, and their actual performance should be monitored against set standards.
The Australian experience and the effect on the quality management system
In Australia, prior to the legislation being passed in 1988, IVF had been treated as research to which prescriptive guidelines were attached. Clinical and ART laboratories must be accredited, usually by the National Association of Testing Authorities. These requirements have been strengthened over time, were upgraded to ISO 17025:1999, and are now becoming aligned to ISO 15189:2003. However, IVF laboratories are regulated separately by the Reproductive Technology Accreditation Committee (RTAC), [17] which works with the government to create specific item numbers for IVF, to determine the eligibility rules, and to implement a process of accreditation for all IVF units in Australia and New Zealand [Table 3]. All clinics are now required to have a QMS, and other new requirements include risk management policies for several procedures and management review. The quality policy is now a requirement. [18] The standard ISO 9001:2008 [Table 4] helps an organization build a QMS that allows it to achieve its stated quality targets. However, ISO does not prescribe what those targets should be. A more stringent standard is ISO 15189, which specifies requirements for quality and competence in medical laboratories.
SUMMARY
There has been increased demand for transparency pertaining to the safety aspects, effectiveness and health outcomes of ART treatments from the general public, patients, and regulators. The important components that make a laboratory great are the people, procedures, equipment and the laboratory design. Recruiting the right people is the critical step in setting up the best IVF laboratory. An IVF center is only as good as the staff it employs, and there is an absolute need to ensure consistent and continuous education and training. QMS that have been proven effective in other industries can be used by ART clinics to meet existing requirements by monitoring and controlling the way in which services are delivered. Thus, by determining and meeting the expectations of patients and other stakeholders, implementation of QMS will also be a vital tool for setting, planning and meeting advancements through documented processes enabling continuous improvement.
[Table 3: Requirements of the quality management systems for accreditation of assisted reproductive technology facilities in Australia and New Zealand. The quality management system shall include documented policies and procedures that have regard to: quality management policy; personnel and resources; document requirements; patient focus; compliance with legislation and guidelines; research and technology design and development; purchasing policy; control of technical components; quality assurance and monitoring systems; and management review of the quality system. There shall be written policies and protocols for: access to treatment; all procedures undertaken; identification and witnessing protocols (patient/gamete/embryo/confidentiality); and storage conditions and timeframes for gametes and embryos. The organization shall: identify the processes needed for the quality management system and their application throughout the organization; determine the sequence and interaction of these processes; determine the criteria and methods needed to ensure that both the operation and control of these processes are effective; ensure the availability of resources and information necessary to support the operation and monitoring of these processes; monitor, measure, and analyze these processes; and implement actions necessary to achieve planned results and continual improvement of the system. These processes shall be managed by the organization in accordance with the requirements of this international standard.] | 2018-04-03T00:36:37.395Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "077fa558aeb57fcfff5a7693f60f0364c9695321",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0974-1208.112368",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1c0419a71e1e88745ff367b6c8839344f351887e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237485185 | pes2o/s2orc | v3-fos-license | How May I Help You? Using Neural Text Simplification to Improve Downstream NLP Tasks
The general goal of text simplification (TS) is to reduce text complexity for human consumption. This paper investigates another potential use of neural TS: assisting machines performing natural language processing (NLP) tasks. We evaluate the use of neural TS in two ways: simplifying input texts at prediction time and augmenting data to provide machines with additional information during training. We demonstrate that the latter scenario provides positive effects on machine performance on two separate datasets. In particular, the latter use of TS improves the performances of LSTM (1.82-1.98%) and SpanBERT (0.7-1.3%) extractors on TACRED, a complex, large-scale, real-world relation extraction task. Further, the same setting yields improvements of up to 0.65% matched and 0.62% mismatched accuracies for a BERT text classifier on MNLI, a practical natural language inference dataset.
Introduction
The goal of text simplification (TS) is to reduce text complexity (while preserving meaning) such that the corresponding text becomes more accessible to human readers. Previous works explored how TS can assist children (Kajiwara et al., 2013), non-native speakers (Pellow and Eskenazi, 2014), and people with disabilities (Rello et al., 2013). While this can be achieved through a variety of approaches (Sikka et al., 2020), most TS research has focused on two major directions: rule-based and neural sequence-to-sequence (seq2seq) methods. Since 2017, there has been a significant increase in neural seq2seq TS methods (Zhang and Lapata, 2017; Zhao et al., 2018; Kriz et al., 2019; Jiang et al., 2020).
In this paper, we analyze another potential use of the latter TS direction: assisting machines performing natural language processing (NLP) tasks.
To this end, we investigate two possible directions: (a) using TS to simplify input texts at prediction time, and (b) using TS to augment training data for the respective NLP tasks. We empirically analyze these two directions using two neural TS systems (Martin et al., 2019; Nisioi et al., 2017), and two NLP tasks: relation extraction using the TACRED dataset, and multi-genre natural language inference (MNLI) (Williams et al., 2017). Further, within these two tasks, we explore three methods: two relation extraction approaches, one based on LSTMs (Hochreiter and Schmidhuber, 1997) and another based on transformer networks, SpanBERT (Joshi et al., 2020), and one method for MNLI also based on transformer networks, BERT (Devlin et al., 2018).
Our analysis shows that simplifying texts at prediction times does not improve results, but using TS to augment training data consistently helps in all configurations. In particular, after augmented data is added, all approaches outperform their respective configurations without augmented data on both TACRED (0.7-1.98% in F1) and MNLI (0.50-0.65% in accuracies) tasks. The reproducibility checklist and the software are available at this link: https://github.com/vanh17/TextSiM.
Related Work
Recent work has effectively demonstrated the practical application of neural networks and deep learning approaches to solving machine learning problems (Ghosh et al., 2021; Blalock et al., 2020; Yin et al., 2017).
With respect to input simplification, several works have utilized TS as a pre-processing step for downstream NLP tasks such as information extraction (Miwa et al., 2010; Schmidek and Barbosa, 2014; Niklaus et al., 2017), parsing (Chandrasekar et al., 1996), semantic role labeling (Vickrey and Koller, 2008), and machine translation (Štajner and Popović, 2016). However, these efforts relied on the use of rule-based TS methods. In contrast, we investigate the potential use of domain-agnostic neural TS systems in simplifying inputs for downstream tasks. We show that, despite the complexity of the tasks investigated and the domain agnosticity of the TS approaches, TS improves both tasks when used for training data augmentation, but not when used to simplify evaluation texts.
[Table 1: BLEU scores (Papineni et al., 2002) between original and simplified text generated by two TS systems, ACCESS and NTS, on the TACRED training and dev datasets.]
On data augmentation for natural language processing downstream tasks, previous work shows significant benefits of introducing noisy data on machine performance (Van et al., 2021; Kobayashi, 2018). Previous efforts used TS approaches, e.g., lexical substitution, to augment training data for downstream tasks such as text classification (Zhang et al., 2015; Wei and Zou, 2019). However, these methods focused on replacing words with thesaurus-based synonyms, and did not emphasize other important lexical and syntactic simplifications. Here, we use two out-of-the-box neural TS systems that apply both lexical and syntactic sentence simplification for data augmentation, and show that our data augmentation consistently leads to better performance. Note that we do not use rule-based TS systems because they have been proven to perform worse than their neural counterparts (Zhang and Lapata, 2017; Nisioi et al., 2017). Further, rule-based TS systems are harder to build in a domain-independent way due to the many linguistic/syntactic variations across domains.
Approach
We investigate the impact of text simplification on downstream NLP tasks in two ways: (a) simplifying input texts at prediction time, and (b) augmenting training data for the respective NLP tasks. We discuss the settings of these experiments next.
Input Simplification at Prediction Time
We pose the run-time input simplification problem as a transparent data pre-processing problem. That is, given an input data point, we simplify the text while keeping the native format of the task, and then feed the modified input to the actual NLP task. For example, for the TACRED sentence "the CFO Douglas Flint will become chairman, succeeding Stephen Green who is leaving to take a government job.", which contains a per:title relation between the two entities Douglas Flint and chairman, our approach will first simplify the text to "the CFO Douglas Flint will become chairman, and Stephen Green is leaving to take a government job.". Then we generate a relation prediction for the simplified text using existing relation extraction classifiers.
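A minimal sketch of this prediction-time setup is given below. The function and field names (simplify, extract_relation, "text") are hypothetical stand-ins, not identifiers from the paper's released code; simplify represents a trained TS model such as ACCESS or NTS, and extract_relation represents a trained relation classifier.

```python
# Minimal sketch of prediction-time input simplification: simplify the text,
# keep the native task format, then run the unchanged downstream classifier.
# `simplify` and `extract_relation` are hypothetical stand-ins for a trained
# TS model (e.g., ACCESS or NTS) and a trained relation extractor.
from typing import Callable, Dict

def predict_with_simplification(
    example: Dict,
    simplify: Callable[[str], str],
    extract_relation: Callable[[Dict], str],
) -> str:
    simplified = dict(example)  # keep all other fields of the task format
    simplified["text"] = simplify(example["text"])
    return extract_relation(simplified)
```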
Data Augmentation for Training
Here, we augment training data by simplifying the text of some original training examples, and appending it to the original training dataset. First, we sample which examples should be used for augmentation with probability p. Second, once an example is selected for augmentation, we generate an additional example with the text portion simplified using TS. For example, for the data in section 3.1, we generate an additional training data with the corresponding simplified text. p is a hyper parameter that we tuned for each task (see next section).
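The sampling-and-append step can be sketched as follows, again with simplify as a hypothetical stand-in for a trained TS system and p as the tuned augmentation probability; the example keys are assumptions for illustration, not the datasets' actual field names.

```python
# Minimal sketch of TS-based data augmentation: with probability p, append a
# copy of a training example whose text has been simplified. `simplify` is a
# hypothetical stand-in for a trained TS system such as ACCESS or NTS.
import random
from typing import Callable, Dict, List

def augment(
    train: List[Dict],
    simplify: Callable[[str], str],
    p: float = 0.1,
    seed: int = 0,
) -> List[Dict]:
    rng = random.Random(seed)
    augmented = list(train)  # keep all original examples
    for ex in train:
        if rng.random() < p:
            new_ex = dict(ex)
            new_ex["text"] = simplify(ex["text"])
            augmented.append(new_ex)
    return augmented
```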
Experimental Setup
NLP tasks and methods: We evaluate the impact of TS on two NLP tasks: (a) relation extraction (RE) using the TACRED dataset, and (b) natural language inference (NLI) on the MNLI dataset (Williams et al., 2017).
[Table 3: Results using ACCESS (Martin et al., 2019) as the TS method. The different rows indicate the different data augmentation strategies applied on the training data, while the columns indicate the type of simplification applied at runtime on the test data. We investigated the following configurations: Original: unmodified dataset; Simplified + Complement: consists of simplified data that preserves critical information combined with original data when simplification fails to preserve important information; Simplified + Original: consists of all original data augmented with additional simplified data that preserves critical information. (AD) annotates models using data augmented by neural TS systems during training.]
TACRED is a large-scale relation extraction dataset with an average sentence length of 36.4 words. Each sentence contains two entities in focus (called subject and object) and a relation that holds between them. We selected this task because the nature of RE requires critical information preservation, which is challenging for neural TS methods (Van et al., 2020). That is, the simplified sentences must contain the subject and object entities.
The MNLI corpus is a crowd-sourced collection of 433K sentence pairs annotated for NLI. The average sentence length in this dataset is 22.3 words. Each data point contains a premise-hypothesis pair and one of the three labels: contradiction, entailment, and neutral. We selected MNLI as the second task to further understand the effects of TS on machine performance on tasks that rely on long text, which is a challenge for TS methods (Shardlow, 2014;Xu et al., 2015).
We train three approaches for these two tasks. First, for TACRED, we use a classifier based on LSTMs (Hochreiter and Schmidhuber, 1997), and a second based on SpanBERT (Joshi et al., 2020). For MNLI, we trained a BERT-based classifier (Devlin et al., 2018). For reproducibility, we use the default settings and general hyper parameters recommended by the task organizers and the creators of the transformer networks (Joshi et al., 2020; Devlin et al., 2018). Through this, we aim to separate potential improvements of our approaches from those coming from improved configurations.
Text simplification methods: For TS, we use two out-of-the-box neural seq2seq TS approaches: ACCESS (Martin et al., 2019), and NTS (Nisioi et al., 2017). Tables 1 and 2 show the BLEU scores (Papineni et al., 2002) between original and simplified text generated by these two TS systems for the two tasks. The tables highlight that both systems change the input texts, with ACCESS being more aggressive.
Evaluation measures: We directly followed the evaluation measures proposed by the original task organizers (Williams et al., 2017). Specifically, we used these main metrics: (a) F1 on TACRED relation extraction, and (b) matched/mismatched accuracies on MNLI. [Footnote: discrepancies relative to the results reported by Devlin et al. (2018) are likely due to the different hardware and library versions used (Belz et al., 2021).]
Hyper parameter tuning: We tuned the only hyper parameter for data augmentation, the percentage of augmented data points, p, for MNLI. On this task we augmented 5, 10, and 15% of sentence pairs from training data, and found 5 and 10% of training data as the best thresholds for ACCESS and NTS respectively. For TACRED, we did not use this hyper parameter. Instead, we used all simplifications that preserve critical information for data augmentation. That is, we added all simplified sentences that preserve the subject and object entities necessary for the underlying relation. We found that 66% of training data sentences could be simplified while preserving this information by ACCESS, and 72% by NTS.
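For TACRED, the critical-information filter described above can be sketched as a simple containment check: a simplified sentence is kept only if both entity mentions survive simplification. The field names and the simplify stand-in are assumptions for illustration, not the paper's actual code.

```python
# Minimal sketch of the TACRED augmentation filter: keep a simplified
# sentence only if it still contains the subject and object entity mentions.
# `simplify` is a hypothetical stand-in for a trained TS system.
from typing import Callable, Dict, List

def preserving_simplifications(
    train: List[Dict], simplify: Callable[[str], str]
) -> List[Dict]:
    kept = []
    for ex in train:
        simple_text = simplify(ex["text"])
        # Critical-information check: both entities must survive.
        if ex["subject"] in simple_text and ex["object"] in simple_text:
            new_ex = dict(ex)
            new_ex["text"] = simple_text
            kept.append(new_ex)
    return kept
```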
Results and Discussion
Tables 3 and 4 summarize our results on TACRED for the two distinct TS methods. Because we tuned the hyper parameter p for MNLI, we report results on both development and test for this task (Tables 5 and 6, respectively). Further, for MNLI we also report average performance (and standard deviation) over 3 runs, where we select a different sample to be simplified in each run. This is not necessary for TACRED; for this task we simplified all data points that preserved critical information, i.e., the two entities participating in the relation. Input simplification at prediction time: Tables 3 and 4 show that simplifying inputs at test time does not yield improvements (compare the Original column with the third one). There are absolute decreases in performance of 1.38-2.58% and 1.67-2.80% in F1 on TACRED for the ACCESS and NTS systems, respectively (subtract column 3 from column 2 in rows 1 and 4). Similarly, on MNLI, the performance on simplified inputs is lower than that of the classifier tested on the original data. The performance drops on MNLI are more severe (11.68-49.53% and 11.70-49.55% in matched and mismatched accuracies; subtract column 1 from column 2 in rows 1 and 3 of Table 6 pairwise). We hypothesize that this is due to the quality of simplifications in MNLI being lower than those in TACRED (see the examples in Table 7).
Augmenting training data: As shown in rows 3 and 6 of Tables 3 and 4, all methods trained on augmented data yield consistent performance improvements, regardless of the RE classifier used (LSTM or SpanBERT) or the TS method used (ACCESS or NTS). There are absolute increases of 1.30-1.82% F1 for ACCESS and 0.70-1.98% F1 for NTS on TACRED (subtract row 1 from row 3, and row 4 from row 6, for ACCESS and NTS respectively). The best configuration is when the original training data is augmented with all data points that could be simplified while preserving the subject and object of the relation (rows 4 and 8 in the two tables). These results confirm that TS systems can provide additional, useful training information for RE methods. Similarly, on MNLI, the classifier trained using augmented data outperforms the BERT classifier that is trained only on the original MNLI data. For the two TS systems, ACCESS and NTS, we observe performance increases of 0.59-0.65% matched accuracy, and 0.50-0.62% mismatched accuracy (compare rows 1 vs. 2, and rows 3 vs. 4 in Table 6). This confirms that TS as data augmentation is also useful for NLI.
All in all, our experiments suggest that our data augmentation approach using TS is fairly general. It does not depend on the actual TS method used, and it improves three different methods from two different NLP tasks. Further, our results indicate that our augmentation approach is more beneficial for tasks with lower resources (e.g., TACRED), but its impact decreases as more training data is available (e.g., MNLI).
Conclusion
We investigated the effects of neural TS systems on downstream NLP tasks using two strategies: (a) simplifying input texts at prediction time, and (b) augmenting data to provide machines with additional information during training. Our experiments indicate that the latter strategy consistently helps multiple NLP tasks, regardless of the underlying method used to address the task, or the neural approach used for TS. | 2021-09-13T01:15:37.540Z | 2021-09-10T00:00:00.000 | {
"year": 2021,
"sha1": "b75131ed4f50fa556ff0ad91c970a2df38b24e5f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b75131ed4f50fa556ff0ad91c970a2df38b24e5f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
243471322 | pes2o/s2orc | v3-fos-license | Radical Transformation of Universities to Prepare the Next Generation of Climate Champions
The threat and reality of climate change must be acted upon individually and collectively. Universities have a decisive role to play in this regard – by creating the capacity in all their academic activities to lead in taking on the challenge and by graduating students with the capacity to solve the problems that the climate change situation poses. To take on these roles, universities must accept a "radical transformation". Radical transformation is a process that requires two integrated activities: radical thinking and transformative action. We propose that it is radical to think of universities as microcosms of society; that is, universities face the same need as everyone else to find ways to mitigate and adapt to climate change. We also propose that it is transformative for universities to inspire and be agents of change for the world: by creatively developing strategies to mitigate and adapt to climate change, universities can become global leaders in demonstrating workable solutions capable of being broadly diffused and scaled up. We present a set of design aspirations that can help universities undergo a radical transformation and thereby make headway in addressing the climate crisis.
Introduction
It is increasingly important that universities dedicate themselves to solving the most pressing issues facing the planet (Crow & Dabars, 2020). Universities have a social responsibility to become what the world needs. These needs are well articulated in the United Nations' Sustainable Development Goals (SDG) framework, consisting of 17 SDGs, 169 "targets", and 223 "indicators" adopted by 193 member states in 2015. These markers are a universal call to action to end poverty, protect the planet, and ensure that all people enjoy peace and prosperity (United Nations, n.d.). One SDG requires immediate focus -SDG 13, Climate Action. Climate change is occurring at a rate much faster than anticipated, and accelerated action is needed on climate mitigation and adaptation to stay within planetary boundaries (Rockström et al., 2009). The scientific consensus is that the earth's climate has warmed significantly since the late 1800s, that human activities are the primary cause, and that continuing greenhouse gas emissions will increase the likelihood and severity of global adverse effects. What is needed is for people and nations to act individually and collectively to slow the pace of global warming (mitigation) while also preparing for unavoidable climate change and its consequences (adaptation) (Ripple et al., 2020). Governments, industries, and societies need to make rapid and systemic changes as to how climate change is addressed (Ripple et al., 2020). This injunction is particularly vital, since the other SDGs cannot be achieved, or ultimately sustained, unless the earth's climate system is stabilized (Stechow et al., 2016;Fuso et al., 2019;Keys et al., 2019).
Universities will be essential in shaping the way people think and feel about climate change and act to address it. Collective knowledge, with all of its diversity, along with holistic approaches to the creation of tools to understand and solve complex problems, will be needed to achieve climate change mitigation and adaptation. As motivating centers of teaching, learning, discovery, innovation, and entrepreneurial activities, universities can be an engine of transformation by finding ways to stabilize the world's climate and by supporting the local and regional transitions that are needed.
The University Plan 2025 commits our university, the University of Saskatchewan, to being "the university the world needs". We have set out a bold vision to harness our talents and resources to respond to contemporary challenges and opportunities. To fulfill this vision, we are placing high priority on sustainability and on the UN SDGs. Only by addressing the interlinked social, economic, and environmental challenges captured by the SDGs will it be possible to tackle climate change and protect the planet and at the same time to create a prosperous, just, and equitable society. We recognize that sustainability is not merely another problem to be tackled or solved. It needs, rather, to pervade all decisions within our institution. It requires transformations in the very DNA of our institution. With only 10 years remaining before the UN 2030 Agenda for Sustainable Development deadline, the time to act is now. Over the past year, members of the university community -administrators, faculty, staff, students -came together to identify the many initiatives already under way on campus, to scope out areas of improvement or areas where actions are needed, and to forge ahead with a cohesive strategy that defines our critical paths to sustainability. Here, we present our plan for climate action.
Need for Radical Transformation
Universities are motivating centers of teaching, learning, discovery, innovation, and entrepreneurial activities. As engines of transformation, universities can deliver on actions needed to stabilize the world's climate and drive local and regional transitions to a just and sustainable future. However, the actions needed to address climate change will require a radical transformation, a process that requires two integrated activities, (a) radical thinking, and (b) transformative action. As a starting point for a radical transformation, we propose that it is radical to think of universities as microcosms (from the Greek mikros kosmos, or "little world") of society. Although universities are typically thought of as elite institutions and ivory towers cut off from the rest of society, in actuality they are microcosms of society. They are complex organisms, inhabited by highly diverse individuals and organizations, with their own cultures, languages, norms, and governance rules. They share the same problem as does the rest of society: the need for people and organizations to work together to find ways to mitigate and adapt to climate change. We also propose that universities must function as transformative entities meant not just to inspire but also to be positive agents of change for the world. A university's commitments to climate action must be designed to benefit not only its own community but also the communities beyond it, and to facilitate the rapid dissemination of climate-action expertise and experience.
Ambitious climate action requires climate knowledge to be incorporated into all levels and all aspects of the education system, including post-secondary education. Universities, at least in theory, are well placed to leverage the power of teaching, learning, and discovery and to leapfrog conventional action. Below, we present design aspirations universities can adopt to radically transform themselves to create the next generation of climate champions.
Design Aspirations to Guide the Transformation
A vision for a new wave of universities is emerging, one that combines world-class research enterprises with broad accessibility to learners -all to effect a shift in social outcomes toward equality and equity (Crow & Dabars, 2020). This new wave reflects a radical transformation in which universities convene, facilitate, and mediate relevant and allied conversations related to the SDGs, advance teaching and research about the SDGs, and affect social outcomes that will help achieve the SDGs (Crow & Dabars, 2020). The design aspirations for this new wave of universities (Crow & Dabars, 2015) form the basis for the proposed radical transformation at the University of Saskatchewan. Summaries follow, and we discuss each further in subsequent sections.
- Leverage Our Place. Be responsive to the university's social, economic, environmental, and cultural settings, and influence and be influenced by them as solutions to climate-related problems are created, mobilized, and shared.
- Model the Way. Reduce the university's own greenhouse gas emissions in line with science-based targets, using campus operations as a living laboratory for climate solutions.
- Empower Action. Mobilize new forms of teaching and learning so that every member of the university community can become a climate champion.
Taken together, these design aspirations outline a major shift for the university, a radical transformation in which it sees itself as much more embedded in the society of which it is a part and much more responsive to that society's needs.
The university's radical transformation will require responsive, flexible, and agile governance structures, starting with the university itself. See Figure 4.1 for a graphic depiction of a potential organizational structure (Walls, 2020). Applying the organizational framework to climate action, universities can be supported by a cluster of climate innovation teams in the areas of operations, teaching and learning, and discoveries, innovations, and entrepreneurship that can design and implement climate solutions for the university as well as for the city, region, and country to help the university achieve its commitment to climate action. Universities can also be supported by an external climate advisory table comprised of advisors, collaborators, and partners who will work alongside the university to achieve shared goals. For this external climate advisory table, special attention should be placed on the inclusion of Indigenous elders and knowledge keepers.
The university needs to become an open system (Ermine, 2007), in which the campus itself is viewed as an experientially driven classroom that daily, through organizational behavior, campus policies, procedures, and practices, and community engagement, incorporates responsible citizenship and environmental stewardship.
[Figure 4.1: Proposed organizational framework for universities to not only survive but thrive in the face of rapid, unpredictable change (adapted from Walls, 2020).]
Further, the university needs to foster an interconnected, creative, innovative, and entrepreneurial campus spirit, and to use its campus as a living laboratory, a place to pilot and perfect climate solutions, both those collaboratively and interprofessionally developed and those requiring coordinated local, regional, and national efforts. Finally, the university needs to bind together - through equitable partnerships - the exuberance of youth and the wisdom of experience as people explore, discover, and find ways to implement new ideas. Youth are just as invested as older people, possibly more so, and have a right to influence decisions. In enlisting youth, we will help build a generation of leaders more influential and more capable than those we have now.
Design Aspiration: Leverage Our Place
The need for swift and immediate action on the interconnected global impacts of climate change has led to an alignment of local, regional, national, and international agendas. At the 2015 UN Framework Convention on Climate Change (UNFCCC) Conference of the Parties (COP) 21 meeting, more than 170 countries (including Canada) adopted the Paris Agreement to strengthen the global response to the threat of climate change by keeping a global temperature rise this century well below 2.0 degrees Celsius above pre-industrial levels and to pursue efforts to limit the temperature increase even further to 1.5 degrees Celsius. (United Nations, 2015) World leaders agreed that meeting this goal of 1.5 degrees Celsius will require reducing our greenhouse gas emissions 45% from their 2010 level by 2030 and achieving net-zero emissions by 2050 (IPCC, 2018).
This push for action has seen broad support among Canadians. For example, during the 2019 Canadian federal election, 35% of Canadians listed climate change among their top three most pressing issues at the ballot box (Shah, 2019). Under the Paris Agreement, Canada committed to reducing its greenhouse gas emissions to 30% below 2005 levels by 2030. This change would require a national reduction of 218 metric tons (Mt) of carbon dioxide equivalent (CO2-eq) below 2018 emissions levels (Government of Canada, 2020). Canada has projected that its various economic sectors will contribute a reduction of 199 Mt CO2-eq, with additional projected emissions to come from offset credits, land sector contributions, and future reductions (such as clean electricity, greener buildings and communities, and electrification of transportation). Substantial efforts will be required if this target is to be achieved. Current projections by the Government of Canada still place our national emissions 77 Mt CO2-eq short of its 2030 target (see Figure 4.2). In order for Canada to meet its national target, significant work will be needed in the following areas: clean energy sources; low-carbon transportation strategies; low-carbon building strategies; biodiversity; sustainable fisheries, forestry, and agriculture practices that limit greenhouse gas emissions and enhance carbon sequestration while protecting water resources; carbon pricing and other economic and policy incentives designed to promote and encourage these practices; and participatory governance institutions (Sustainable Canada Dialogues, 2015). This work will require the contributions of regional and municipal governments, Indigenous sovereign nations, industry, not-for-profits, and civil society (including, not least, universities). The Government of Saskatchewan has recently released its Growth Plan, which includes 30 goals for 2020-2030 (Government of Saskatchewan, 2020). Among these goals is a growth in population to 1.4 million people, 100,000 new jobs, and ambitious growth targets across sectors: private capital investment and the agriculture, oil, mining, and forestry industries. These goals will have profound impacts on the province's greenhouse gas emissions. At the same time, climate change will have profound impacts on these sectors. As an institutional leader in the province, the University of Saskatchewan can support the Government of Saskatchewan's Growth Plan while stressing the need for climate action. The university can gather influential voices and lead informed discussions for the purpose of coordinated climate actions. Much of the university's own success comes from working in a coordinated way with the City of Saskatoon and the Province of Saskatchewan, and these entities can benefit from working together to develop ways to mitigate and adapt to climate change.
The University of Saskatchewan commits to becoming more responsive and to influencing, and being influenced by, our social, economic, environmental, and cultural settings so as to be better situated to create, mobilize, and share climate solutions. Our goal is to be an engaged university that works in a coordinated and innovative way with communities to develop climate solutions. To achieve this goal, we aim to:
- Establish a joint university-community advisory table to share, exchange, create, and identify synergies. The table will include representation from government, industry, not-for-profits, and all communities wanting to co-create and co-implement climate solutions for society.
- Nurture public discourse and convene public discussions on climate change with the goal of inspiring widespread climate awareness, engagement, and action.
- Build bridges and create portals through which external partners can easily and effectively engage with the university community as well as offer new perspectives and opportunities to together drive shared action on climate change.
Design Aspiration: Model the Way
The University of Saskatchewan faces the same need as everyone else to reduce greenhouse gas emissions -climate change is occurring at a rate much faster than anticipated, and accelerated action is needed to stay within the safe operating space for humanity (Rockström et al., 2009). Our strength lies in our ability to leverage the power of cutting-and leading-edge discoveries to do our part to support the local, regional, and national transitions that are needed for a more just, equitable, and sustainable future. In deploying our resources in service of our core mission -generating new and meaningful knowledge -we can serve as living laboratories for setting priorities and designing and implementing climate solutions that can be adopted and adapted elsewhere.
University greenhouse gas emissions fall into three categories, which we denominate and measure in "scopes" as follows: Scope 1, direct emissions produced from activities on property the university owns or controls (such as emissions resulting from heating with natural gas, running a fleet of vehicles, and conducting agricultural operations); Scope 2, specific indirect emissions produced by electricity the university consumes; and Scope 3, all other indirect emissions from sources not owned or controlled by the university. There is an emerging idea of Scope 4 emissions, which are emissions avoided by working in a coordinated way to lead (or to participate where others are leading) in developing strategies and in investing in projects and initiatives that align with regional, national, and international climate agreements.
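As a worked illustration of this scope-based accounting, the sketch below totals emissions across scopes and computes an emissions-intensity figure per square meter of floor space. All numbers are hypothetical placeholders, not the university's reported values.

```python
# Minimal sketch of scope-based greenhouse gas accounting. All numbers are
# hypothetical placeholders, not University of Saskatchewan figures.
scopes_t_co2eq = {
    "scope1_direct": 90_000,       # e.g., natural gas heating, fleet, farms
    "scope2_electricity": 70_000,  # purchased electricity
    "scope3_other_indirect": 11_000,
}

floor_space_m2 = 900_000  # hypothetical total building floor space

total = sum(scopes_t_co2eq.values())
intensity = total / floor_space_m2  # tonnes CO2-eq per square meter

print(f"Total: {total:,} t CO2-eq")
print(f"Emissions intensity: {intensity:.3f} t CO2-eq per m^2")

# A 45% reduction target from a baseline year, as in science-based targets:
baseline = 160_000
target_2030 = baseline * (1 - 0.45)
print(f"2030 target: {target_2030:,.0f} t CO2-eq")
```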
The University of Saskatchewan began monitoring its greenhouse gas emissions in 2010, using a baseline of 2006/07 emissions levels (Figure 4.3).
Since then, our total greenhouse gas emissions have not changed significantly, increasing by 6.4% to 171,299 tonnes CO2-eq for the 2019/20 fiscal year; however, building floor space has increased 21% since 2006/07, resulting in a 14% reduction of emissions per square meter (also referred to as emissions intensity). The University of Saskatchewan commits to reducing greenhouse gas emissions in keeping with the UN Intergovernmental Panel on Climate Change's science-based targets to limit global warming to 1.5 degrees Celsius above the pre-industrial norm. Our goal is to take bold steps to reduce our greenhouse gas emissions by 45% from our 2010 levels by 2030 by fostering an entrepreneurial campus spirit that uses campus operations and the campus community as a living laboratory to pilot both collaboratively developed climate change solutions and those solutions requiring coordinated local and regional efforts. This goal is ambitious - more ambitious than those of the City of Saskatoon and the Province of Saskatchewan, than the average of the top 15 research-intensive universities in Canada, and than that of the federal government of Canada - and will require rapid and far-reaching changes. Systemic changes will be required to reduce the university's greenhouse gas emissions. The university will need to implement operational changes, and to make sure these changes do not stall, it will need to align institutional priorities, policies, programs, and services to achieve the reduction targets. To achieve this goal, we aim to:
- Seek further opportunities to divest from fossil fuels and to continue to engage in socially and environmentally responsible investing.
- Implement operational solutions to reduce our Scope 1, 2, and 3 emissions and to increase the Scope 4 emissions we avoid by working together at local, regional, and international levels.
- Ensure that climate actions are bolstered and barriers removed by reviewing the university's strategic planning processes, decision-making processes, policies, and practices in order to confirm their alignment with the emission goals. Where needed, we will design new climate-sensitive policies that directly address reductions in Scope 1, Scope 2, and Scope 3 emissions. We will leverage our capital investments by working with governments, industries, and communities to increase the quantity of Scope 4 emissions we avoid.
- Map finance and accounting structures, norms, and practices (both capital and operations) to align with the emission goals. Improve our processes for allocating resources to revenue and support centers, making sure that they create the incentives and rewards required for effective climate action (for example, consider novel finance and accounting approaches to facilitate climate action, such as piloting an internal carbon accounting strategy). Use a portion of budgetary savings from reduced emissions to advance climate action on campus and in the community.
- Ensure accountability and transparency in reporting on progress in achieving climate action goals. Design and implement more comprehensive measures of the university's emissions, set clear deadlines for on-campus climate action, and report annually to our governing bodies on progress toward achieving this commitment.
Design Aspiration: Empower Action
The challenge of mitigating and adapting to climate change represents a great opportunity for research-intensive universities to mobilize new forms of teaching and learning directed toward climate action.
Canada is uniquely situated to take advantage of this opportunity to re-examine how higher learning engages with students. More than half (62%) of Canadians aged 25 to 64 have either college or university qualifications (OECD, 2019). The Canadian population is aging, however, and the rapid growth in the senior population -16% of the country in 2014 and poised to grow to 23% by 2030 -creates a quickly expanding gap in the supply of educated young people, whom we will need to take on the novel challenges of the 21st century (Statistics Canada, 2015).
The Canadian population is also changing with respect to its Indigenous population (First Nations, Métis, or Inuk [Inuit]), which has grown by 43% since 2006, more than four times faster than the rest of the population (Statistics Canada, 2018), and which is becoming increasingly educated. In 2016, 11% of Indigenous people overall aged 25 to 64 had a bachelor's degree or higher, up from 8% in 2006, while those with a college diploma rose from 19% to 23% over the same time period (Statistics Canada, 2017). Increased university education in the Indigenous population brings increased opportunity to engage their traditionally educated elders and knowledge keepers, with their access to a thousand years of knowledge about the land, sky, and environment in their territories. Indigenous views on sustainability offer an indispensable advantage in addressing the climate crisis.
At the University of Saskatchewan, we are seeking new forms of teaching and learning that help students shift or reorder priorities -as to values (ways of relating to one another and the world), mindsets (forms of understanding), and skill sets (modes of action) (Kemmis et al., 2014) -so that they contribute to climate change mitigation and adaptation.
A shift in values is needed because societally we have become accustomed to living our lives based on values that are increasingly at odds with a sustainable planet (Hoffman, 2019). This shift is among the most difficult to take root in society. It requires grassroots changes, formal changes (in rules and regulations), and informal changes (in norms). If we wish today's students to act on climate issues, we need them to gain the sort of learning experience that enables them to uncover and question their tacit perspectives and personal values (Shephard, 2008) and to develop the capacity to act individually, collectively, and in partnership with communities, governments, and industry.
A shift in mindsets is needed to empower people to devise disruptive (innovative and groundbreaking) solutions to climate change. This shift will require extending the modes of preparation beyond the purely cognitive and into the physical, emotional, and spiritual (Kemmis et al., 2014), a holistic pedagogical framework that has been known to, and practiced by, Indigenous peoples for centuries. Today's students need holistic teaching approaches (by which we mean extending beyond the cognitive to include ways of being) to help them understand the causes and consequences of climate change and their capacity for agency with respect to it.
To shift values and mindsets means also developing new forms of learning, ones that are personally relevant to students. It means giving students an action-oriented experience in place of the more traditional passive student role. Too often we educate in ways that are inclusive of large numbers but lack significance for individual learners. Today's students are looking to solve problems, to see and feel the real-life applications of their course work, and to develop the confidence and mastery they need to effect change after graduation.
We also need a shift in skill sets in order to equip all learners with those skills that are in high demand (Royal Bank of Canada, 2018). In particular, today's students need problem-solving skills, including critical thinking, analytics, math and numeracy, communication, collaboration, global competencies, and the ability to adapt and learn new things (Royal Bank of Canada, 2018). Problem-solving skills can be developed through involvement in creating and implementing climate solutions on campus, in our communities, and beyond. We also need to equip all learners with an understanding of ethics and activism, as well as the experience and ability to implement policy changes.
To shift values, mindsets, and skills effectively, we need to enable diverse learners to have access to what they need. To expand knowledge about climate change, we need to support both master learners (students who move forward at their own pace as they master knowledge and skills) and lifelong learners (students who learn continually throughout life, especially outside of, or after the completion of, formal schooling) (Crow & Dabars, 2020). This conveying of "learning how to learn" is key in preparing students for an uncertain future, one likely marked with disruption and the need to pivot as circumstances change. Universities do not do well with this "learning how to learn", despite the centrality of this skill to student success post-graduation (Knight & Yorke, 2003; Livingston, 2003).

To facilitate development of this skill, universities could deviate from traditional degrees and offer alternative formats. For example, universities could allow master's degree candidates to "stack" several flexibly delivered modules. They could offer an accelerated bachelor's degree for those wishing to pivot to a new area of study after they complete an undergraduate program. They could offer credentials that are not a full degree, often called micro-credentials. Micro-credentials can be earned in short, bite-sized chunks. A micro-credential approach could provide an opportunity to engage students in all areas of study in climate education, and in formats that transcend disciplines. Other forms of skill validation, such as "badging", can facilitate students' finding, and engaging in, opportunities that develop essential skills in ways that are less restrictive than is a formal class. Micro-credential candidates can undertake relevant activities that align with their passions and, more practically, their schedules. They could find opportunities to do so either within or outside the formal curriculum, or both. In the assessment of skill development, measures could be broadened to include formal and informal educational experiences. In the efforts to nurture climate champions, such an increase in flexibility could act as an incentive as well as an acknowledgment.

Figure 4.4 plots the curriculum continuum against different modes of delivery (face-to-face to entirely online), demonstrating how combining various credential types with unique models for participation may open access for students with varied motivations and circumstances. The ability to access these alternative learning paths would need to be extended to all, an expansion that would require transformational changes to the structures of our institutions.

The University of Saskatchewan commits to creating a generation of learners and achievers focused on exploring and crafting innovative and workable solutions to the various aspects of climate change. Our goal is to ensure that every individual faculty member, staff member, and student has a holistic understanding of the need for climate action. In support of this goal, the entire institution will promote measures, enable participation, and get everyone engaged in exploring, discovering, and implementing new ideas. Specifically, we aim to:

-Equip faculty, staff, and students in all disciplines to be climate champions throughout their lives by ensuring that they have access to climate change educational experiences. To do so will require the university's mastering diverse bodies of knowledge about climate change and incorporating them into curricula across the campus.
-Develop mechanisms to engage faculty and academic units in changing or modifying their course and program curricula to advance climate literacy. Such mechanisms can accelerate the required shifts in values, mindsets, and skillsets and reduce the distance between where we are and where we need to be.
-Give diverse learners access to climate change curricula, including enabling them to select their optimal mode of learning -in-person, synchronous, or asynchronous online -bearing in mind that all trainees will need access to the appropriate equipment. Additionally, lay the groundwork for providing varied credential types so as to offer such learners increased flexibility and access.
-Enable all students to show local community leaders how climate change could affect their communities, and to create climate solutions through experiential learning programs involving projects, placements, and practicums, both within the institution and with the community.
Design Aspiration: Capitalize on Strengths
A key strength of any research-intensive university is its capacity for innovation. In the face of the 21st century challenges, the University of Saskatchewan needs to capitalize on its strengths and empower a "daring culture of creativity and innovation with the courage to confront humanity's greatest challenges and opportunities" (University of Saskatchewan, n.d.). Such a culture of innovation will "foster a problem-solving, entrepreneurial ethic among faculty, students, and staff, harnessing opportunities to apply our research, scholarly and artistic efforts" (University of Saskatchewan, n.d.). As a result, the university will co-create ideas and co-produce solutions within our communities. This innovative culture will focus on supporting people to create, diffuse, and scale more effective solutions to entrenched social problems (McConnell Foundation, n.d.).
The University of Saskatchewan has designated six signature areas in recognition of its existing and ongoing research into addressing the world's most pressing and challenging problems. For over a decade, these signature areas have shaped and guided institutional efforts and investments. Most important, these signature areas are not limited to a single discipline. Their relevance across the university -in the natural sciences, engineering, health sciences, social sciences, and humanities -has deepened the impact of the work locally, regionally, nationally, and internationally. Implicit in the choice of our signature areas is our understanding that meeting contemporary challenges must involve supporting a convergence of disciplines, whereby different disciplines cooperate to integrate their various bodies of knowledge, and whereby novel frameworks are formed to catalyze discovery and innovation, a "pinnacle of evolutionary integration across disciplines" (NSF, 2016).
The University of Saskatchewan will similarly achieve climate solutions through a whole-of-university response, creating opportunities for every instructor and researcher to explore the climate relevance of their work. For example, the university is recognized for its excellence in energy, food, and water security -that is, the adaptive capacity to safeguard the availability of, and access to, reliable and resilient energy, food, and water for human health and well-being. The university will seek out interactions among and between these signature research strengths and climate change (for example, how to enhance energy, food and water security in ways that enhance climate change mitigation and adaptation). Our convergent (coming-together-on-climate-change) response will include people in many roles: instructors who create active learning environments; discoverers working in use-inspired basic research; entrepreneurs who can move discoveries into action; artists who will translate discoveries into forms that inspire communities to act; capacity builders who empower communities to act; and outstanding leaders capable of making national and global impacts. All knowledge thus attained will be put to work to reduce the risk of climate change in a just and equitable way for the benefit of society.
The University of Saskatchewan thus commits to creating and mobilizing new understandings, with a focus on innovative and workable ways to address and meet climate and other sustainability challenges. Our goal is to integrate learning, discovery, innovation, and entrepreneurship, and thereby put our knowledge to work to solve the problems presented by climate change. To achieve this goal, we aim to:

-Build leadership and capacity in innovation, encouraging every member to devote some of their energy toward a common project of addressing climate challenges.
-Create "convergent" innovation hubs, with the capacity not only to pilot and perfect technological innovations for solving local, regional, national, and global climate problems, but also to support and facilitate social innovations, such as the institutional changes that must accompany technological innovations.
-Forge and lead unique multi-community, multi-partner, and multi-sector collaborations to tackle the full spectrum of climate change mitigation and adaptation challenges, from idea germination to translation into real-life approaches and solutions.
Design Aspiration: Catalyze Social Change
Confronting and tackling climate challenges requires cognizance of the local dimension of the problem as well as its global context. Universities can tap into the global pool of knowledge through global partnerships to spur climate innovation. This approach will require new forms of connecting spaces (forums), where competing world views can converge and a cooperative spirit can emerge that will create "new currents of thought that flow in different directions and overrun the old ways of thinking" (Ermine, 2007). This approach will also require new forms of, and an unprecedented level of, collaboration, in which the focus is on outcomes that benefit society and enhance society's capacity to act. Global dialogue will be an important tool for informing climate actions and translating lessons learned into policies, programs, and practices that can be disseminated and scaled up, enabling learning for all. By engaging in meaningful global dialogue, we can learn from one another, support each other, and chart a path for more ambitious action to tackle climate challenges.
The University of Saskatchewan commits itself to sharing knowledge, expertise, and experiences, and to effecting the social change needed. By learning how to successfully meet climate challenges, we can share solutions that are capable of being broadly diffused and scaled up. Our goal is to inspire, and be agents of, "positive climate change" for the world. To achieve this goal, we aim to:

-Ensure that voices in our learning environments and in the research that we undertake are grounded in principles of equity, diversity, and inclusion.
-Engage in both local and global dialogue to develop a shared understanding of the challenges of, and solutions to, climate change.
-Leverage networks and partnerships between universities and the private sector, the public sector, not-for-profits, and civil society here and abroad to create collaborations that can harness opportunities for scalable social and technological climate solutions, and that can influence political leaders to accept and act on these solutions.
Conclusion
Universities have a pivotal role to play in the climate crisis, because they sit at the nexus of local, regional, national, and international cooperation and are positioned to contribute courageous leadership and inspiring thinking. To take on this role, however, universities must be willing to undergo a radical transformation. This means adopting responsive, flexible, and agile governance structures, becoming living laboratories that foster creative, innovative, and entrepreneurial campus spirit, and establishing diverse partnerships to implement coordinated climate action and climate solutions across all spheres of influence. Young people and young minds are perhaps the most powerful resources to meet the challenges associated with climate action. They need to be empowered through new methods of teaching and learning. Through combining the powerful resource of young people with the world-class researchers and facilities that universities can provide, and with government, industry, and community expertise and experience, the potential for meeting international action goals for the climate can be realized. This radical transformation will require unapologetic ambition and appropriate impatience as we move swiftly on climate action, paving a path toward a resilient future for universities and for the local and global communities in which they are embedded.
"year": 2021,
"sha1": "adafebf630ad659416154258f8dd7acfdc686939",
"oa_license": "CCBY",
"oa_url": "https://brill.com/downloadpdf/book/9789004471818/BP000017.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cb412bedb759dbd4643157dfe0c3e08714f9a208",
"s2fieldsofstudy": [
"Environmental Science",
"Education"
],
"extfieldsofstudy": []
} |
Regional Distributions of Iron, Copper and Zinc and Their Relationships With Glia in a Normal Aging Mouse Model
Microglia and astrocytes can quench metal toxicity to maintain tissue homeostasis, but with age, increasing glial dystrophy alongside metal dyshomeostasis may predispose the aged brain to acquire neurodegenerative diseases. The aim of the present study was to investigate age-related changes in brain metal deposition along with glial distribution in normal C57Bl/6J mice aged 2-, 6-, 19- and 27-months (n = 4/age). Using synchrotron-based X-ray fluorescence elemental mapping, we demonstrated age-related increases in iron, copper, and zinc in the basal ganglia (p < 0.05). Qualitative assessments revealed age-associated increases in iron, particularly in the basal ganglia, and zinc in the white matter tracts, while copper showed overt enrichment in the choroid plexus/ventricles. Immunohistochemical staining showed augmented numbers of microglia and astrocytes, as a function of aging, in the basal ganglia (p < 0.05). Moreover, qualitative analysis of the glial immunostaining at the level of the fimbria and ventral commissure revealed increments in the number of microglia but decrements in astroglia, in older aged mice. Upon morphological evaluation, aged microglia and astroglia displayed enlarged soma and thickened processes, reminiscent of dystrophy. Since glial cells have major roles in metal metabolism, we performed linear regression analysis and found a positive association between iron (R² = 0.57, p = 0.0008), copper (R² = 0.43, p = 0.0057), and zinc (R² = 0.37, p = 0.0132) with microglia in the basal ganglia. Also, higher levels of iron (R² = 0.49, p = 0.0025) and zinc (R² = 0.27, p = 0.040) were correlated to higher astroglia numbers. Aging was accompanied by a dissociation between metal and glial levels, as we found through the formulation of metal to glia ratios, with regions of the basal ganglia being differentially affected. For example, the iron to astroglia ratio showed age-related increases in the substantia nigra and globus pallidus, while the ratio was decreased in the striatum. Meanwhile, copper and zinc to astroglia ratios showed a similar regional decline. Our findings suggest that inflammation at the choroid plexus, part of the blood-cerebrospinal-fluid barrier, prompts accumulation of, particularly, copper and iron in the ventricles, implying a compromised barrier system. Moreover, age-related glial dystrophy/senescence appears to disrupt metal homeostasis, likely due to induced oxidative stress, and hence increase the risk of neurodegenerative diseases.
INTRODUCTION
Trace metals are essential for biochemical and physiological processes as components of various vitamins, enzymes, and cofactors (Li et al., 2017). Iron is present in copious amounts in the brain, performing pleiotropic functions including neurotransmitter synthesis, neuronal myelination and adenosine triphosphate (ATP) synthesis (Ashraf et al., 2018). Other metals such as copper, zinc and calcium are required alongside iron for modulation of synaptic activity and neuronal plasticity (Popescu and Nichol, 2011; Wang et al., 2012; Braidy et al., 2017; Li et al., 2017). Their balance within the brain is regulated in a complex fashion by brain-barrier systems (blood-brain-barrier, BBB; blood-cerebrospinal fluid-barrier, BCSFB) and glial cells in the CNS milieu. Also, glial cells, i.e., microglia and astrocytes, can sequester metals to protect neurons from metal-induced toxicity (Bishop et al., 2011). Astrocytes appear to have a central role in attenuating neural excitotoxicity by scavenging metals that cross brain-barrier defenses, with microglia also partaking in the immunity against metal accumulations (Zheng et al., 2010; Morales et al., 2017).
Since aging is characterized by perturbed permeability of brain-barriers and glial dystrophy/senescence, brain metal concentration, which would otherwise be kept within narrow limits, becomes dysregulated (Bishop et al., 2011; Popescu and Nichol, 2011; Rathore et al., 2012; Braidy et al., 2017). The altered compositions of trace metals may induce oxidative stress and contribute to advanced age being the major risk factor for neurodegenerative diseases including Alzheimer's and Parkinson's disease (PD; Popescu et al., 2009; Popescu and Nichol, 2011; Grochowski et al., 2019). The analysis of metal content, in terms of age-related accumulation in different brain regions, may be useful for monitoring the changes accompanying normal and abnormal aging, and provide avenues for maintaining optimal brain health. While there have been many reports indicating that brain iron increases with aging in different regions, some have used bulk measurements of tissue samples from different areas rather than spatial iron mapping (Markesbery et al., 1984); others used histochemical Perl's staining, which is not quantitative (Connor et al., 1990; Benkovic and Connor, 1993; Burdo et al., 1999); and some are based on relaxometry or quantitative susceptibility mapping MRI measurements, which assess iron indirectly by its effects on the surrounding protons (Langkammer et al., 2012). Most notably, none to our knowledge have spatially mapped transition metals at reasonably high resolutions throughout the life-course of a normally aging organism. Therefore, the aim of this study was to examine brain metal levels using synchrotron-radiation X-ray fluorescence (SRXRF) elemental mapping over the normal aging life-course of C57Bl/6J mice at 2-, 6-, 19- and 27-months, corresponding to "post-adolescence, young adults, elderly and very elderly," respectively. Assessment of microglia and astrocytes by ionized calcium binding adaptor molecule 1 (Iba1) and glial fibrillary acidic protein (GFAP) immunohistochemistry (IHC), respectively, was also performed. We have focussed on the basal ganglia, which we and others have demonstrated to accumulate iron with aging (Aquino et al., 2009; Walker et al., 2016). We have previously reported the quantitative assessments of iron, and of ferritin-, Iba1- and GFAP-immunopositive cells, in the basal ganglia at these ages (Walker et al., 2016). Here, we have extended the study/reanalyzed the data to include both qualitative and quantitative assessments of other metals, and particularly focussed on investigating the relationships between metals and ferritin-, Iba1- and GFAP-immunopositive cells in the basal ganglia. Additionally, metal contents from the ventral hippocampus and the cingulate cortex are also included in this study, as these regions are routinely affected in normal aging and neurodegenerative diseases.
Animals
Male C57BL/6J mouse brains were obtained from Shared Aging Research Models (ShARM, Sheffield, UK). Mice were culled at 2-, 6-, 19- and 27-months of age (n = 4/age) by rising CO2 inhalation. The ages chosen for study were comparable to different stages of life, from just post-adolescence through adulthood to the elderly and very elderly. The heads were removed from the body and fixed in 4% paraformaldehyde for 1 week. The brains were then dissected out of the skulls and stored in phosphate-buffered saline (PBS, 4°C) with 0.05% sodium azide prior to cryosectioning for SRXRF and IHC (see below). Ethical approval was not required for this study according to the Animals (Scientific Procedures) Act 1986 (ASPA).
Brain Cryosectioning
Brains were cryoprotected in 30% sucrose in PBS with 0.05% sodium azide and then cryosectioned coronally to produce 40 µm thick frozen sections that were mounted onto 4 µm thick Ultralene film (Spex Sample-Prep, NJ, USA) secured to a customized holder for SRXRF. Adjacent 20 µm thick cryosections were also obtained and mounted onto Superfrost plus microscope slides for IHC.
SRXRF Elemental Mapping and Analysis
SRXRF of the whole/right hemisphere of brain tissue sections was performed at the Diamond Light Source synchrotron radiation facility (microfocus beamline I18; Didcot, UK). Brain sections were mounted at a 45° angle with respect to the incoming X-ray beam and the detector to minimize scatter contribution and scanned in raster fashion using a beam with a diameter (resolution) of 100 µm and 11 keV energy. Full energy dispersive spectra were collected for each sample point exposed to the beam, fitted, and the net peak areas of the characteristic peaks of iron, zinc, and copper (Supplementary Table S1) were evaluated using PyMca (Solé et al., 2007). The photon flux on the samples, required for quantification, was estimated by measurement of a reference metal film (AXO, Dresden, GmbH). SRXRF elemental maps of pixel-by-pixel elemental metal concentrations (parts per million, ppm) were obtained. Regions of interest (ROIs) were placed on the elemental metal maps to obtain average concentrations in brain regions: substantia nigra (Bregma −3.08 to −3.64 mm), globus pallidus (Bregma −0.22 to −0.70 mm), striatum (Bregma +1.10 mm to +0.14), cingulate cortex (Bregma +1.10 to +0.26 mm) and ventral hippocampus (Bregma −2.80 to −3.52 mm).
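To make the ROI step concrete, the sketch below shows how a mean regional concentration can be pulled from a fitted elemental map. It is a minimal illustration only, assuming the PyMca fit has already been exported as a 2D array of ppm values; the file name, array shape and ROI pixel bounds are hypothetical.

```python
import numpy as np

# Hypothetical input: a fitted SRXRF iron map exported as a 2D array of
# concentrations in ppm, one value per 100 µm pixel (file name invented).
iron_ppm = np.load("iron_map_mouse01.npy")

# Hypothetical rectangular ROI over the globus pallidus (row/column pixel bounds).
r0, r1, c0, c1 = 40, 55, 70, 95
roi = iron_ppm[r0:r1, c0:c1]

# Exclude zero/background pixels before averaging, then report the ROI mean.
valid = roi[roi > 0]
print(f"Globus pallidus iron: {valid.mean():.1f} ± {valid.std():.1f} ppm")
```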
Four sections from the whole/right hemisphere including the striatum, globus pallidus, substantia nigra, ventral hippocampus and cingulate cortex were scanned (digitized) on a LEICA SCN400F scanner to produce 20× magnification digital images. Depending on the region, 2-4 optical fields were taken, each comprising a 628.0 × 278.5 µm² area. Ferritin-, GFAP- or Iba1-immunopositive cells in each optical field were manually counted using the cell counter plugin of ImageJ (NIH) and expressed as a number of immunopositive cells per unit area.
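As a minimal worked example of the density calculation (the field dimensions are those quoted above; the counts are invented for illustration):

```python
# Each optical field covers 628.0 µm × 278.5 µm.
FIELD_AREA_MM2 = (628.0 * 278.5) / 1e6   # ≈ 0.175 mm² per field

counts = [34, 29, 31]                    # hypothetical Iba1+ cell counts in 3 fields
density = sum(counts) / (len(counts) * FIELD_AREA_MM2)
print(f"{density:.0f} immunopositive cells per mm²")
```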
Statistical Analysis
Data analysis was performed using Prism version 8 (GraphPad Software, CA, USA). Normal distribution was checked graphically using Q-Q plots, residual plots, and homoscedasticity plots, and numerically using Shapiro-Wilk's test. The following variables were log-transformed to normality: iron, zinc, copper, microglia, astrocytes, and ferritin. Analysis of variance (ANOVA) was used to determine differences between the different regions of the brain, and between the different ages. We also performed linear regression modeling to assess the association between metals and glia. Metal to glia ratios were computed to assess how metal levels changed relative to glial numbers with aging. A statistical value of p ≤ 0.05 was considered significant. Values are quoted as mean ± standard deviation (SD).
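A minimal sketch of this pipeline in Python (SciPy standing in for Prism; all input values are placeholders, not study data) might look as follows:

```python
import numpy as np
from scipy import stats

# Hypothetical regional iron concentrations (ppm), n = 4 mice per age group.
groups = {
    "2m":  np.array([12.1, 13.4, 11.8, 12.9]),
    "6m":  np.array([14.2, 15.1, 13.8, 14.6]),
    "19m": np.array([19.5, 21.0, 18.7, 20.2]),
    "27m": np.array([21.3, 22.8, 20.5, 23.1]),
}

# Log-transform toward normality, then check each group with Shapiro-Wilk.
logged = {age: np.log(vals) for age, vals in groups.items()}
for age, vals in logged.items():
    w, p = stats.shapiro(vals)
    print(f"{age}: Shapiro-Wilk p = {p:.3f}")

# One-way ANOVA across the four age groups.
f_stat, p_anova = stats.f_oneway(*logged.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Linear regression of metal level on glial count (pooled; glia counts invented).
glia = np.array([55, 60, 52, 58, 64, 70, 61, 66, 82, 90, 79, 85, 88, 95, 84, 92])
iron = np.concatenate(list(groups.values()))
res = stats.linregress(glia, iron)
print(f"R² = {res.rvalue**2:.2f}, p = {res.pvalue:.4f}")
```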
SRXRF Elemental Mapping of Metal Distributions in the Brain
Metal distributions in the basal ganglia, cingulate cortex, and ventral hippocampus were mapped with SRXRF and found to be both heterogeneous and altered with aging (Figures 1-3). Qualitative assessment of iron revealed increased deposition in the basal ganglia in 19- and 27-month-old mice, compared to the younger age-groups. Interestingly, the stria medullaris of the thalamus, fornix and the ventricles showed some qualitative iron increments at 19- and 27-months of age, especially at the latter age (Figure 2).
Copper was strikingly enriched in the choroid plexus/ventricles at 19- and 27-months of age (Figure 2). Zinc, on the other hand, appeared generally elevated throughout the brain but particularly in the stria medullaris of the thalamus, the corpus callosum, fornix and hippocampal fissure at 19- and 27-months (Figure 2).
Immunohistochemistry of Glia Distribution
Microglial (Iba1) immunostaining at the level of the fimbria and ventral commissure appeared to be lower in 6-month-old mice compared to those at 2-months, but higher at ages 19- and 27-months relative to younger mice (Figure 4A). On the contrary, astrocytes (GFAP) seemed to show an age-related decline at the fimbria/ventral commissure level (Figure 4B).
Since age is routinely associated with glial dystrophy, we performed a qualitative evaluation of aged glia compared to younger glia in the basal ganglia (Figure 4). Aged microglia (27-month) exhibited signs of hypertrophy, with enlarged soma compared to 2-month-old mice in the substantia nigra and striatum (Figure 4Ci). The cell body of aged microglia exhibited a spheroid shape that was distinctive from the round shape of young microglia. Equally, the microglial processes observed in 27-month-old mice were shorter and thickened compared to the thinly ramified processes apparent in the 2-month-old mice. Also, aged microglia commonly clustered together, accompanied by frequent fragmentation of processes (Figure 4Cii).
Aged astrocytes exhibited hypertrophy; somata were larger and more elongated, with processes appearing shorter and thicker, reminiscent of loss of distal processes (Figure 4Di).

FIGURE 1 | Synchrotron-radiation X-ray fluorescence (SRXRF) elemental iron, copper and zinc maps of tissue sections (30 µm thick) for quantification of the striatum of C57Bl/6J mice at 2, 6, 19 and 27-months of age. Other brain structures of interest are also labeled on the maps. High signal intensity artefacts arising from folded tissues and specks/pieces of tissue from cryosectioning are visible in some images.
FIGURE 2 | SRXRF elemental iron, copper and zinc maps of tissue sections (30 µm thick) for quantification of globus pallidus and cingulate cortex of C57Bl/6J mice at 2, 6, 19 and 27-months of age. Other brain structures of interest are also labeled on the maps. High signal intensity artefacts arising from folded tissues and specks/pieces of tissue from cryosectioning are visible in some images.
Younger astrocytes were marked by manifold long processes to serve a greater territory (Figure 4Dii).
SRXRF Assessment of Brain Metals

Regional Brain Concentrations
ROIs were used to probe the metal concentrations in the basal ganglia, cingulate cortex and the ventral hippocampus (Supplementary Figure S1). Iron distribution was similar between different brain areas at 2-months of age (Supplementary Figure S2i). However, at 6-months of age, significantly higher iron content was observed in the substantia nigra (p < 0.01) and globus pallidus (p < 0.05), compared to the cingulate cortex and ventral hippocampus. A similar trend was observed in 19- and 27-month-old mice, where significantly augmented iron concentrations were observed in the substantia nigra (p < 0.05 and p < 0.001, respectively) and globus pallidus (p < 0.001) compared to the striatum, cortex, and hippocampus.
The copper concentration was significantly lower in the globus pallidus compared to the striatum (p < 0.01), cingulate (p < 0.01) and ventral hippocampus (p < 0.05) at 6-months, but not at other ages (Supplementary Figure S2ii). Zinc levels were significantly elevated in the globus pallidus and hippocampus compared to the substantia nigra at 2-months of age (p < 0.05), but were lower in the globus pallidus compared to the nigra (p < 0.05) at 6-months (Supplementary Figure S2iii), with no changes observed at later ages.

FIGURE 3 | SRXRF elemental iron, copper and zinc maps of tissue sections (30 µm thick) for quantification of substantia nigra and ventral hippocampus of one hemisphere, of C57Bl/6J mice at 2, 6, 19 and 27-months of age. High signal intensity artefacts arising from folded tissues and specks/pieces of tissue from cryosectioning are visible in some images. *The tissue sections collected from 27-month-old mice were not fully scanned by SRXRF to minimize scan time.
Alterations of Regional Brain Metal Concentrations With Aging
With advancing age, at 19- and 27-months, iron concentrations were significantly increased compared to 2- and 6-month-old mice (Figure 5i), in the substantia nigra (p < 0.001, p < 0.05, respectively), globus pallidus (p < 0.001) and striatum (p < 0.01, p < 0.001, respectively). In the cingulate cortex, iron-enrichment was observed in the 19- and 27-month-old mice, compared to both 2- and 6-month-old mice (p < 0.05), but levels were comparable between ages in the ventral hippocampus.
Higher levels of copper were found at 19-months (p < 0.05) and 27-months (p < 0.01) compared to 6-month-old mice in the globus pallidus (Figure 5ii). Zinc levels were augmented in the 6-month-old mice compared to mice aged 2-months in the substantia nigra (p < 0.05) and cingulate cortex (p < 0.05; Figure 5iii). The globus pallidus exhibited increased zinc at 27-months (p < 0.05) compared to mice at 6-months.
Immunohistochemical Assessments of Ferritin, Microglia and Astrocytes in the Basal Ganglia
Regional Numbers of Ferritin-, Microglia- and Astrocyte-Immunopositive Cells
IHC revealed ferritin-immunoreactive cells (Figure 6A) to be significantly enriched in the substantia nigra and globus pallidus compared to the striatum at 2-months of age (p < 0.01); however, only the globus pallidus (p < 0.05) had significantly more ferritin-immunopositive cells compared to the striatum at 6-months (Supplementary Figure S3i). A higher number of ferritin-immunoreactive cells (p < 0.001) was observed in the substantia nigra relative to both the globus pallidus and striatum at older ages (19- and 27-months). Also, the striatum demonstrated higher numbers of ferritin-immunoreactive cells compared to the globus pallidus at 19-months of age (p < 0.001).
Interestingly, IHC for Iba1 did not reveal significant differences in microglial positivity between the brain regions (Supplementary Figure S3ii). On the other hand, GFAP IHC revealed a consistent pattern of significant differences between the regions at all ages, with higher numbers of astrocytes in the globus pallidus compared to the substantia nigra (p < 0.001) and the striatum at ages 2-, 6- and 19-months (p < 0.001) and 27-months (p < 0.05; Supplementary Figure S3iii).

FIGURE 4 | Immunohistochemically stained images of (A) Iba1-stained microglia and (B) glial fibrillary acidic protein (GFAP)-stained astroglia acquired at the level of the hippocampal fimbria and ventral commissure (20× magnification, scale bar = 100 µm). (Ci) The images show differences in Iba1-immunopositive microglia morphology in the basal ganglia, in 2-month-old C57Bl/6J mice, with microglia having small soma and ramified processes, compared to hypertrophic cell bodies and thickened processes (arrows) at 27-months of age (40× magnification, scale bar = 40 µm). (Cii) Fragmentation of processes is demonstrated by microglial staining in the basal ganglia of 27-month-old C57Bl/6J mice (40× magnification, scale bar = 40 µm). (Di) GFAP-stained astrocytes in 2-month-old C57Bl/6J mice exhibit a small cell body with long, thin primary processes, while at 27-months of age, astrocytes show a spheroid cell body and thickened processes with a loss of distal processes (40× magnification, scale bar = 40 µm). (Dii) Astrocytes in 2-month-old C57Bl6/J mice show manifold processes and therefore serve a greater territory, while those in 27-month-old mice exhibit loss of distal processes and de-ramified processes that have become shorter and thickened (40× magnification, scale bar = 40 µm).
Alterations in Numbers of Ferritin- and Glial-Immunopositive Cells With Aging
Ferritin-immunopositive cells were less abundant in the substantia nigra at 19-months of age compared to that at 2- and 6-months (p < 0.05, p < 0.001, respectively), while the cell numbers at 27-months were lower relative to that at 6-months (p < 0.05) but not at 2-months (Figure 6Bi). On the contrary, the globus pallidus (p < 0.001) showed significantly enriched ferritin-immunopositive cell populations at 19- and 27-months compared to that at younger ages, 2- and 6-months. As expected, striatal ferritin-immunoreactive cells were augmented at 6-, 19- and 27-months relative to that at 2-months (p < 0.05, p < 0.001 and p < 0.001, respectively).
GFAP-immunostaining revealed significantly higher numbers of astrocytes in the 2-month- and 19-month-old mice compared to the 6-month-old, in the substantia nigra (Figure 6Biii). Astrocytes were also observed to be augmented in the 19- and 27-month-old relative to the 6-month-old mice. In the globus pallidus, astrocytes were significantly enriched at 27-months (p < 0.05) compared to both 2- and 6-month-old mice. Striatal astrocyte (p < 0.001) content was higher in the 19- and 27-month-old compared to the 2- and 6-month-old. Also, astrocytic staining was significantly higher at 27-months relative to 19-months (p < 0.05).
We also formulated an iron to ferritin ratio to assess how both measures change in relation to one another in the basal ganglia with age. A significant increase in the iron:ferritin ratio (Supplementary Figure S4i) was observed at 19- and 27-months compared to 2- and 6-months, in the substantia nigra (p < 0.01, p < 0.05, respectively) and globus pallidus (p < 0.05, p < 0.01, p < 0.001, respectively). In the striatum, a higher iron:ferritin ratio was observed only at 27-months (p < 0.05) of age relative to 2- and 6-months.
Ferritin- and Glial-Immunoreactive Cells
Since glial cells predominantly contain light-chain ferritin, we correlated ferritin with Iba1 or GFAP immunoreactivities and found heterogeneity in associations between ferritin and glial cells in the basal ganglia (Figure 7). Higher levels of ferritin-immunopositive cells were correlated to lower levels of microglia in the substantia nigra (R² = −0.25, p = 0.0475), while a positive association was observed between ferritin and microglia in the globus pallidus (R² = 0.55, p = 0.001) and striatum (R² = 0.46, p = 0.0041; Figure 7ii). Interestingly, a similar trend was apparent for ferritin and astrocytes (Figure 7iii), where ferritin was negatively correlated to astrocytes in the substantia nigra (R² = −0.54, p = 0.0013) but positively correlated to astrocytes in the globus pallidus (R² = 0.44, p = 0.0051) and striatum (R² = 0.66, p = 0.0001).
Relationship Between Metal Levels and Glia Numbers
We reasoned that glia, particularly those that surround neurons, may accumulate metals to maintain homeostasis locally to protect neurons from metal-induced oxidative stress, leading us to a plausible examination of the association between metal levels and glial cell numbers. Higher levels of iron were associated with higher numbers of microglia in the globus pallidus (R² = 0.57, p = 0.0008) and striatum (R² = 0.48, p = 0.0028), but not in the substantia nigra (R² = 0.17, p = 0.1084; Figure 8i). A positive linear correlation (Figure 8ii) was observed between iron levels and astrocytes in the substantia nigra (R² = 0.37, p = 0.013), globus pallidus (R² = 0.49, p = 0.0025) and striatum (R² = 0.84, p = 5.26 × 10⁻⁷). While generally a higher concentration of copper was related to higher numbers of microglia in the basal ganglia, significance was only reached in the globus pallidus (R² = 0.43, p = 0.0057), and not in other basal ganglia regions (Figure 8iii). There was also a generally positive association between copper and astrocytes in the substantia nigra (R² = 0.21, p = 0.0755), globus pallidus (R² = 0.16, p = 0.1189) and in the striatum (R² = 0.24, p = 0.0564), albeit not significant at the p < 0.05 level (Figure 8iv).
FIGURE 7 | Linear regression analysis was performed to examine the association of ferritin with iron (i), microglia (ii) and astrocytes (iii) in the basal ganglia. The regression equation for each graph, correlation coefficient (R²) and p-value (n = 16) for each analysis is noted.
FIGURE 8 | Linear regression analysis was performed to assess the association between (i) iron and microglia, (ii) iron and astrocytes, (iii) copper and microglia, (iv) copper and astrocytes in the basal ganglia. The regression equation for each graph, correlation coefficient (R²) and p-value (n = 16) for each analysis is noted.
Aging-Associated Changes in Metal to Glia Ratio
Aging is associated with perturbed brain metal homeostasis and altered glial biology; we therefore evaluated how metal levels change in relation to glia in the basal ganglia with advancing age. To investigate this, we calculated metal to glia ratios to elucidate the changing metal concentrations relative to glia (Supplementary Figure S4).
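The ratio itself is straightforward arithmetic. A minimal sketch (all values hypothetical) pairing each animal's ROI metal concentration with its glial count from the same region:

```python
import numpy as np

# Hypothetical per-animal values for one region (e.g., substantia nigra):
iron_ppm  = np.array([12.4, 14.9, 20.1, 22.6])  # ROI mean iron, one mouse per age
microglia = np.array([48.0, 55.0, 80.0, 90.0])  # Iba1+ cells per unit area

ratio = iron_ppm / microglia
for age, r in zip(["2m", "6m", "19m", "27m"], ratio):
    print(f"{age}: iron:microglia ratio = {r:.3f}")
```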
The iron:microglia ratio was augmented in the substantia nigra in 19- (p < 0.01) and 27-month-old (p < 0.05) mice compared to that in 2-month-old mice (Supplementary Figure S4ii). Similarly, the globus pallidus demonstrated a higher iron:microglia ratio at 19- and 27-months, relative to that at 2-months (p < 0.001) and 6-months (p < 0.01, p < 0.001). The striatum also exhibited a higher iron:microglia ratio at 19- and 27-months compared to that at 2-months, and a lower iron:microglia ratio was observed at 6-months compared to 19-months. The iron:astrocytes ratio was elevated in the substantia nigra and globus pallidus in the 19- (p < 0.05, p < 0.001, respectively) and 27-month-old mice (p < 0.001) compared to the 2-month-old (Supplementary Figure S4iii). Moreover, 19- (p < 0.001) and 27-month-old (p < 0.01) mice showed a higher iron:astrocytes ratio in the globus pallidus compared to the younger 6-month-old mice. Meanwhile, the striatal iron:astrocytes ratio was found to be significantly lower in the older 19- and 27-month-old mice in relation to the younger 2- and 6-month-old mice (p < 0.001, p < 0.05 and p < 0.001, p < 0.001, respectively).
Copper:microglia ratios were similar across the younger and older age groups in the basal ganglia (Supplementary Figure S4iv); however, the copper:astrocytes ratio was attenuated at 19- and 27-months compared to their younger counterparts (Supplementary Figure S4v), 2-months (p < 0.001) and 6-months (p < 0.01, p < 0.001). The substantia nigra exhibited an increased zinc:microglia ratio at 6-months vs. 2- (p < 0.01), 19- (p < 0.05) and 27-months (p < 0.05; Supplementary Figure S4vi), a similar pattern to that for the nigral zinc:astrocytes ratio (Supplementary Figure S4vii). An augmented striatal zinc:astrocytes profile was found in the 2-month-old (p < 0.001) compared to the 6- and 19-month-old, while a lower zinc:astrocytes ratio was evident at 19- (p < 0.01) and 27-months (p < 0.001) compared to 6-months.

FIGURE 9 | Linear regression analysis was performed to assess the association between (i) zinc and microglia, (ii) zinc and astrocytes in the basal ganglia. The regression equation for each graph, correlation coefficient (R²) and p-value (n = 16) for each analysis is noted.
DISCUSSION
We demonstrate altered metal deposition and glial dystrophy in the basal ganglia regions with increasing age, which could explain the susceptibility of selected brain regions to neurodegenerative disease. Most noticeably, increased heterogeneity and differential distributions of the metals in the various brain regions with aging were apparent. Specifically, iron was highly enriched in the basal ganglia and exhibited age-related increases. Meanwhile, copper and zinc seemed to show modest increments in their concentration confined to the globus pallidus with aging, with zinc appearing to be enriched in nerve bundles. Moreover, levels of microglia and astrocytes increased as a function of age in the basal ganglia, but the aged brain showed dystrophic glia, reminiscent of senescence. Since aging is associated with a perturbed BCSFB, metal compositions were observed to be altered; particularly striking was copper deposition at the choroid plexus/ventricles. Taken together, we demonstrate that changes in brain barrier permeability and glial dystrophy with aging may induce differential regional content of various metals and are a hallmark of the aging brain.
Iron
Consistent with previous reports including our own (Walker et al., 2016), iron was present at higher concentrations in the substantia nigra and globus pallidus compared to that in the striatum, cortex, and hippocampus at the older ages (Hallgren and Sourander, 1958). The iron levels increased with age predominantly in the basal ganglia, suggesting that iron is enriched in the brain in an age-dependent manner (Connor et al., 1990). The augmented iron levels can be attributed to the constant delivery of iron into the brain and decreased release of iron into the blood via the BBB and BCSFB (Burdo and Connor, 2003). It is tempting to postulate that iron may be drained from the cerebrospinal fluid (CSF) via the lymphatic system (Ashraf et al., 2018). Interestingly, an in vitro study showed the transport of unbound iron from CSF to the blood via a DMT1-mediated transport mechanism (Wang et al., 2008). Furthermore, following in vivo exposure of rats to toxic amounts of manganese, the clearance of iron from the CSF to the blood was significantly diminished, leading to increased CSF iron deposition (Wang et al., 2008). Defective iron clearance as a function of aging may explain the qualitative increases in ventricular iron we observed in older-aged mice. Moreover, astrocytes, which serve vital functions including transport of iron across brain-barrier systems and maintenance of brain iron homeostasis (Kubik and Philbert, 2015; Ashraf et al., 2018), demonstrated qualitatively age-related decreases in the regions close to the BCSFB (fimbria/ventral commissure level) in this study, reinforcing apparent impaired iron clearance from the brain with aging.
Usually, elevated iron is consistent with altered expression levels of iron-binding proteins, especially of ferritin, as its expression is induced by increased iron (Burdo et al., 1999). Indeed, increased ferritin mirrored the iron deposition we observed in the globus pallidus and striatum. However, a dissociation between iron accumulation and ferritin upregulation was observed in the aged substantia nigra, where higher levels of iron were not associated with increased levels of ferritin, as we and others have reported previously (Benkovic and Connor, 1993; Walker et al., 2016). Ferritin is composed of 24 subunits of two types, heavy and light chains, forming a soluble hollow shell capable of storing up to 4,500 ferric iron atoms (Harrison and Arosio, 1996). This cellular iron storage protein is present in neurons, microglia, and oligodendrocytes, with oligodendrocytes being the highest iron-containing cells in the brain (Benkovic and Connor, 1993). Interestingly, the detection of ferritin in astrocytes is unusual, in that astrocytes exhibit weak ferritin immunoreactivity (Mirza et al., 2000). Heavy chain ferritin exhibits ferroxidase activity and converts the reactive toxic ferrous ion to the more stable ferric ion, so that iron can be stored by light chain ferritin (Muhoberac and Vidal, 2013). Microglia predominantly contain light chain ferritin as they are more concerned with scavenging iron (Ashraf et al., 2018) and have been shown to accumulate more iron than neurons, with microglia being better scavengers than astrocytes (Bishop et al., 2011). In the present study, we found a positive linear correlation between ferritin- and glial-immunoreactive cells in the globus pallidus and striatum, consistent with a previous report (Schipper et al., 1998). Interestingly, ferritin-immunopositive microglia have been demonstrated to become more pronounced with age, whereas neuronal ferritin staining remains unaltered despite elevated iron as assessed by Perl's histochemical staining (Benkovic and Connor, 1993). Elevated iron in neurons, in the absence of a concomitant increase in neuronal ferritin, may predispose neurons to iron-induced free radical damage and ensuing oxidative stress. In the vulnerable aged brain, the role of glial ferritin in sequestrating and detoxifying iron is even more paramount.
A negative correlation was observed between ferritin- and GFAP-immunopositive cells in the substantia nigra. Due to its exceptionally high iron content, the substantia nigra adopts two different iron storage systems, one based on ferritin and the other on neuromelanin (Tribl et al., 2009), enabling synergistic regulation of iron homeostasis. However, our age-related observation of decreased ferritin, along with reports signifying declining neuromelanin content (Tribl et al., 2009; Xing et al., 2018) in the aged substantia nigra, suggests an increased ferrous iron pool that can precipitate oxidative stress during brain aging, contributing to the increased risk of neurodegenerative diseases, particularly PD, with aging (Dexter et al., 1987; Xu et al., 2018).
We and others have previously reported increased microglial and astroglial cell numbers with aging in the basal ganglia, providing evidence for a more primed or inflammatory profile (Codazzi et al., 2015; Walker et al., 2016; Boisvert et al., 2018). Aging is associated with chronically elevated levels of circulating cytokines including TNFα, IL1β and TGFβ, with glial cells being major driving factors for brain aging (von Bernhardi et al., 2010). Interestingly, microglial and astroglial iron homeostasis has been shown to be differentially regulated by TNFα and TGFβ. Treatment of astrocytes with pro-inflammatory TNFα induced expression of DMT1 and suppressed ferroportin expression, while anti-inflammatory TGFβ-treatment did not affect DMT1 expression but increased ferroportin expression (Rathore et al., 2012). On the contrary, treatment of microglia with either TNFα or TGFβ leads to augmented DMT1 expression together with suppression of ferroportin. These findings demonstrate that TNFα leads to iron uptake and retention by both microglia and astrocytes, while TGFβ promotes iron efflux from astrocytes but increased microglial iron retention (Rathore et al., 2012). Astrocytes appear to be amenable to modulation, while microglia are fuelled by iron to perpetuate inflammation, hastening the aging process and contributing to increased susceptibility to acquiring neurodegenerative disease.
Interestingly, we found an increased iron to microglia ratio in the basal ganglia, while the iron to astroglia ratio was elevated in the substantia nigra and globus pallidus but decreased in the striatum during aging. This could be indicative of the different roles performed by astrocytes on a regional basis. Nigral astrocytes have been found to be overly sensitive to acute ischemia compared to those in other brain regions (Karunasinghe et al., 2018), and exhibit vulnerability to oxidative insult (Cardoso et al., 2012, 2014). The aged nigra is associated with both iron and copper deposition in subsets of astrocytes (Schipper et al., 1998), rendering these astrocytes inherently prone to perturbations in metal redox homeostasis. In astrocytes, oxidative inactivation of aconitase, also known as iron-responsive protein-1 (IRP1) and involved in regulating cellular iron, has been linked to increased ferrous iron and hydrogen peroxide production (Cantu et al., 2009), promoting bioactivation of dopamine and other catechols to neurotoxic free radicals. Of note, augmented release of IL-1β and TNFα by iron-loaded microglia induced upregulation of IRP1, DMT1, hepcidin and transferrin receptor-1 (TfR1) expression in ventral mesencephalic neurons via production of reactive oxygen species (ROS), enhancing neuronal iron accumulation (Xu et al., 2016). Inefficient sequestration of redox-active iron by aging nigral glia, concomitant with the dissociation between iron and ferritin upregulation mentioned above, may predispose the senescent nervous system to oxidative stress-mediated neurodegeneration (via Fenton reactions) in Parkinsonism and other neurodegenerative diseases (Zecca et al., 2004).
While microglia may be unequivocally involved in perpetuating the generation of ROS in the striatum, astrocytes may play more of a neuromodulatory role (Pelizzoni et al., 2013). The declining iron to astroglia ratio in the striatum observed in the present study could be indicative of a compensatory response mediated by astrocytes to regulate iron levels (Knott et al., 1999). Striatal astrocytes are commissioned with the task of physiological clearance of age-related synaptic debris, particularly of degenerated dopaminergic neurons (Morales et al., 2017). Dopaminergic neurons undergo an insidious degeneration (∼6-8% of cells every decade) during normal aging, while in PD, a loss of 60% of striatal synapses is evident (Rodriguez et al., 2015). Previous data suggest that if astrocytes can perform functional trans-autophagy of cell debris, this should be sufficient to ensure striatal tissue homeostasis. However, the onset of PD may be characterized by impaired astroglial trans-autophagy, necessitating microglial activation to complete the clearance of dopaminergic neuronal debris, thereby aggravating the onset and progression of PD (Morales et al., 2017).
As previously documented, microglia and astroglia demonstrated pronounced morphological changes with aging in the basal ganglia, where senescent dystrophic glia exhibited hypertrophic soma and decreased arborization of processes (Jyothi et al., 2015). These glial cells may undergo aberrant signaling, leading to age-related metal dyshomeostasis, and represent a contributing cause in neurodegenerative processes.
Copper
Copper levels were higher in the striatum, cingulate cortex, and ventral hippocampus compared to the globus pallidus in 6-month-old mice. The older mice exhibited increases in copper levels in the globus pallidus compared to 2- and 6-month-old mice, and high copper levels were correlated with an augmented number of microglia. Akin to iron, copper can also undergo Fenton chemistry, so high copper levels promote ROS production, and altered copper homeostasis is prevalent in neurodegenerative diseases (Wang et al., 2010; Zheng et al., 2010; Zheng and Monnot, 2012). Aging is characterized by a chronic neuroinflammatory state, and interferon γ-stimulated microglial cells have been associated with augmented copper uptake accompanied by expression of the copper importer, copper transporter-1 (CTR1), particularly in the choroid plexus (Zheng et al., 2010). Copper was strikingly enriched in the ventricles with aging in the present study, consistent with previous observations of increased copper at the choroid plexus (Fu et al., 2015). It has been suggested that the BCSFB is the predominant barrier for regulated copper uptake in the brain (Choi and Zheng, 2009). Astrocytes in the vicinity of the ventricles are in a prime location to balance brain copper content and achieve detoxification, having access to both interstitial fluid and CSF (Pushkar et al., 2013). Moreover, the concentrations of other metals did not reach the striking levels exhibited by copper with aging, implying that astrocytes are chiefly involved in regulating brain copper levels. Our finding of increased ventricular copper deposition is further strengthened by literature showing augmented genetic and protein expression levels of copper transporters (e.g., ATP7A) at the choroid plexus compared to the brain parenchyma (Choi and Zheng, 2009; Fu et al., 2015). We found attenuated astrocytic expression close to the ventricles at the level of the ventral commissure/fimbria with aging; with astrocytes being intimately involved in sequestrating copper, age-associated dysregulation of astroglial copper-buffering may occur and lead to toxic copper accumulation (Zheng and Monnot, 2012). Increased copper levels have also been linked to reduced neurogenesis (Pushkar et al., 2013), which may be another contributory factor in the susceptibility of the aged brain to neurodegenerative diseases.
Zinc
Zinc is a pleiotropic modulator of synaptic plasticity, neuronal activity, and cognitive processes, with ∼30% of zinc in the brain existing in the free/chelatable form, stored within synaptic vesicles of glutamatergic forebrain neurons, and the remaining ∼70% bound to proteins (Portbury and Adlard, 2017). Zinc levels in the brain are regulated by metallothioneins, Zrt-/Irt-like proteins (ZIPs), and zinc transporter proteins (ZnTs; Hennigar and Kelleher, 2012). Zinc is released from presynaptic vesicles into the synaptic cleft, coincident with glutamate, and regulates, e.g., long-term potentiation via activation of N-methyl-D-aspartate (NMDA) receptors (Takeda and Tamano, 2012). Histochemical staining, which visualizes free/chelatable (usually synaptic) zinc, long ago established that zinc is highly localized in nerve bundles, a finding subsequently confirmed by proton-induced X-ray emission spectroscopy (Danscher et al., 1985). Similarly, we observed zinc to be enriched in the stria medullaris of the thalamus, the corpus callosum, the fornix, and the hippocampal fissure, especially in older aged brains. Further, zinc accumulation has been shown to be higher in the hippocampus compared to other brain regions, with the exception of the cerebellum, following the administration of radioactive zinc to rats (Sawashita et al., 1997). This is consistent with the higher zinc in the ventral hippocampus compared to the substantia nigra at 6 months of age that we observed and underscores the importance of zinc in learning and memory consolidation (Sindreu and Storm, 2011). Zinc levels were also higher in the globus pallidus than in the substantia nigra at 2 months of age, albeit comparable to those in the hippocampus. Zinc is known to modulate GABAergic transmission in the globus pallidus (Chen and Yung, 2006) and is likely to have a major role during neurodevelopment in this brain region. However, the globus pallidus demonstrated lower zinc levels compared to the substantia nigra at 6 months of age. Indirect inputs from the striatal medium spiny neurons to the substantia nigra pars reticulata pass via the external part of the globus pallidus and are mediated by inhibitory neurons, such that medium spiny neuron excitation further inhibits the thalamus via this pathway (Frank, 2005). The reversal in the zinc gradient between the globus pallidus and the substantia nigra as mice matured from 2 to 6 months of age suggests alterations in the electrical activity/tone of this indirect basal ganglia circuit during neurodevelopment. We also observed a similar pattern of increased zinc in the cingulate cortex as mice aged from 2 to 6 months, consistent with zinc-containing neurons being required in brain development to form the complex and elaborate associational networks that interconnect the cortex with the limbic system (Corona et al., 2011).
Brain zinc levels have been reported to increase with aging in both rats (Sawashita et al., 1997) and man (Markesbery et al., 1984). We also observed augmented zinc levels in the globus pallidus of 27-month-old mice compared to 2-, 6-, and 19-month-old mice. Interestingly, zinc accumulation has been reported in the globus pallidus in the 6-hydroxydopamine rat model of PD (Tarohda et al., 2005) and in the substantia nigra and striatum of PD subjects (Dexter et al., 1989). Thus, the augmented zinc in the globus pallidus of very old mice suggests that zinc dysregulation may be another factor underlying aging as the major risk factor for neurodegenerative disease. While we did not observe significantly increased hippocampal zinc levels, this has been observed in aging rats, in which zinc chelation therapy was shown to attenuate deficits in synaptic plasticity (Shetty et al., 2017). Note that this discrepancy may arise because SRXRF maps total elemental zinc content rather than free/chelatable zinc, which is often the form detected by other zinc measurement techniques.
Zinc has primarily been associated with neuronal function; however, glial cells are commissioned with the task of maintaining zinc homeostasis, thereby maintaining optimal synaptic signaling (Hancock et al., 2014). We found a significant association between zinc and glial cells in the globus pallidus. Indeed, astrocytes accumulate zinc (Nolte et al., 2004) via expression of the zinc transporter ZIP14 (Bishop et al., 2010), enabling astrocytes to maintain glutamatergic and GABAergic synaptic transmission (Hancock et al., 2014). Microglia can also directly take up zinc via another zinc transporter, ZIP1, which serves as a trigger for sequential microglial activation (Higashi et al., 2011). The glial senescence observed in this study may induce aberrant connectivity at the tripartite synapse by enhancing zinc dyshomeostasis with aging.
During neurodevelopment (from 2 to 6 months of age), the increased zinc to glia ratio observed may reflect increased availability of zinc to neurons and glia for synaptogenesis/synaptic pruning. In humans, peak synaptic density has been demonstrated to occur in mid-childhood (Liu et al., 2012), with synaptic pruning extending into the third decade of life (Petanjek et al., 2011). However, the ratio of zinc to astroglia was reduced in the basal ganglia at the older ages, which may be attributed to the age-associated astrogliosis observed. Thus, a situation of functional zinc deficiency may be induced, as opposed to the zinc excess suggested above; regardless, either scenario may contribute to the synaptic dysfunction observed in aging (Szewczyk, 2013).
CONCLUSION
Metal dyshomeostasis and the presence of a low-grade chronic inflammation owing to dystrophic glial cells are pervasive features of normal aging, rendering the brain susceptible to neurodegenerative diseases.
DATA AVAILABILITY STATEMENT
In accordance with the UK Research Councils' Common Principles on Data Policy, the data supporting this study will be openly available in the Supplementary Material.
ETHICS STATEMENT
Ethical review and approval was not required for the animal study because the brain samples were purchased/received; the authors did not handle live animals at any time.
AUTHOR CONTRIBUTIONS
P-WS contributed to the conception and design of the study. AE, TW, MS, AS, and SA performed the histology and data analysis. P-WS, CM, HP, AP, TW, and KG acquired and analyzed SRXRF data. AA performed the statistical data analysis. P-WS and AA wrote the first draft of the manuscript with contributions from AE, MS, and TW. AA, P-WS, AE, HP, MS, and SA revised and approved the submitted version.
FUNDING
We would like to thank the Biotechnology and Biological Sciences Research Council (BBSRC), King's College London, and Perspectum Diagnostics Limited for funding AA's industrial Ph.D. studentship; the BBSRC and Agilent Technologies Limited for funding TW's Cooperative Awards in Science and Technology (CASE) studentship (BB/J012777/1); and the Engineering and Physical Sciences Research Council (EPSRC) and Agilent Technologies for funding CM's CASE studentship (voucher 11330179). Open access publication fees were received from King's College London Library.
"year": 2019,
"sha1": "b59a3f1afbac3f3568d2f90a6eae593723427667",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2019.00351/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b59a3f1afbac3f3568d2f90a6eae593723427667",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Assessment of the hardness of different orthodontic wires and brackets produced by metal injection molding and conventional methods
Background: This study was conducted to assess the hardness of orthodontic brackets produced by metal injection molding (MIM) and conventional methods, and of different orthodontic wires (stainless steel, nickel-titanium [Ni-Ti], and beta-titanium alloys), with a view to better clinical results. Materials and Methods: A total of 15 specimens from each brand of orthodontic brackets and wires were examined. The brackets (Elite Opti-Mim, produced by the MIM process, and Ultratrimm, produced by a conventional brazing method) and the wires (stainless steel, Ni-Ti, and beta-titanium) were embedded in epoxy resin, followed by grinding, polishing, and coating. Then, X-ray energy dispersive spectroscopy (EDS) microanalysis was applied to assess their elemental composition. The same specimen surfaces were repolished and used for Vickers microhardness assessment. Hardness was statistically analyzed with the Kruskal–Wallis test, followed by the Mann–Whitney test at the 0.05 level of significance. Results: The X-ray EDS analysis revealed a different ferrous or Co-based alloy in each bracket. The maximum mean hardness among the wires was recorded for stainless steel (SS) (529.85 Vickers hardness number [VHN]) versus the minimum for beta-titanium (334.65 VHN). Among the brackets, Elite Opti-Mim exhibited significantly higher VHN values (262.66 VHN) compared to Ultratrimm (206.59 VHN). VHN values of the wire alloys were significantly higher than those of the brackets. Conclusion: MIM orthodontic brackets exhibited hardness values much lower than those of SS orthodontic archwires and were more compatible with NiTi and beta-titanium archwires. A wide range of microhardness values has been reported for conventional orthodontic brackets, and it should be considered that the manufacturing method might be only one of the factors affecting the mechanical properties of orthodontic brackets, including hardness.
INTRODUCTION
In orthodontic treatment, forces are applied to teeth through activated archwires inserted into the slots of the brackets bonded to tooth enamel surfaces.
Three different methods are used to manufacture metallic brackets: milling, casting, and metal injection molding (MIM). Combined brackets are manufactured by soldering with brazing alloys to connect the base and wings of the brackets or by direct laser welding of the wings to the base. [1,2] The MIM technique is more recent than the other methods and was developed in the United States in the early 1980s. [3] It is an inexpensive manufacturing process compared to the other methods and is used to manufacture large quantities of complex and intricate parts. MIM makes it possible to use different alloys to manufacture orthodontic brackets, which is not always possible with the other manufacturing methods. [4][5][6] Single-unit MIM brackets exhibit uniform elemental distribution with no brazing components and therefore no intra-bracket galvanic corrosion; however, they have increased porosity, which increases the risk of pitting corrosion. [5,7,8] In comparison to conventional brackets, MIM brackets exhibited a lower rate of nickel ion release into saliva. [9] The method of production might seriously affect the clinical mechanical performance of orthodontic brackets, and although a large number of studies have compared the corrosive potential of MIM and conventional metal brackets, only a limited number have compared the mechanical properties of these appliances. [5,[7][8][9][10][11] This study was undertaken to assess the hardness of orthodontic brackets produced by MIM and conventional methods, as well as of different orthodontic wires (stainless steel, nickel-titanium [Ni-Ti], and beta-titanium alloys), to determine which wire is more compatible with each bracket and thereby decrease the consequences of bracket-wire hardness mismatch.
MATERIALS AND METHODS
The brackets in this experimental study consisted of injection-molded (Elite Opti-Mim, Ortho Organizers, USA) and conventional brazed (Ultratrimm, Dentaurum, Germany) orthodontic brackets. The two types of brackets were edgewise brackets with a slot size of 0.018" for the upper left canine. The wires were made of stainless steel (SS) (Remanium, Dentaurum, Germany), nickel-titanium (NiTi, Ortho Technology, USA), and beta-titanium (TMA, Ortho Technology, USA). All the archwires had the same rectangular cross-sectional configuration (0.017" × 0.025") and were cut into 15-mm segments. Fifteen specimens from each bracket and wire brand were evaluated. To this end, the wires were embedded in epoxy resin, and the brackets were positioned horizontally to expose the wing area for hardness assessment. The specimens were then ground with water-cooled 220-2000-grit silicon carbide papers and polished down to a 0.05-µm alumina slurry (Buehler, Lake Bluff, IL, USA). Then, the specimens were cleaned in an ultrasonic bath for 5 min, and three specimens from each study group were vacuum coated with a thin layer of gold to determine the elemental composition by X-ray energy dispersive spectroscopy (EDS) microanalysis. A scanning electron microscope (Seron AIS 2300, Seron, Korea) connected to an EDS unit equipped with a super-ultra-thin beryllium window was used. These specimens were repolished, and the exposed surfaces of all fifteen specimens from each experimental group underwent a Vickers hardness (VHN; HV200) test, using a microhardness tester (Micromet 5101, Buehler, Tokyo, Japan) that applied a 200-g load for 15 s. The hardness of the external surfaces of the brackets and wires was measured, with only the wing component of the brackets being assessed. Three readings were recorded from the center of each specimen, and the mean value was calculated to represent the specimen. Micrographs of the representative Vickers indentations were obtained at ×200 through an optical microscope (Metallux, Leitz, Germany) equipped with a digital color camera. Since the data did not exhibit a normal distribution, the hardness data were statistically analyzed with the Kruskal-Wallis test, followed by Mann-Whitney tests. Figure 1 illustrates representative X-ray EDS spectra obtained from the surfaces of the tested brackets and wires. The elemental compositions of the brackets and wires as determined by EDS analysis are presented in Tables 1 and 2, respectively.
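The HV200 measurement and the non-parametric workflow above can be illustrated with a short, hedged sketch. The indentation diagonals and group samples below are invented for demonstration only (group means are taken from the reported results), and the helper names are hypothetical, not part of the study's actual procedure:

```python
import numpy as np
from scipy import stats

def vickers_hardness(load_kgf, diagonal_mm):
    # Standard Vickers relation: HV = 1.8544 * F / d^2 (F in kgf, d in mm)
    return 1.8544 * load_kgf / diagonal_mm ** 2

# HV200 test: a 200-g load = 0.2 kgf; three readings averaged per specimen.
diagonals = np.array([0.0376, 0.0379, 0.0374])  # mm, invented readings
print(round(float(np.mean(vickers_hardness(0.2, diagonals))), 1))  # ~262 VHN

# Omnibus Kruskal-Wallis across the five groups, then pairwise
# Mann-Whitney follow-ups, mirroring the statistics described above.
rng = np.random.default_rng(0)
means = {"SS": 530, "NiTi": 384, "TMA": 335, "MIM": 263, "Brazed": 207}
samples = {k: rng.normal(mu, 15, 15) for k, mu in means.items()}
h, p = stats.kruskal(*samples.values())
print(f"Kruskal-Wallis: H = {h:.1f}, p = {p:.2g}")
names = list(samples)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        _, p_ab = stats.mannwhitneyu(samples[a], samples[b],
                                     alternative="two-sided")
        print(f"{a} vs {b}: p = {p_ab:.2g}")
```

In practice a multiplicity adjustment (e.g., Bonferroni) could be applied to the pairwise p-values; the study reports P < 0.001 for all comparisons either way.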
RESULTS
The results of the Vickers hardness (VHN) measurements are presented in Figure 2 and Table 3. Micrographs of representative Vickers indentations, obtained through the optical microscope, are also shown. VHN values of the wire alloys were significantly higher than those of the brackets studied. Comparisons of microhardness data among the five experimental groups were carried out with the Kruskal-Wallis test, which revealed a significant difference between the groups (P < 0.001). Mann-Whitney tests were also employed for pairwise comparisons and demonstrated significant differences among the study groups (P < 0.001 for all comparisons).
DISCUSSION
Given the important role of hardness in the clinical performance of orthodontic appliances, this study was conducted to assess the hardness of MIM and conventional orthodontic brackets and of different orthodontic wires to determine which combination leads to better clinical results. The results of EDS analysis for the two bracket groups showed that each bracket had been manufactured from a different alloy. In the case of the Elite Opti-Mim bracket, Klimek and Palatynska-Ulatowska [12] reported it as an Fe-Cr alloy; however, our findings suggested that it consisted of a Co-based alloy. Based on the results of the present study, the elemental composition of Ultratrimm falls within the range of the austenitic American Iron and Steel Institute (AISI) type 305 SS alloy used for manufacturing metallic brackets (17%-19% chromium and 11%-13% nickel, with small amounts of manganese and silicon and a low carbon content, typically <0.06%). However, EDS cannot be used to quantify light elements such as carbon; therefore, the results should be interpreted with caution. [13,14] Regarding the hardness of the wires, the Vickers microhardness of the SS wire (529.85 VHN) was significantly higher than that of the NiTi wire (384.08 VHN). The beta-titanium wire exhibited the lowest hardness value (334.65 VHN). These findings are consistent with previous reports in which SS wires exhibited the highest hardness (468-601 VHN) [15][16][17][18][19] compared to the other two alloys, while Ni-Ti (240-438 VHN) [16][17][18][20][21][22] and TMA (292-378 VHN) [15][16][17][18]20,23,24] exhibited lower values with overlapping ranges.
In relation to bracket hardness, Zinelis et al. [5] reported that the Vickers microhardness of MIM brackets varied from 154 to 287 VHN; these results are consistent with the results of the present study. In our study, Elite Opti-Mim exhibited a hardness value of 262.66 VHN, significantly higher than that of Ultratrimm (206.59 VHN), probably due to the presence of a Co-Cr alloy rather than the ferrous alloy used in the Ultratrimm bracket.
Surface properties are important factors in the sliding technique for orthodontic space closure. [25] Increased hardness preserves the surface integrity of orthodontic brackets, preventing wire binding and impingement on the bracket slot walls, which might otherwise impede movement of the bracket along the archwire. Moreover, low-hardness wing components might complicate the transfer of torque from an activated archwire to the bracket, since they might prevent full engagement of the wire with the slot wall and permit plastic deformation of the wings. [17,22,23] Ultratrimm is a conventional SS orthodontic bracket manufactured by soldering the base and wing parts. [26] Previous studies have suggested that the VHN of one-piece brackets produced by MIM technology (154-287 VHN) is much lower than the hardness (400 VHN) of the wing components of conventional SS brackets; [5,27] however, in the present study, the hardness value of the conventional brackets was much lower than that of the MIM brackets.
Such a difference might be explained by the fact that the manufacturing technique is not the only factor affecting the mechanical properties of orthodontic brackets; other factors might include the type of alloy used for bracket manufacturing, its microstructure, thermal treatments applied after bracket fabrication, and other manufacturing process factors. For instance, the bracket tested in the study mentioned [27] was Mini Diamond (Ormco, Glendora, CA, USA). The composition of the SS alloy used for manufacturing this bracket's wing material is very close to that of the S17400 precipitation-hardening alloy (type 17-4 PH SS, with nominal composition [wt%]: 0.07 C, 0.70 Mn, 1.00 Si, 15-17.5 Cr, 3.0-5.0 Ni, 3.0-5.0 Cu, 0.04 P, 0.04 S, and 0.15-0.45 Ta and Nb), [27,28] which achieves high strength and hardness through heat treatment and therefore has better mechanical properties than the austenitic type 305 SS used in Ultratrimm. [5,14,27,28] As mentioned previously, the hardness of orthodontic brackets and wires should be similar, and the results of this study are consistent with previous studies suggesting that MIM brackets are more compatible with NiTi archwires in terms of decreasing the consequences of hardness mismatch. [5,13,29] However, it should be pointed out that the fabricating method might be only one of the factors affecting the mechanical properties of orthodontic brackets, including hardness, and further studies assessing these factors are needed.
CONCLUSION
The results of this study suggested that MIM orthodontic brackets exhibited hardness values much lower than SS orthodontic archwires, with greater compatibility with NiTi and beta-titanium archwires.
In relation to conventional orthodontic brackets, a wide range of microhardness values has been reported and it should be pointed out that the manufacturing method might be only one of the factors affecting the mechanical properties of orthodontic brackets.
Financial support and sponsorship
Nil.
"year": 2017,
"sha1": "7de4e22f270bf80012a6c8425186c36ed28a1a03",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/1735-3327.211620",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7de4e22f270bf80012a6c8425186c36ed28a1a03",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Clinical Outcomes of an Optimized Prolate Ablation Procedure for Correcting Residual Refractive Errors Following Laser Surgery
Purpose The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. Methods We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Results Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. Conclusions The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.
known to increase the odds of requiring retreatment [6]. Although not all of the mechanisms involved in myopic regression have been elucidated, the number of patients who require retreatment for myopic regression after primary refractive surgery has decreased. The reported decrease has been attributed to an enlarged optical zone, the development of laser technology, and improved postoperative wound management. However, some patients still experienced myopic regression after the performance of primary refractive surgery. Further, retreatment planning following primary refractive surgery is dependent on the individual needs of each patient.
Retreatment tends to be less predictable compared with primary ablation because of difficulties related to analysis of the refractive state, unpredictable rates of wound healing, and corneal irregularities induced by previous ablation procedures. Previous studies have reported that the combination of corneal wavefront and topographic aspheric treatments has been used successfully to treat eyes that have undergone previous refractive surgeries [5,7].
Optimized prolate ablation (OPA) employs wavefront aberrometry and corneal topography to treat preexisting spherical aberrations and to maintain the preoperative corneal asphericity (Q value) [8,9]. As improved outcomes had been achieved in previous studies with the combined application of the corneal wavefront and topographic aspheric treatments, we endeavored to assess clinical outcomes following application of the OPA retreatment method. In the current study, we investigated the clinical efficacy of an OPA procedure for the treatment of persistent refractive errors following previous refractive surgery.
Materials and Methods
In this retrospective study, we evaluated patients who had undergone retreatment OPA following previous LASIK, LASEK, or PRK surgery. The study was conducted in accordance with the Declaration of Helsinki, and surgical procedures were conducted after all patients participated in a thorough preoperative discussion of the risks and benefits of OPA and informed consent was obtained.
Inclusion criteria for patients who had undergone previous LASIK, LASEK, or PRK between 1997 and 2012 included an age greater than 20 years and myopia measurements less than or equal to -3.50 manifest refraction spherical equivalent (MRSE) diopters (D). Patients who had active systemic ocular disease or who underwent previous ocular surgery other than the aforementioned primary refractive surgeries were excluded from the study. All patients were symptomatic and had residual refractive errors.
All patients underwent a preoperative ophthalmic evaluation, and uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), manifest refraction, corneal topography, wavefront aberrometry, modulation transfer function (MTF), point spread function (PSF), slit-lamp biomicroscopy, tonometry, and fundus measurements were obtained postoperatively at 1, 3, and 6 months. Corneal haze was assessed on a scale from 0 to 4 according to the method proposed by Fantes et al. [10]. Corneal topography and wavefront aberrometry were measured on the same optical axis using an OPD-Scan III wavefront aberrometer (Nidek, Tokyo, Japan), following the application of tropicamide 0.5%-phenylephrine 0.5% eye drops (Mydrin-P; Santen, Osaka, Japan) and the subsequent dilation of the pupil to 6.0 mm. All wavefront measurements were performed to the eighth Zernike order. The device software separated corneal and internal aberrations to evaluate the effects of the optical elements in the visual system. Corneal asphericity (Q value) was measured by software-simulated corneal topography. Objective visual quality was assessed using the MTF graph, which reveals the degree of contrast transfer at different spatial frequencies; specifically, an area ratio was calculated as the ratio of the area under the MTF graph (bounded by the vertical and horizontal axes) to the corresponding area under the normal-eye curve. PSF was also quantified using the Strehl ratio, which is the ratio of the PSF value to the theoretical diffraction limit.
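For readers unfamiliar with these metrics, the following is a minimal numerical sketch, not the OPD-Scan III's proprietary pipeline: it builds a toy pupil carrying a small amount of defocus and spherical aberration, computes the PSF by Fourier optics, and derives the Strehl ratio and a crude MTF area ratio. All pupil and aberration values are assumptions chosen for illustration.

```python
import numpy as np

n = 256
y, x = np.mgrid[-1.5:1.5:n * 1j, -1.5:1.5:n * 1j]
r = np.hypot(x, y)
aperture = (r <= 1.0).astype(float)      # unit-radius pupil

# Toy wavefront (in waves): a little defocus plus spherical aberration.
w = aperture * (0.05 * (2 * r**2 - 1) + 0.03 * (6 * r**4 - 6 * r**2 + 1))

def psf(wavefront):
    pupil = aperture * np.exp(2j * np.pi * wavefront)
    return np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2

psf_ab, psf_ideal = psf(w), psf(np.zeros_like(w))
strehl = psf_ab.max() / psf_ideal.max()  # ratio to the diffraction limit
print(f"Strehl ratio ~ {strehl:.2f}")

def mtf(p):
    return np.abs(np.fft.fft2(p / p.sum()))  # MTF(0) normalized to 1

# Crude "area under the MTF" comparison against the aberration-free eye.
area_ratio = mtf(psf_ab).sum() / mtf(psf_ideal).sum()
print(f"MTF area ratio ~ {area_ratio:.2f}")
```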
All patients underwent treatment by one surgeon (BJC) using the EC 5000 CXII excimer laser platform (Nidek). Manifest refraction, corneal topographic data, and wavefront data obtained from the OPD-Scan III were considered when designing the ablation procedure. All data were transferred to the Optimized Prolate Ablation Software (ver. 1.00, Nidek) for treatment planning. The ablation design was established automatically using the software, by setting the value of the postoperative ocular spherical aberration to zero. The software did not target to correct ocular coma and trefoil aberration. The surgeon adjusted the degree of refractive correction for age, pretreatment manifest refraction, and CDVA. The eyes of each patient were prepared in a sterile fashion, and a topical anesthetic (proparacaine hydrochloride 0.5%, Alcaine; Alcon, Fort Worth, TX, USA) was instilled. Following the application of the Carones LASEK Pump OZ Chamber (9.0 mm in diameter; ASCIO, Copenhagen, Denmark), the cone was filled with 20% alcohol solution mixed with Liquifilm tears (Polyvinyl Alcohol 1.4%; Allergan, Irvine, CA, USA). The alcohol solution was flushed out for 40 seconds after instillation using a cold balanced salt solution. To detect torsion error, the image of the patient's iris was compared with the image acquired using the OPD-Scan III device. If the torsion error was greater than 2°, the patient's head was repositioned in an attempt to decrease the error. The optical zone was established using the Optimized Prolate Ablation software, and covered the entire scotopic pupil diameter, considering the ablation depth. The transition zone was determined to be 1.5 mm larger than the optical zone diameter. The mean optical zone and transition zone were 6.3 mm and 7.9 mm, respectively. A 14-ring-shaped filter paper disc saturated with 0.02% mitomycin C was applied on the cornea for 15 seconds to prevent contact with the central portion of the cornea. Irrigation with cold balanced salt solution was performed for 15 seconds following the removal of the paper disc [11,12]. A therapeutic soft contact lens was then applied to ensure complete epithelial healing. Finally, a standardized regimen of topical steroids and antibiotics was recommended for each patient.
Postoperative UDVA and CDVA, ocular wavefront aberrations, corneal spherical aberration, corneal asphericity, MTF, and PSF were compared to preoperative (before retreatment) values using analysis of variance and Bonferroni tests. A p-value less than 0.05 was considered statistically significant. All analyses were performed using SPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA).
[Table 1 legend: values are presented as mean ± standard deviation. UDVA = uncorrected distance visual acuity; logMAR = logarithm of the minimum angle of resolution; CDVA = corrected distance visual acuity; D = diopters; MRSE = manifest refraction spherical equivalent; HOA = higher-order aberrations; SA = spherical aberration; PSF = point spread function; MTF = modulation transfer function; HO = higher-order. *Difference between groups; p-values were calculated using analysis of variance with Bonferroni correction. †Statistically significant difference compared to the preoperative value. ‡Simulated using only higher-order aberrations. Spherical aberrations are reported for the entire eye (ocular) and cornea at a 6.0-mm diameter to the eighth Zernike order; corneal asphericity (Q value) is reported for a 6.0-mm diameter.]
Patients
The study included 24 eyes from 15 enrolled patients, including eight males and seven females. The mean patient age was 33 years (range, 25 to 45 years). Twelve eyes (eight patients) had undergone a previous LASIK procedure, nine eyes (five patients) a previous LASEK procedure, and three eyes (two patients) had undergone PRK. The mean interval between the first refractive procedure and retreatment was 108 ± 60 months (range, 12 months to 20.9 years). No intraoperative or postoperative complications were detected.
Wavefront aberrations
The mean root mean square of ocular higher-order aberrations (HOA) did not change significantly after OPA retreatment at any postoperative time point (Fig. 7A). The mean ocular spherical aberration decreased from 0.33 ± 0.21 µm preoperatively to 0.17 ± 0.14 µm (p < 0.005) at 1 month postoperatively, but exhibited no statistically significant change at 3 or 6 months compared to baseline (Fig. 7B). While the mean coma aberration increased significantly at 3 and 6 months postoperatively, no statistically significant changes were noted in the mean trefoil aberration (Fig. 7C and 7D). Corneal spherical aberration remained unchanged at all postoperative time points (Fig. 7B).
Corneal asphericity and haze
Corneal asphericity (Q value) was maintained after OPA retreatment at all postoperative time points; however, the Q value tended to increase after OPA retreatment. Corneal haze after retreatment remained minimal.
Objective visual quality
The mean MTF area ratio increased significantly at all postoperative time points compared to the preoperative values (Table 1). MTF simulated with the eye corrected for lower-order aberrations (e.g., sphere and cylinder) and the higher-order (HO) MTF exhibited no significant changes after retreatment. PSF, presented as the Strehl ratio (the ratio of the PSF value to the theoretical diffraction limit), remained unchanged postoperatively.
Discussion
The essential advantages of OPA are that it minimizes the induction of higher-order aberrations and maintains the prolate shape of the cornea. By utilizing both wavefront aberrometry and corneal topography, reducing induced higher-order aberrations while maintaining corneal asphericity has been reported to be achievable, and the clinical outcomes of these trials were also reported to be promising in the literature. Spherical aberrations and HOAs that disturb visual quality could be reduced [13,14], and halo and glare were reported to be decreased as well [15].
The results of the current study indicate that OPA is an effective and predictable treatment for the correction of residual refractive errors following LASIK, LASEK, and PRK procedures. UDVA changed from 20 / 50 preoperatively to 20 / 20 postoperatively, and the spherical equivalent was significantly reduced ( p < 0.005) at all postoperative time points evaluated. Further, visual outcomes in the current study were similar or superior to those from previous investigations involving corneal wavefront retreatment [5,16]. The mean refractive astigmatism remained unchanged after OPA retreatment, while the preoperative range of astigmatism decreased in the 6 months following the performance of the surgical procedure. Further studies involving the investigation of OPA retreatment for astigmatism would be helpful to assess OPA efficacy and safety in astigmatism retreatment patients.
In our study, the predictability (±0.50 D from the intended refraction) was 62.5%. Other investigations have reported a predictability of 76% for wavefront-guided ablations, 91% for topography-guided ablations, and 100% when prolate ablation was applied as the primary treatment [13,17]. Compared to the application of the same OPA laser ablation pattern in virgin corneas, pretreated corneas exhibited several drawbacks. Corneal surface irregularities after primary laser treatment or flap making, as well as subclinical decentration, may impede accurate preoperative evaluation for treatment planning and precise ablation. Moreover, altered corneal wound healing responses are a likely explanation for the limited predictability of OPA retreatment.
In this study, corneal haze, one of the complications resulting from corneal ablation, was not apparent in any patient who underwent OPA retreatment. Corneal micro-irregularities have been reported to be one of the possible causes of corneal opacity after refractive surgery [18,19]. After laser ablation, collagen fibers newly synthesized to cover the irregular surface are known to contribute to the development of corneal opacity, refractive errors, and the induction of HOA [20]. In our surgical procedure, phototherapeutic keratectomy surface smoothing and the application of mitomycin C might have prevented corneal opacity. Eyes treated with additional phototherapeutic keratectomy smoothing, performed to remove corneal micro-irregularities, were reported to be less likely to develop corneal haze and collagen fibers [21][22][23].
Although factors causing regression after LASEK, LASIK, and PRK are not clearly understood, previous investigations have revealed that regression might be caused by molecular memory in the corneal collagen fibers, or by stromal remodeling, corneal ectasia, corneal hydration, the effect of intraocular pressure on the thinned cornea, or by compensatory epithelial hyperplasia [6,24]. Prior reports have likewise indicated that variability in corneal wound healing, including keratocyte apoptosis, biomechanical properties, and other factors might contribute to the apparent changes [25,26]. Further, compared to initial treatments, secondary laser treatments might induce atypical or unpredictable corneal wound healing.
Refractive results remained stable until 6 months after OPA retreatment. In addition, the safety of the procedure was demonstrated by the postoperative loss of no more than two CDVA lines at 6 months. In one patient, postoperative CDVA decreased from 30 / 20 to 25 / 20 in conjunction with the loss of one CDVA line at 6 months. The patient was a 34-year-old female who had undergone primary LASIK surgery 13.4 years prior to retreatment. Her refractive error measurements before the primary LASIK surgery and before the retreatment were -6.34 and -2.25, respectively. However, the patient exhibited no astigmatic error before retreatment. Six months following retreatment, her refractive error was -0.50, and her UDVA had increased from 20 / 40 to 25 / 20. While ocular and corneal spherical aberrations decreased and total MTF increased 6 months postoperatively, total HOA, trefoil, and coma aberration measurements were increased. Thus, a possible explanation for decreased CDVA might be the increase in total HOA, trefoil, and coma aberration.
The Q value, the coefficient of asphericity, is one of the coefficients used to express the conic shape factor [27,28]. Corneal topography revealed no statistically significant changes in corneal asphericity (Q value) at any time point after OPA retreatment. Corneal spherical aberrations tended to decrease after OPA retreatment, but the changes were not statistically significant. Similarly, no statistically significant changes were detected in any of the other aberrations measured by wavefront analysis, with the exception of total coma aberration, which demonstrated significant postoperative increases at 3 and 6 months.
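As a concrete anchor for interpreting Q, a conicoid cornea can be described with the standard sagitta formula; the apical radius (7.8 mm) and the "typical" Q of −0.26 used below are common textbook values chosen for illustration, not values from this study:

```python
import numpy as np

def corneal_sag(r_mm, apical_radius_mm=7.8, q=-0.26):
    # Conicoid sagitta: z(r) = r^2 / (R + sqrt(R^2 - (1 + Q) r^2));
    # Q < 0 gives a prolate surface, Q = 0 a sphere, Q > 0 an oblate one.
    R = apical_radius_mm
    return r_mm**2 / (R + np.sqrt(R**2 - (1 + q) * r_mm**2))

r = np.linspace(0, 3, 4)            # semi-chord over a 6.0-mm zone
for q in (-0.26, 0.0, 0.3):         # prolate, spherical, oblate examples
    print(q, np.round(corneal_sag(r, q=q), 4))
```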
MTF and PSF are objective methods used to assess the quality of vision. The overall MTF exhibited significant increases at all postoperative time points. In the MTF plot corrected for lower-order aberrations (e.g., defocus and astigmatism), no statistically significant changes were detected; the correction of defocus and astigmatism after retreatment may thus account for the increase in total MTF. No statistically significant changes were found in HOAs, consistent with the absence of statistically significant changes in the HO MTF plot. Likewise, PSF, which is closely related to visual function at night, exhibited no significant changes after the retreatment.
Among the higher-order aberrations, coma aberration increased, while corneal asphericity and most ocular aberrations were unchanged. The OPA algorithm is based on both wavefront and corneal topographic data to treat myopia and attempts to preserve the natural shape of the cornea as much as possible. Nevertheless, worsened tear film dynamics and subclinical decentration after primary treatment can interfere with precise ablation. Further, corneal or flap surface irregularities can negatively affect the retreatment results. In addition, we used ablation design software that corrected only ocular spherical aberration. HOAs, with the exception of coma aberrations, were maintained after the retreatment, while spherical aberration decreased; corneal asphericity was likewise maintained. Overall, the visual quality of the patient is determined by various factors, including residual refractive error, higher-order aberrations, and corneal asphericity. Although coma aberration, one of the higher-order aberrations, increased, total MTF, an objective measure of visual quality, showed improvement in this study.
OPA treatment is aimed at minimizing the induction of spherical aberrations and maintaining the prolate corneal shape. Postoperative visual acuity is an indicator of treatment efficacy, and OPA retreatment provided successful visual recovery and optical quality improvement. In this study, corneal asphericity and most ocular aberrations were maintained successfully, which could contribute to the improved visual quality after retreatment.
The main drawback of the current study was the relatively small number of cases evaluated; the size of the patient group was insufficient to draw definitive conclusions. However, considering the limited number of patients who are candidates for retreatment, we believe this study provides valuable data. Further, this is the first report of OPA retreatment for residual refractive errors following laser surgery. Finally, the results indicate that OPA retreatment provided effective and reliable surgical outcomes, and objective visual performance revealed significant improvement after retreatment.
Conflict of Interest
No potential conflict of interest relevant to this article was reported.
"year": 2017,
"sha1": "9566c5d3a10741e03f6eb35fddf4d17c804b792c",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc5327170?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "9566c5d3a10741e03f6eb35fddf4d17c804b792c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Sex-dependent prefrontal cortex activation in regular cocaine users: A working memory functional magnetic resonance imaging study
Abstract Although two thirds of patients with a cocaine use disorder (CUD) are female, little is known about sex differences in the (neuro)pathology of CUD. The aim of this explorative study was to investigate sex‐dependent differences in prefrontal cortex (PFC) functioning during a working memory (WM) functional magnetic resonance imaging (fMRI) task in regular cocaine users (CUs), as PFC deficits are implicated in the shift from recreational cocaine use to CUD. Neural activation was measured using fMRI during a standard WM task (n‐back task) in 27 male and 28 female CUs and in 26 male and 28 female non‐cocaine users (non‐CUs). Although there were no main or interaction effects of sex and group on n‐back task performance, WM‐related (2‐back > 0‐back) PFC functioning was significantly moderated by sex and group: female compared with male CUs displayed higher WM‐related activation of the middle frontal gyrus (MFG), whereas female compared with male non‐CUs displayed lower WM‐related MFG activation. Additionally, WM‐related activation of the inferior frontal gyrus, insula, and putamen was negatively associated with cocaine use severity in female but not male CUs. These data support the hypothesis of sex‐dependent PFC differences in CUs and speculatively suggest that PFC deficits may be more strongly implicated in the development, continuation, and possibly treatment of CUD in females. Most importantly, the current data stress the importance of studying both males and females in psychiatry research as not doing so could greatly bias our knowledge of CUD and other psychiatric disorders.
| INTRODUCTION
Cocaine is one of the most commonly used illicit drugs in Europe, and the prevalence of use has increased in the past decade. 1 Although the prevalence of cocaine use is 2-3 times higher in men than in women, this gap is slowly closing. 2 Although women start using cocaine at a later age, they are suggested to progress more rapidly to compulsive use and have higher relapse rates compared with men. 3 A better understanding of the role of sex in the (neuro)pathology of cocaine use disorder (CUD) could pave the way for the development of sex-tailored treatment strategies. 4 The prefrontal cortex (PFC) plays a key role in cognitive control and emotion regulation, and compromised PFC functioning is thought to promote the shift from recreational to compulsive drug use. 5 The few studies that investigated the role of sex in CUDs showed lower dorsomedial and ventromedial PFC activity in female compared with male cocaine users (CUs) during cocaine cue imagery 6 and in response to negative emotional 7 and drug-salient stimuli. 8 In contrast, female CUs also showed higher dorsomedial PFC activity in response to negative emotional stimuli. 9 Moreover, pharmacological enhancement of PFC functioning (using the noradrenergic α2-receptor agonist guanfacine) mitigated stress-induced arousal and craving 10,11 and improved cognitive control 12 in CUD females, but not males. These studies suggest that PFC functioning is specifically impaired in females with a CUD in the context of emotionally salient stimuli, but it remains to be tested if these results generalize to cognitive control-related processes.
Working memory (WM) refers to the ability to temporarily maintain, update, or manipulate information in an active state and is crucially involved in cognitive control. 13,14 WM performance is generally associated with activation of a network of various PFC and parietal brain areas, with increased WM load being associated with increased activation. 13 In various psychiatric disorders, stronger WM-related (high WM load > low WM load) PFC activation (compared with a control population) is suggested to reflect compensatory but inefficient neural information processing, leading to deficits in WM performance. 15 WM tasks generally require sustained attention, information storage, memory for temporal order as well as the updating and manipulation of information. 16,17 As such, WM-related differences in neural activation in CUD populations may reflect differences in any of these functions. Neuroimaging research demonstrated WM-related PFC deficits in CUD, although both higher WM-related PFC activation (middle frontal gyrus [MFG]) 18 and lower PFC activation (cingulate gyrus, middle, superior, and inferior frontal gyrus) 19 have been reported. Research in other substance use disorders (SUDs) reported similar mixed findings, including WM-related (dorsomedial) PFC overrecruitment in alcohol use disorder (AUD) patients 20 and cannabis users. 21 Moreover, higher WM-related (dorsolateral) PFC activity has been reported to predict cannabis use. [22][23][24] In contrast, lower WM-related PFC activation (middle and superior frontal gyrus, precentral and postcentral gyrus) has also been demonstrated in AUD patients, 25,26 with lower WM-related activation of the rostral PFC and ventrolateral PFC predicting relapse to alcohol use. 27 A possible explanation of previous conflicting findings could be that the majority of these studies did not account for sex differences.
Although most neuroimaging WM meta-analyses in non-substance-using populations did not include sex in their analyses, a 2014 meta-analysis demonstrated higher WM-related limbic and (middle and medial) PFC activity in females but higher WM-related parietal activity in males. 28 As such, omitting sex from the analyses could greatly obscure the interpretation of WM-related neural deficits in psychiatric disorders, including CUD. 29 To date, only one study reported on sex differences in WM-related PFC activation in CUD. 19 Although this study did not demonstrate any significant group (CUD vs. controls) by sex interaction effects on WM-related PFC activation, this was likely due to insufficient statistical power (i.e., inclusion of three female vs. 16 male CUD patients). Based on the earlier described findings of lower dmPFC activation during the processing of emotionally salient stimuli in female compared with male CUD patients, [6][7][8] and the finding that pharmacological enhancement of PFC functioning improved cognitive control while reducing arousal and craving in female CUD patients only, [10][11][12]30 sex-dependent differences in WM-related PFC activation can be expected as well.
The main aim of this study was to explore sex-dependent differences in WM-related PFC activation in a relatively large sample of regular CUs (27 males and 28 females) and non-cocaine users (non-CU: 26 males and 28 females) using a standardized WM paradigm (the n-back task 31 ). It was hypothesized that (i) CUs would show higher WM-related (2-back > 0-back; 2-back > 1-back) PFC activation compared with non-CUs, (ii) females would show higher WM-related PFC activation compared with males, and (iii) WM-related PFC activation would be highest in female CUs, reflecting inefficient WM-related processes. Of note, because of the mixed and limited previous findings, the direction of these hypothesized effects is highly speculative.
| Participants
This study is part of a large project designed to investigate the role of sex in the neurocognitive mechanisms underlying CUD. Fifty-four regular CUs and 54 matched non-CUs who completed the n-back functional magnetic resonance imaging (fMRI) task were included in this study. All participants were between 18 and 45 years of age and free from any MRI contraindications. CUs used cocaine (intranasally) at least four times a month in the past 6 months. Non-CUs were excluded if they smoked regularly (at least once per week), had an Alcohol Use Disorders Identification Test (AUDIT) 32 score > 12, used cocaine more than five times in their life, or used illicit substances more than five times in the past 6 months. All participants provided written informed consent.
| Assessment of substance use and psychological functioning
Severity of depressive symptoms was assessed using the Beck Depression Inventory (BDI-II 33 ), state and trait anxiety were assessed using the State and Trait Anxiety Inventory (STAI 34 ), attention deficit hyperactivity disorder (ADHD) symptom severity was measured using the ADHD Rating Scale (ADHD-RS 35 ), and impulsivity was assessed with the Barratt Impulsiveness Scale (BIS-11 36 ) in all participants. The following characteristics of substance use were assessed in CUs only: severity of cocaine use and related problems in the past 12 months was assessed using the Drug Use Disorder Identification Test for cocaine (DUDIT 37 ), cocaine use (grams and days per month) in the 28 days prior to study participation was assessed using the Time Line Follow-Back procedure, 38 and onset age of regular use was assessed using an in-house questionnaire. Moreover, motivation to change cocaine use was assessed using the Readiness to Change Questionnaire (RCQ 39 ), smoking behavior (number of smoking days per week and cigarettes per day) was assessed using an in-house questionnaire, and severity of cannabis use was assessed using the Cannabis Use Disorder Identification Test-Revised (CUDIT-R). 40 Current DSM-5 symptoms of CUD, cannabis use disorder, and AUD were assessed using a self-reported questionnaire based on the SCID. 41
| Procedures
Participants were recruited through social media and local advertisements in the Amsterdam area, the Netherlands. After signing informed consent, participants were screened on inclusion and exclusion criteria.
On the day of testing, participants first completed the questionnaires, after which the MRI scan was acquired. Participants were instructed to abstain from any drug use in the 24 h preceding the MRI scan.
| The n-back task
The n-back task 22 consisted of alternating blocks with three load levels: 0-back, 1-back, and 2-back. During each block, participants viewed a series of 15 letters in sequence, including five targets. Blocks lasted 30 s (each stimulus lasted 2 s), and the interblock interval was 5 s, during which the block instructions were repeated. In 0-back blocks, participants were instructed to indicate when the target letter "X" appeared on the screen. In 1-back blocks, participants had to decide if the letter on the screen was identical to the previous one. In 2-back blocks, targets were those letters identical to the letter presented two trials back. Participants were instructed to press a right response box button for targets (right index finger) and a left button for nontargets (left index finger). No additional speed or accuracy instructions were given. Each load level was repeated four times, resulting in a 7-min task of 12 blocks. Prior to scanning, all participants first completed a practice block of the n-back task outside of the scanner.
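A hypothetical generator for a single block with this structure is sketched below; the letter pool and seed are arbitrary, and a full stimulus script would additionally guard against incidental (unmarked) n-back matches:

```python
import random

def make_block(n_back, length=15, n_targets=5, seed=None):
    rng = random.Random(seed)
    pool = "BCDFGHJKLMNPQRST"            # arbitrary pool; excludes "X"
    letters = [rng.choice(pool) for _ in range(length)]
    if n_back == 0:
        targets = sorted(rng.sample(range(length), n_targets))
        for i in targets:
            letters[i] = "X"             # 0-back: respond to the letter X
    else:
        targets = sorted(rng.sample(range(n_back, length), n_targets))
        for i in targets:                # ascending order keeps chained
            letters[i] = letters[i - n_back]  # targets consistent
    return letters, targets

letters, targets = make_block(2, seed=1)
print(letters, targets)
print(len(targets), "targets; block duration =", 2 * len(letters), "s")
```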
| fMRI acquisition and preprocessing
Data were preprocessed using fMRIPrep 1.3.2 42 : the anatomical scans were corrected for intensity nonuniformity, skull-stripped, spatially normalized, and segmented into cerebrospinal fluid, white matter, and gray matter. The functional data were corrected for susceptibility distortions using a deformation field and subsequently coregistered, motion corrected, and smoothed. ICA-AROMA 43 was used to automatically remove motion artifacts, and data were resampled to standard space. Further details on the preprocessing pipeline can be found in the Supporting Information.
fMRI data were further analyzed using SPM12 (http://www.fil.ion.ucl.ac.uk/spm). First-level models included separate regressors for the 0-back, 1-back, and 2-back blocks. These regressors were convolved with a canonical hemodynamic response function. A high-pass filter (1/128 Hz) was included in the first-level model to correct for low-frequency signal drift. The contrasts for the 0-back, 1-back, and 2-back blocks were subsequently entered into a second-level model to test for the main and interaction effects of sex, group, and n-back load.
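As a rough translation of this first-level setup into code, the sketch below uses nilearn rather than SPM12; the repetition time, file path, and block onsets are assumptions inferred from the task timing, not the study's actual acquisition parameters:

```python
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Block timing implied above: a 5-s instruction interval, then a 30-s
# block, with the three load levels cycling four times (12 blocks).
events = pd.DataFrame({
    "onset":      [5 + 35 * i for i in range(12)],
    "duration":   [30] * 12,
    "trial_type": ["zero_back", "one_back", "two_back"] * 4,
})

model = FirstLevelModel(
    t_r=2.0,               # assumed repetition time (not reported here)
    hrf_model="spm",       # canonical HRF, as in SPM12
    high_pass=1.0 / 128,   # matches the 1/128 Hz high-pass filter
    smoothing_fwhm=None,   # data were already smoothed during fMRIPrep
)
model = model.fit("sub-01_task-nback_bold.nii.gz", events=events)

# The WM-load contrasts carried to the second level in the paper.
z_2b_0b = model.compute_contrast("two_back - zero_back")
z_2b_1b = model.compute_contrast("two_back - one_back")
```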
| Statistical analyses
Demographics, scores on (clinical) questionnaires, and n-back behavioral performance were compared between groups with standard univariate analysis of variance (ANOVA) in SPSS for Windows (v.26.0).
Differences between groups and sexes in age, alcohol use severity (AUDIT), depressive symptoms (BDI), state and trait anxiety (STAI), impulsivity (BIS-11), and ADHD symptom severity (ADHD-RS) were assessed using 2 × 2 ANOVAs, testing both main and interaction effects. One-way ANOVAs were subsequently used to test sex differences within the CU group in cocaine use (grams per month and days per month), cocaine use severity (DUDIT), onset age of regular cocaine use, and cannabis use severity (CUDIT). Chi-square tests were additionally used to test for sex differences in the severity of cocaine, alcohol, and cannabis use disorder (mild, moderate, or severe according to the DSM-5 criteria), the prevalence of smoking (percent weekly smokers and percent daily smokers), and motivation to change cocaine use. Sex and group differences in n-back performance were assessed in terms of mean reaction time of correct responses and accuracy (proportion correct), using repeated measures ANOVAs.
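A minimal Python analogue of these 2 × 2 (group by sex) ANOVAs (run in SPSS in the study) might look as follows; the file name and column names are placeholders, and Type II sums of squares are used here whereas SPSS defaults to Type III:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format file with columns 'bdi', 'group' (CU/non-CU),
# and 'sex' (male/female).
df = pd.read_csv("participants.csv")

# 2 x 2 factorial ANOVA: main effects of group and sex plus their
# interaction, as in the demographic/questionnaire comparisons above.
model = smf.ols("bdi ~ C(group) * C(sex)", data=df).fit()
print(anova_lm(model, typ=2))

# If the interaction is significant, follow up within each sex,
# mirroring the one-way follow-up tests described above.
for sex, sub in df.groupby("sex"):
    m = smf.ols("bdi ~ C(group)", data=sub).fit()
    print(sex, anova_lm(m, typ=2).loc["C(group)", "PR(>F)"])
```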
To test for main and interaction effects of group, sex, and n-back load (2-back > 0-back and 2-back > 1-back), a whole-brain analysis was performed, with mean framewise displacement (FD) values for each subject as a covariate of no interest to account for potential motion effects. Whole-brain analyses were family-wise error (FWE) rate corrected at the cluster level (p < 0.05), with an initial height threshold at the voxel level of p < 0.001. This analysis was repeated to test whether significant effects were still present after correcting for potential confounding variables. Variables were treated as confounders when there was a significant interaction effect between group and sex on these variables. This was tested for age, education, AUDIT, BDI, ADHD-RS, STAI, and BIS. When a significant whole-brain interaction effect was found, the nature of this interaction was explored by performing pairwise comparisons (between sexes within groups and between groups within sexes) using a small volume correction, where the mask of the significant cluster served as the small volume. Additionally, the Marsbar toolbox (http://marsbar.sourceforge.net) was used to extract the mean activity of the significant cluster(s) for visualization purposes.
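A rough nilearn analogue of this thresholding and cluster-extraction workflow is sketched below; nilearn's options differ from SPM12's random-field-theory cluster-level FWE correction, so this is an approximation, and the image inputs and cluster extent are placeholders:

```python
from nilearn.glm import threshold_stats_img
from nilearn.maskers import NiftiMasker

def threshold_and_extract(z_map, contrast_maps, cluster_mask,
                          cluster_extent=50):
    # Voxel-level height threshold p < .001, as above; only clusters
    # larger than `cluster_extent` voxels survive. The extent would be
    # chosen so as to control the cluster-level error rate.
    thresholded, cutoff = threshold_stats_img(
        z_map, alpha=0.001, height_control="fpr",
        cluster_threshold=cluster_extent,
    )
    # Marsbar-style extraction: mean activity within the significant
    # cluster from each subject's contrast map, for plotting.
    masker = NiftiMasker(mask_img=cluster_mask)
    cluster_means = masker.fit_transform(contrast_maps).mean(axis=1)
    return thresholded, cutoff, cluster_means
```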
A second whole-brain analysis was performed within the CU group only, with cocaine use (grams per month) and cocaine use severity (DUDIT scores), as well as their interactions with sex, as regressors of interest, correcting for variations in FD, to test whether cocaine use and cocaine use severity were associated with WM-related (2-back > 0-back; 2-back > 1-back) brain activity in a sex-dependent manner. This analysis was repeated to test whether significant effects were still present after correcting for potential confounding variables.
CUs met the DSM-5 criteria for AUD (males: 85%, females: 59%), whereas less than one third of the CUs met the DSM-5 criteria for cannabis use disorder (males: 30%, females: 15%). There were no sex differences in the amount of cocaine used per month, readiness to change cocaine use, tobacco use, the prevalence of a DSM-5 diagnosis of CUD, cannabis use disorder, or AUD, cocaine use severity (DUDIT), alcohol use severity (AUDIT), or cannabis use severity (CUDIT-R). The only significant sex difference was that CU females reported using cocaine on fewer days per month compared with CU males. Therefore, days of cocaine use per month was treated as a confounder in an exploratory within-group fMRI analysis. See Table 1 for detailed substance use characteristics and statistics.
CUs and non-CUs had similar age, educational level, and trait anxiety scores, but CUs had significantly higher AUDIT, state anxiety, impulsivity (BIS attention and BIS planning), and ADHD-RS (childhood and past 6 months) scores. Additionally, females had significantly higher state anxiety scores compared with males, whereas males reported higher childhood ADHD-RS than females. There was a significant group by sex interaction effect on depressive symptoms (BDI scores), impulsivity (BIS total and motor subscale), and ADHD-RS scores in the past 6 months. Follow-up tests revealed that although both male CUs and female CUs scored higher on these variables compared with non-CU males and non-CU females, female CUs had significantly higher BDI, BIS total, and BIS motor scores compared with male CUs, while no such sex differences were present within non-CUs. Because of these differences, BDI, BIS motor, BIS total, and adult ADHD-RS scores were treated as confounders in the fMRI analyses. See Table 2 for all values and statistics.
| Behavioral results n-back task
There were no significant differences between groups or sexes in n-back performance. There was a significant main effect of n-back load, with longer reaction times and lower accuracy at higher loads (Figure 1 and Table S1).
| WM-load, group and sex interaction effects
WM-related brain activation (2-back > 0-back or 2-back > 1-back) did not differ between groups or sexes. For the 2-back > 0-back contrast, there was a significant group by sex interaction in the left dorsal MFG (dMFG); see Table 3 and Figure 2. Pairwise comparisons on this specific cluster demonstrated that non-CU males had higher WM-related activation in this region than non-CU females (p FWE-corrected on peak level = 0.002), although no such difference was found between CU males and females. In addition, non-CU males displayed higher WM-related activation in this region compared with CU males (p FWE-corrected on peak level = 0.006), whereas CU females displayed higher activation in this region compared with non-CU females (p FWE-corrected on peak level = 0.002). Adding BDI, BIS-total, BIS-motor, and ADHD symptom severity scores as regressors of no interest in the whole-brain analysis did not alter the outcomes of these analyses (results not reported).
There was no significant group by sex interaction effect for the 2-back > 1-back contrast.
| Within CU group whole brain regression analyses
Whole brain regression analyses with cocaine use (grams of cocaine per month) and cocaine use severity (DUDIT scores) in the CU group demonstrated that cocaine use was negatively associated with WM-related activation of the vermis (2-back > 0-back) and right calcarine sulcus/bilateral cuneus (2-back > 0-back and 2-back > 1-back).
Cocaine use was positively associated with WM-related activation of the left cerebellum (2-back > 0-back and 2-back > 1-back) and the lingual gyrus and vermis (2-back > 1-back). The association between cocaine use and WM-related activation of the cerebellum, vermis, and occipital cortex was significantly moderated by sex: the association was positive in males but negative in females. In addition, although cocaine use severity was not associated with any WM-related brain activation as a main effect, the association between cocaine use severity and WM-related (2-back > 1-back) activation in the left insula, inferior frontal gyrus, cerebellum, and vermis was also moderated by sex, with a negative association in females, but not in males. Adding days of cocaine use per month as a confounder to the model did not change these results. See Table 4 and Figure 3.
WM-related (2-back > 0-back; 2-back > 1-back) PFC activity was hypothesized to be higher in CUs compared with non-CUs, with larger differences in female compared with male CUs. 5 There was no main effect of group or sex on brain activity or behavior; however, we did observe a significant group by sex interaction effect in WM-related (2-back > 0-back) left dMFG activation: CU females displayed higher dMFG activity compared with non-CU females, whereas CU males displayed lower dMFG activity compared with non-CU males.
FIGURE 1 Main and interaction effects of working memory (WM) load, group, and sex on reaction time and percentage correct during the n-back task. Although there was a main effect of WM load on reaction time (increase) and percentage correct (decrease), these effects were not moderated by sex, group, or both. CU, cocaine user; WM, working memory
FIGURE 2 Main and interaction effects of WM load, group, and sex. In red, brain regions activated with increasing WM load. In blue, brain regions deactivated with increasing WM load. In green, group differences in WM-related brain activation that are significantly moderated by sex. Mean activity of the whole cluster is extracted and plotted for visualization purposes. The error bars represent the 90% confidence interval. CU, cocaine user; WM, working memory
TABLE 3 Main and interaction effects of group and sex on working memory-related whole brain activation. Note: All whole-brain analyses were family-wise error (FWE) rate corrected at the cluster level (p < 0.05), with an initial height threshold at the voxel level of p < 0.001.
Furthermore, WM-related activation of the vlPFC (including the inferior frontal gyrus, insula, and putamen) was negatively associated with
cocaine use severity in female CUs, but not in males. Heightened WM-related PFC activation is suggested to reflect compensatory but inefficient information processing, leading to WM deficits. 15 As such, the current data support our hypothesis that PFC deficits are more strongly implicated in the neuropathology of CUD in females compared with males. Importantly, these findings highlight an urgent need to further unravel the role of sex in the mechanisms underlying CUD.
Although heightened WM-related PFC activation in CUD (compared with a control group) may reflect compensatory (but inefficient) mechanisms, 15,18 heightened PFC activation in recreational CUs (in the absence of behavioral deficits) has also been suggested to reflect resilience to stimulant dependence. In line with this, heightened WM-related activation of the ventrolateral and ventromedial PFC has been shown to protect against relapse in alcohol-dependent patients. 15 In the current study, inferior frontal gyrus activity was negatively associated with cocaine use severity in female CUs only, perhaps reflecting sex-dependent resilience against the development of compulsive cocaine use. It should be noted, though, that the majority of CUs included in our study had already transitioned from recreational to compulsive cocaine use. Alternatively, although WM-related dMFG activation was unrelated to cocaine use (severity), the negative association between cocaine use severity and WM-related ventrolateral PFC recruitment may reflect a sex-specific (neurotoxic) effect of cocaine use on the brain, supporting the hypothesis that females are more vulnerable to the (neurotoxic) effects of substances, including cocaine and alcohol. 44 Unexpectedly, dMFG activation was higher in non-CU males compared with non-CU females. This is remarkable, as females generally display higher WM-related middle and medial PFC activity compared with males. 28 Interestingly, although various PFC regions are shown to be more active in females than in males during WM-related tasks, 28 there seems to be a sex-dependent effect on the lateralization of WM-related dMFG activity as well: higher WM-related activation of the right MFG was found in females compared with males, whereas higher WM-related activation of the left MFG was found in males, 28 which is in line with our finding in non-CUs. Sex differences in WM-related brain functioning and lateralization are suggested to result from a combination of prenatal hormonal (testosterone) exposure 45 and gender-related factors later in life. 46 Consequently, the current findings may be the result of neurodevelopmental differences, reflecting a sex-dependent predisposition to CUD rather than a consequence of cocaine use.
It is important to note that no significant group or sex differences were found in behavioral n-back performance. However, the n-back task is generally considered to be less reliable for assessing behavioral WM deficits. 47 As such, the behavioral implications of the current findings remain speculative, and future research may benefit from including a more reliable WM task outside the MRI scanner to assess behavioral WM performance.
Although the causal interplay between PFC functioning, CUD, and sex can only be established with future longitudinal research, the current findings suggest that PFC deficits are more strongly implicated in the development, continuation, and perhaps also treatment of CUD in women. Because pharmacological enhancement of PFC control (using the noradrenergic α2-receptor agonist guanfacine) 30 reduced arousal and craving and improved cognitive control specifically in women with a CUD, 10-12 women with a CUD may benefit more from interventions targeted at improving PFC-related cognitive and emotional control processes.
Inconsistent results from previous SUD studies 18-21,25,26 may be explained by highly variable but mainly low numbers of female participants. An important strength of the current study is that it was specifically set up to elucidate sex differences in PFC functioning in regular CUs and non-CUs. As such, CU males and females were matched on most cocaine-use-related variables. Although we focused on including non-treatment-seeking CUs, the majority of all included CUs met the DSM-5 criteria of CUD and were actively trying to change their cocaine use based on the readiness-to-change questionnaire. Therefore, the current findings likely generalize to treatment-seeking CUs as well.
FIGURE 3 Working memory load-related activation (2-back > 1-back) of the insula, putamen, and inferior frontal gyrus (in red) is negatively associated with drug use severity in cocaine-using women but not in cocaine-using men. Nonetheless, the brain activation patterns are in a similar range as those of non-cocaine-using controls. Mean activity of the whole cluster is extracted and plotted for visualization purposes. The error bars represent the 90% confidence interval. CU, cocaine user
A limitation of the current study is that we only tested for sex differences without taking gender into account. According to the Sex and Gender Equity in Research (SAGER) guidelines, gender is an equally important determinant of health and well-being as sex. 48 Although the terms sex and gender are often confused in the scientific literature, gender refers to the socially constructed roles, behaviors, and identities of female, male, and gender-diverse people, whereas sex refers to a set of biological attributes in humans and animals that are associated with physiological features. Future research should take gender into account as a potential moderating factor in the (neuro)pathology of addiction, for example, by calculating a gender index based on a variety of psychosocial gender-related variables. 49 Moreover, CUD is generally associated with polysubstance use. Indeed, approximately two thirds of CUs met the DSM-5 criteria of an AUD and almost one third met the DSM-5 criteria of a cannabis use disorder. Although there were no sex differences in the prevalence of comorbid SUDs, we have previously demonstrated that deficits in PFC structure are strongly associated with the amount of polysubstance use across regular CUs. 50 Hence, we cannot fully exclude potential confounding effects of other substances on PFC functioning in the current study. In this study, we instructed CUs to remain abstinent for 24 h prior to study participation. We decided not to perform a urine screening to check this, as cocaine metabolites can be detected in urine up to 6 days after the last use in regular CUs, 41 which is much longer than the duration of the drug's psychopharmacological effects. A urine test would, therefore, not have been a very accurate measure of intoxication in this specific population. Instead, we used the time-line follow-back procedure to assess cocaine (and other substance) use prior to the experiment, which is generally considered to be a highly reliable method to assess information about substance use, including cocaine use, in both treatment and nontreatment seeking populations. 52 Nonetheless, we cannot fully exclude the possibility that some CUs were (still) under the influence of some substances.
In conclusion, the current study provides important first evidence for sex-dependent differences in WM-related PFC recruitment among regular CUs. Although speculative, these data suggest that PFC deficits are more strongly implicated in the development, continuation, and possibly treatment of CUD in females compared with males. Most importantly, the current findings highlight the crucial need for (i) including both males and females in (pre)clinical addiction research and (ii) disaggregating (neuroimaging) findings for males and females separately. 48 Doing so will not only lead to a better understanding of (sex differences in) the (neuro)pathology of addiction but could also pave the way for the development of sex-tailored treatment of SUDs.
ACKNOWLEDGMENTS
The authors would like to thank Nutsa Nanuashvili, Daantje de Bruin, and Annel Koomen for their assistance in data collection.
FINANCIAL DISCLOSURE
The project was funded by an Amsterdam Brain and Cognition Talent Grant. None of the authors reported biomedical financial interests or potential conflicts of interest.
AUTHOR CONTRIBUTIONS
AMK was responsible for the study concept, design, data acquisition, analysis, and preparing the first draft of the manuscript. JC and RR provided critical revision of the manuscript for important intellectual content. All authors critically reviewed content and approved final version for publication.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2021-02-03T06:17:53.663Z | 2021-01-28T00:00:00.000 | {
"year": 2021,
"sha1": "baaea1c4ff46f140e178728bc00b24925a952086",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/adb.13003",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "adb1bca55d90625025affc80bf15acfad23b4075",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
199135767 | pes2o/s2orc | v3-fos-license | Existence of Multiple Positive Solutions for Third-Order Three-Point Boundary Value Problem
In this paper, we study the existence of positive solutions for a class of third-order three-point boundary value problems. By employing a fixed point theorem on a cone, some new criteria ensuring that the three-point boundary value problem has at least three positive solutions are obtained. An example illustrating our main result is given. Moreover, some previous results are improved significantly in our paper.
Introduction
As we all know, the earliest boundary value problem studied was the Dirichlet problem, in which one seeks a solution of the Laplace equation. Boundary value problems are common throughout physics, for example in connection with the wave equation. With the development of the theory of boundary value problems, many scholars began to pay attention to higher-order problems. Third-order three-point problems have a wide range of applications in mathematics and physics [1] [2] [3] [4] [5]. Many works on third-order boundary value problems have been established. In [6] [7] [8] [9] [10], the authors studied the third-order three-point boundary value problem and proved that the model has at least one positive solution. Recently, many papers have dealt with positive solutions of boundary value problems for nonlinear differential equations with various boundary conditions. For example, Anderson [11] obtained some existence results for positive solutions for one such system, and Yao [12] considered another. With the development of third-order boundary value problems, Guo et al. [2] considered the existence of a positive solution to the third-order three-point boundary value problem (1.3), under the assumption that the weight function is nonnegative and not identically zero. By using the Guo-Krasnoselskii fixed point theorem, they proved that system (1.3) has at least one positive solution.
To the best of our knowledge, few papers in the literature address three positive solutions of third-order three-point boundary value problems. Motivated greatly by the above-mentioned excellent works, in this paper we consider the model (1.4). Obviously, this model is new because the nonlinearity f depends not only on the unknown function but also on the derivatives of the unknown function. In particular, system (1.2) is a special case of system (1.4). By the properties of the Green's function, existence results for at least three positive solutions of the third-order three-point boundary value problem are established by a new method which is different from the method in [13]. The paper is organized as follows. In Section 2, we present some notation and lemmas. In Section 3, we give the main results. In Section 4, an example is given to illustrate the main results of this paper.
Then K is called a cone of E. Definition 2.2. Suppose K is a cone. If the map α : K → [0, +∞) is continuous and satisfies α(tx + (1 − t)y) ≤ tα(x) + (1 − t)α(y) for all x, y ∈ K and t ∈ [0, 1], then the map α is a nonnegative continuous convex function on K.
Suppose K is a cone. If the map ϕ : K → [0, +∞) is continuous and satisfies ϕ(tx + (1 − t)y) ≥ tϕ(x) + (1 − t)ϕ(y) for all x, y ∈ K and t ∈ [0, 1], then the map ϕ is a nonnegative continuous concave function on K.
Under these assumptions, we have the following lemma.
For positive real numbers a, b, c, d, we define the convex sets K_c = {x ∈ K : ‖x‖ < c} and K(ϕ, b, d) = {x ∈ K : b ≤ ϕ(x), ‖x‖ ≤ d}; then T has at least three fixed points x₁, x₂, and x₃.
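The garbled lemma above appears to be the classical Leggett-Williams fixed point theorem; for the reader's convenience, here is its standard textbook statement, which may differ in minor details from the authors' Lemma 2.4:

```latex
% Standard statement of the Leggett--Williams fixed point theorem
% (textbook version; the authors' Lemma 2.4 may differ in details).
\begin{theorem}[Leggett--Williams]
Let $K$ be a cone in a Banach space $E$, and for $c>0$ set
$K_c=\{x\in K:\ \|x\|<c\}$. Let $\varphi$ be a nonnegative continuous
concave functional on $K$ with $\varphi(x)\le\|x\|$ for $x\in\overline{K_c}$,
and define $K(\varphi,b,d)=\{x\in K:\ b\le\varphi(x),\ \|x\|\le d\}$.
Suppose $T:\overline{K_c}\to\overline{K_c}$ is completely continuous and
there exist constants $0<a<b<d\le c$ such that
\begin{enumerate}
  \item[(i)] $\{x\in K(\varphi,b,d):\ \varphi(x)>b\}\neq\emptyset$ and
             $\varphi(Tx)>b$ for $x\in K(\varphi,b,d)$;
  \item[(ii)] $\|Tx\|<a$ for $\|x\|\le a$;
  \item[(iii)] $\varphi(Tx)>b$ for $x\in K(\varphi,b,c)$ with $\|Tx\|>d$.
\end{enumerate}
Then $T$ has at least three fixed points $x_1,x_2,x_3$ satisfying
$\|x_1\|<a$, $b<\varphi(x_2)$, and $\|x_3\|>a$ with $\varphi(x_3)<b$.
\end{theorem}
```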
The Existence of Three Positive Solutions
We define the norm ‖x‖ = max{ max_t |x(t)|, max_t |x′(t)|, max_t |x″(t)| }.
Proof. From the fact that f is a nonnegative continuous function and from Lemma 2.2, we know that
(Tx)(t) = ∫ G(t, s) h(s) f(s, x(s), x′(s), x″(s)) ds + ∫ g(s) h(s) f(s, x(s), x′(s), x″(s)) ds ≥ 0.
According to the Arzelà-Ascoli theorem, we conclude that T is a completely continuous operator.
For convenience, we introduce the constants used below; then the system (1.4) has at least three positive solutions x₁, x₂, and x₃ satisfying the corresponding bounds. Proof. For x ∈ K, we first show that condition (1.7) of Lemma 2.4 holds.
(Tx)(t) ≤ ∫ G(t, s) h(s) f(s, x(s), x′(s), x″(s)) ds + t ∫ G(t, s) h(s) f(s, x(s), x′(s), x″(s)) ds ≤ (d/A) · A = d.
Hence it is easy to prove the required estimate. From assumption (H₂), we obtain the corresponding condition of Lemma 2.4, and from assumption (H₃) we obtain the remaining one.
Example
Example 4.1. Consider the boundary value problem x‴(t) + t f(t, x(t), x′(t), x″(t)) = 0, with a suitable piecewise nonlinearity f(t, u, v, w). All the conditions of Theorem 3.1 are satisfied, so the system has at least three positive solutions.
Conclusion
In this paper, applying a fixed point theorem on a cone, we investigate the existence of positive solutions for a class of third-order three-point boundary value problems, which form a more general system. We obtain that the boundary value problem has at least three positive solutions.
Here f is a Carathéodory function. The author proved that (1.2) has at least one positive solution by the Krasnoselskii fixed point theorem.
The integrals ∫ h(s) f(s, x(s), x′(s), x″(s)) ds and ∫ g(s) h(s) f(s, x(s), x′(s), x″(s)) ds satisfy the required estimate. Thus condition (iii) of Lemma 2.4 is also satisfied. From the above facts, the proof of Theorem 3.1 is completed. | 2019-08-02T20:21:53.841Z | 2019-07-10T00:00:00.000 | {
"year": 2019,
"sha1": "a38f1fc199b9642183599c07d3a3034384755bfb",
"oa_license": null,
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=93727",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3b9413c7285eb58ccb10b5e49fc06624d8be9163",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
22479311 | pes2o/s2orc | v3-fos-license | The insulator/Chern-insulator transition in the Haldane model
We study the behavior of several physical properties of the Haldane model as the system undergoes its transition from the normal-insulator to the Chern-insulator phase. We find that the density matrix has exponential decay in both insulating phases, while having a power-law decay, more characteristic of a metallic system, precisely at the phase boundary. The total spread of the maximally-localized Wannier functions is found to diverge in the Chern-insulator phase. However, its gauge-invariant part, related to the localization length of Resta and Sorella, is finite in both insulating phases and diverges as the phase boundary is approached. We also clarify how the usual algorithms for constructing Wannier functions break down as one crosses into the Chern-insulator region of the phase diagram.
I. INTRODUCTION
The band structure of any insulator is characterized by a certain discrete topological index known as the Chern invariant 1 which encodes information about the phase evolution of the Bloch functions around the boundary of the Brillouin zone (see Sec. II). Insulators can thus be classified as "normal insulators" or "Chern insulators" depending on whether or not the Chern invariant vanishes. The latter case requires breaking of time-reversal symmetry, so insulating ferromagnets and ferrimagnets could be candidates for Chern insulators. While models for Chern insulators can be constructed theoretically, 2 no experimental realizations are yet known to occur in nature. A Chern insulator, if found to exist, would have the remarkable feature of showing a quantum Hall effect in the absence of a macroscopic magnetic field. Hence, Chern insulators may also be referred to as "quantum Hall insulators." Although the basic theory of Chern insulators was formulated in the 1980's, not much is known theoretically about the general features of the electronic band structure of such insulators. In the last 15 years or so, the theory of normal insulators has been greatly enriched by a deeper understanding of electric polarization, 3 orbital magnetization, 4,5 linear-scaling theory, methods for constructing Wannier functions, 6 the spatial decay of Wannier functions and of the one-particle density matrix, 7 and related measures of localization. [8][9][10] However, all of this work implicitly assumed the presence of time-reversal symmetry, and thus was limited to the case of normal insulators. It is therefore of considerable interest to revisit many of these same issues, and to reconsider whether, or how, the previous conclusions generalize to the case of Chern insulators. For example, what are the decay properties of the one-particle density matrix in a Chern insulator? Can Wannier functions be constructed, and if not, in what way do the usual construction procedures fail? If one inspects closely related measures of localization such as the gauge-invariant part of the Wannier spread functional, 6 the localization length of Resta and Sorella,8 or the second-cumulant moment of the electron distribution, 9,10 does the localization remain finite in a Chern insulator, or does it diverge?
Furthermore, an intriguing feature of Chern-insulating systems is that a phase boundary separating the Chern insulator from a normal insulator may occur. Such a normal-insulator/Chern-insulator (NI/CI) transition is an example of a class of topological transitions that have become of considerable current interest, at which a topological invariant changes discontinuously across a phase boundary. 11 Such transitions normally appear within the theory of correlated states. 12,13 The NI/CI transition, on the other hand, occurs in a non-interacting context, and can therefore be studied at a level of detail, and tested with numerical calculations, in a way that is difficult for correlated models. In addition, Chern insulators are closely related to so-called "spin Hall insulators," which have also been the subject of a recent surge of interest. 14 Thus, there is a special opportunity associated with the study of this particular topological insulator/insulator transition.
In this paper, we investigate several aspects of the electronic structure near the NI/CI transition in the twodimensional Haldane model. 2 We choose the Haldane model because it is one of the simplest models that exhibits a quantum-Hall insulator state. The underlying idea of this model is to break time-reversal symmetry so that the transverse conductivity σ xy , which is odd under time-reversal, can become nonzero. Usually, the quantum Hall effect is associated with a gap at the Fermi level resulting from a splitting of the spectrum into Landau levels by a macroscopic magnetic field. In the Haldane model, however, there is a degeneracy between the valence and conduction bands at certain high-symmetry k-points when both inversion and time-reversal symmetry are present. If a gap is opened by the breaking of inversion symmetry, the system becomes a normal insulator. However, if the gap opens as a result of breaking time-reversal symmetry, the system turns into a Chern insulator.
We have organized this paper as follows. In Sec. II we introduce the Chern invariant, which will henceforth be used to classify the state of our system. The basics of the Haldane model are reviewed in Sec. III. Thereafter, we focus on the problems occurring when constructing Wannier functions for Chern insulators (Sec. IV), the behavior of the spread functional (Sec. V), and the decay of the density matrix (Sec. VI) as the system transitions from the normal insulator phase into the Chern-insulator phase. We conclude and give an outlook in Sec. VII.
II. THE CHERN INVARIANT
We restrict ourselves to the case of a one-particle Hamiltonian H having Bloch eigenvalues ǫ nk and eigenstates |ψ nk . The cell-periodic part of the Bloch function u nk (r) = e −ik·r ψ nk (r) is then an eigenfunction of the effective Hamiltonian H(k) = e −ik·r He ik·r . We consider electrons to be spinless, but factors of two can easily be inserted for non-interacting spin channels.
We can now define the Chern invariant 1 for an insulator, defined here as a system with a gap in the single-particle density of states separating occupied and unoccupied states, to be

C = (i/2π) Σ_n^occ ∫_BZ dk ⟨∂_k u_nk| × |∂_k u_nk⟩,   (1)

where BZ denotes an integral over the Brillouin zone and ∂_k = ∂/∂k. The cross product notation in Eq. (1) implies, for example, that C_z contains terms involving ⟨∂_kx u_nk|∂_ky u_nk⟩ − ⟨∂_ky u_nk|∂_kx u_nk⟩. For non-interacting electrons, 15,16 the Chern invariant is quantized in units of reciprocal-lattice vectors G. For the case of a two-dimensional system with only a single occupied band, Eq. (1) becomes

C = (i/2π) ∫_BZ d²k ( ⟨∂_kx u_k|∂_ky u_k⟩ − ⟨∂_ky u_k|∂_kx u_k⟩ ).   (2)

In two dimensions the Chern invariant is a pseudo-scalar called the Chern number which can only take integer values. Alternatively, we can write the Chern number in terms of the Berry connection A(k) = i⟨u_k|∂_k|u_k⟩ and the Berry curvature Ω(k) = ∇_k × A(k) as

C = (1/2π) ∫_BZ d²k Ω(k).   (3)

A Chern insulator is now simply defined as an insulator with a nonzero Chern invariant. Conversely, we define a normal insulator to be an insulator with zero Chern invariant. Hence, the NI/CI transition is characterized by a change of the Chern invariant from zero to a nonzero value. The Chern invariant of Eqs. (1) and (2) is gauge invariant, 5 i.e., invariant with respect to the choice of phases of the |u_nk⟩, or in the more general multiband case, the choice of unitary rotations applied to transform the occupied states among themselves at a given k. It can be shown that in normal insulators it is always possible to make a gauge choice such that the Bloch orbitals are periodic in k-space (i.e., |ψ_{n,k+G}⟩ = |ψ_nk⟩) and smooth in k (i.e., continuous and differentiable), whereas no such choice is possible for a Chern insulator. 17
III. THE HALDANE MODEL
Here we provide a brief review of Haldane's model and its properties, as discussed in detail in Ref. [2]. As illustrated in Fig. 1, the Haldane model is comprised of a honeycomb lattice having two tight-binding sites per cell with site energies ±∆, a real first-neighbor hopping t₁, and a complex second-neighbor hopping t₂e^{±iϕ}. The model can also be thought of as consisting of two sublattices A and B corresponding to the sites with energies +∆ and −∆, respectively. Note that the macroscopic magnetic flux through the unit cell is indeed zero, resulting in a vanishing macroscopic magnetic field. This follows directly from the fact that the first-nearest-neighbor hopping is real and no phase is picked up when hopping around the Wigner-Seitz unit cell. This, however, does not rule out a microscopic magnetic field that averages to zero over the unit cell. Note that the wavevector k is still a good quantum number under these conditions. Let a₁, a₂, and a₃ be the vectors pointing from a site of the B sublattice to its three nearest A neighbors, such that ẑ · (a₁ × a₂) > 0 and x̂ · a₁ > 0. If we furthermore define the vectors b₁ = a₂ − a₃, b₂ = a₃ − a₁, and b₃ = a₁ − a₂, then the Hamiltonian of the Haldane model can be written as

H(k) = 2t₂ cos ϕ [ Σᵢ cos(k·bᵢ) ] I + t₁ [ Σᵢ cos(k·aᵢ) ] σ₁ + t₁ [ Σᵢ sin(k·aᵢ) ] σ₂ + [ ∆ − 2t₂ sin ϕ Σᵢ sin(k·bᵢ) ] σ₃,   (4)

where the σᵢ are the Pauli matrices and I is the identity matrix. The Chern number can now be calculated analytically or numerically according to Eq. (2) or (3). For our tests, we have chosen a lattice constant equal to unity, t₁ = 1, and t₂ = 1/3. If the Chern number of the bottom band is mapped out as a function of the remaining model parameters ϕ and ∆/t₂, we obtain the Haldane phase diagram shown in Fig. 2. Since we are interested in studying the transition from a normal insulator to a Chern insulator, we choose for all our calculations below a path in the phase diagram that crosses the phase boundary. Specifically, we traverse the vertical line in Fig. 2 where the phase ϕ is fixed at π/4 and ∆/t₂ is reduced from 6 to 2. At the critical value (∆/t₂)_cr = 3√3 sin(π/4) ≈ 3.67, the phase boundary is crossed.
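For readers who want to reproduce the phase diagram numerically, the following self-contained Python sketch builds H(k) from Eq. (4) and evaluates Eq. (3) with the gauge-invariant plaquette (link-variable) discretization of Fukui, Hatsugai, and Suzuki; the geometry conventions and mesh size are our own choices, and only the bottom band is considered:

```python
import numpy as np

# Vectors from a B site to its three nearest A neighbors (lattice constant 1),
# chosen so that z.(a1 x a2) > 0 and x.a1 > 0, as in the text.
a1 = np.array([1 / np.sqrt(3), 0.0])
a2 = np.array([-1 / (2 * np.sqrt(3)), 0.5])
a3 = np.array([-1 / (2 * np.sqrt(3)), -0.5])
b1, b2, b3 = a2 - a3, a3 - a1, a1 - a2
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def h_haldane(k, t1=1.0, t2=1 / 3, phi=np.pi / 4, delta=2.0):
    """Bloch Hamiltonian of Eq. (4)."""
    ca = sum(np.cos(k @ a) for a in (a1, a2, a3))
    sa = sum(np.sin(k @ a) for a in (a1, a2, a3))
    cb = sum(np.cos(k @ b) for b in (b1, b2, b3))
    sb = sum(np.sin(k @ b) for b in (b1, b2, b3))
    return (2 * t2 * np.cos(phi) * cb * np.eye(2) + t1 * (ca * s1 + sa * s2)
            + (delta - 2 * t2 * np.sin(phi) * sb) * s3)

def chern_number(nk=60, **params):
    """Chern number of the bottom band via plaquette Berry fluxes, Eq. (3)."""
    L = np.array([b1, b2])                 # primitive real-space vectors
    G = 2 * np.pi * np.linalg.inv(L).T     # rows: reciprocal vectors G1, G2
    u = np.empty((nk + 1, nk + 1, 2), dtype=complex)
    for i in range(nk + 1):
        for j in range(nk + 1):
            k = (i * G[0] + j * G[1]) / nk
            _, vecs = np.linalg.eigh(h_haldane(k, **params))
            u[i, j] = vecs[:, 0]           # eigh sorts eigenvalues ascending
    flux = 0.0
    for i in range(nk):
        for j in range(nk):                # Berry flux through each plaquette
            p = (np.vdot(u[i, j], u[i + 1, j]) * np.vdot(u[i + 1, j], u[i + 1, j + 1])
                 * np.vdot(u[i + 1, j + 1], u[i, j + 1]) * np.vdot(u[i, j + 1], u[i, j]))
            flux += np.angle(p)
    return flux / (2 * np.pi)

# With t2 = 1/3 and phi = pi/4: Delta/t2 = 6 (delta = 2.0) gives C = 0,
# while Delta/t2 = 2 (delta = 2/3) gives |C| = 1.
# print(round(chern_number(delta=2.0)), round(chern_number(delta=2 / 3)))
```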
The band structure of the Haldane model is plotted in Fig. 3 along some high-symmetry lines in the Brillouin zone (see Fig. 1b). It shows a remarkable feature as the system passes through (∆/t 2 ) cr . In the normal-insulator region, the two bands are separated by a finite gap. As the critical value is approached, the gap at K gets smaller and smaller. Finally, exactly at (∆/t 2 ) cr the bands touch at K in such a way that the dispersion relation is linear. Such points are also referred to as Dirac points. When going further into the Chern-insulator region, the bands separate again. Note that our specific choice of t 1 = 1 and t 2 = 1/3 prevents the bands from overlapping. If ∆ and t 2 sin ϕ are both chosen to be zero, two Dirac points form at K and K ′ , and the Haldane model then becomes an appropriate model for a graphene sheet. 18 In the normal-insulator region of the Haldane model the Chern number of each band is zero, so that the total Chern number (the sum of the Chern numbers of the upper and lower bands) is obviously also zero. When the phase boundary is crossed, the Chern numbers of the upper and lower bands become ±1, but their sum still remains zero. The closure and reopening of the gap as the NI/CI boundary is crossed corresponds to the "donation" of a Chern unit from one band to another through the temporarily formed Dirac point. In the present case, the total Chern number must always remain zero because the model, having a tight-binding form, assumes Wannier representability of the overall band space, and a non-zero Chern number is inconsistent with such an assumption. More generally, the total Chern number of a group of bands should not change when a gap closure and reopening occurs among the bands of the group, as long as the gaps between this group and any lower or higher bands remains open.
It is possible to argue on very general grounds that a finite sample cut from a Chern insulator must have conductive channels, otherwise known as chiral edge states, that circulate around the perimeter of the sample 19 in much the same way as for the quantum Hall effect. 20,21 It is therefore of interest to investigate the electronic structure of the Haldane model from the point of view of the surface band structure. We consider a sample that is finite in the b₃ direction (specifically, 30 cells wide) and has periodic boundary conditions along the b₂ direction (the bᵢ are defined above Eq. (4)); its states can be labeled by a wavevector k_y running from −π/a to +π/a, where a is the repeat unit in the y direction. The energy eigenvalues are plotted vs. k_y for several values of ∆/t₂ in Fig. 4. At first sight, the surface band structure shows qualitatively the same information as the bulk band structure in Fig. 3. For ∆/t₂ = 6, the valence and conduction manifolds remain separated by a gap, much as in the bulk. However, when we go deeper into the Chern insulator, the surface band structure reveals a new behavior: one surface band now crosses from the lower manifold to the upper one with increasing k_y, and another crosses in the opposite direction. Further inspection shows that the upgoing and downgoing states are localized to the right and left surfaces of the strip, respectively. Thus, if the Fermi level lies in the bulk gap, there will be metallic states with Fermi velocities parallel to the surfaces and with opposite orientation, i.e., a chiral (counterclockwise) circulation of edge states around the perimeter of the sample, as expected.
IV. BREAKDOWN OF WANNIER-FUNCTION CONSTRUCTION AT THE CHERN TRANSITION
We now study aspects of the NI/CI transition that are related to Wannier functions (WFs) and electron localization. We expect that in the normal-insulator phase, it should be straightforward to construct Wannier functions via a k-space construction. The term "Wannier function" is usually applied only in the case of periodic systems, but for finite samples one can construct well localized Boys orbitals 22 which play the same role and which map onto the WFs in the thermodynamic limit n → ∞. Thus, if we cut a finite sample from a normal-insulator realization of the Haldane model, we also expect it to be straightforward to construct such Boys orbitals. The question then arises as to what, precisely, will "go wrong" with these procedures if one tries to do the same on the Cherninsulator side of the transition. In particular, for a finite sample cut from the Haldane model, it is unclear how the system would "know" whether the finite sample corresponds to the normal-insulator or Chern-insulator side of the transition, and how the construction would break down in the latter case. In this Section, we investigate these issues, first in the context of the real-space construction, and then later from the k-space point of view.
We start, then, by considering finite n × n samples of the Haldane model. We can interpret Fig. 1 as showing a picture of a finite sample of size n = 2; we study similarly-constructed samples of size n = 10, 20, 30, and 40. For each sample, the Boys orbitals are constructed as follows. We define the projection operator onto the occupied states as

P = Σ_i^occ |ψ_i⟩⟨ψ_i|,   (5)

and we choose a set of well-localized "trial" orbitals |t_α⟩, equal in number to the number of occupied states, that we want the Boys orbitals to be roughly modeled after. We then construct the projected trial functions |y_α⟩ = P|t_α⟩. Since ρ(r, r′) = ⟨r|P|r′⟩ is expected to decay exponentially in |r − r′| for an insulator (see Sec. VI), we expect the |y_α⟩ to be localized as well, and as long as they are not overcomplete they will span the occupied space of interest. However, they are not orthonormal, so the last step is to carry out a symmetric orthonormalization. 23 This is done by computing the overlap matrix S_αβ = ⟨y_α|y_β⟩ and then constructing the final Boys orbitals |ω_α⟩ as

|ω_α⟩ = Σ_β (S^{−1/2})_{βα} |y_β⟩.   (7)

In the context of the Haldane model, it is natural to choose the trial functions to be a set of δ-functions located on the sites of the lower-energy sublattice. With this choice, we can now study the lowest and highest eigenvalues of S_αβ as the parameter ∆/t₂ traverses the path shown in Fig. 2. While the highest eigenvalue remains very close to 1, the lowest eigenvalue drops and rapidly approaches zero in the Chern-insulator region, i.e., for ∆/t₂ values below the critical value of (∆/t₂)_cr ≈ 3.67, as shown in Fig. 5. The slope of the "drop" depends on the size of the sample and becomes steeper as the sample size gets larger. For any given value of ∆/t₂ < (∆/t₂)_cr, the lowest eigenvalue appears to approach zero exponentially with sample size. When the eigenvalue becomes too small, the inversion to obtain S^{−1/2} becomes ill-conditioned, and the symmetric orthonormalization in Eq. (7) can no longer be carried out. It follows that Boys orbitals cannot be constructed in the Chern-insulator phase, at least not using this approach. We now change perspective and look at the problem from the k-space point of view, where we find that something similar happens. WFs for periodic samples are defined by

|Rn⟩ = (A/(2π)²) ∫_BZ dk e^{−ik·R} |ψ_nk⟩,   (8)

where the inverse relation is

|ψ_nk⟩ = Σ_R e^{ik·R} |Rn⟩.   (9)

In this notation |Rn⟩ refers to the n'th WF in cell R.
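The ill-conditioning can be monitored directly from the spectrum of S. Below is a minimal numpy sketch of the projection and symmetric (Löwdin) orthonormalization steps of Eq. (7), assuming the occupied eigenvectors and the trial-site indices are already available; the array conventions and tolerance are our own:

```python
import numpy as np

def boys_orbitals(psi_occ, trial_sites, tol=1e-10):
    """Symmetric orthonormalization of projected trial orbitals, Eq. (7).

    psi_occ: (n_basis, n_occ) matrix of occupied eigenvectors. Delta-function
    trials on `trial_sites` make |y_alpha> = P|t_alpha> the corresponding
    columns of the projector P = psi_occ psi_occ^dagger.
    """
    P = psi_occ @ psi_occ.conj().T          # projector onto occupied space
    Y = P[:, trial_sites]                   # projected delta-function trials
    S = Y.conj().T @ Y                      # overlap matrix S_{alpha beta}
    evals, U = np.linalg.eigh(S)
    if evals.min() < tol:                   # the breakdown in the Chern phase
        raise np.linalg.LinAlgError("overlap matrix is numerically singular")
    S_inv_sqrt = U @ np.diag(evals ** -0.5) @ U.conj().T
    return Y @ S_inv_sqrt                   # Boys orbitals |omega_alpha>
```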
As mentioned previously, for systems with zero Chern invariant the Bloch orbitals can always be chosen to obey a smooth and periodic gauge |ψ_{n,k+G}⟩ = |ψ_nk⟩. However, if the Chern invariant becomes nonzero, this choice is no longer possible. 17 In this case it is possible to make a periodic gauge choice that is smooth almost everywhere, but there must be singularities ("vortices") somewhere in the interior of the BZ. For example, in two dimensions, assume a gauge choice that is periodic and also smoothly defined everywhere in the BZ except in a small disk located somewhere in the interior of the BZ. The periodic gauge choice implies that ∮ dk · A(k) around the perimeter of the BZ must vanish. Applying Stokes' theorem as in Eq. (3), but now to the region excluding the small disk, implies that ∮ dk · A(k) around the circumference of the small disk must approach −C in the limit that the disk becomes small. For the Chern phase (C ≠ 0), this implies that there must be a vortex singularity in the phase choice inside the disk. If one attempts to construct WFs naively using Eq. (8), one then finds that the discontinuity in the phase choice of |u_nk⟩ at the vortex in k-space leads to the destruction of exponential localization of the WFs in real space.
We have investigated how this problem manifests itself if one attempts to construct WFs using standard k-space methods. Similar to the approach described in Ref. [6], we again adopt a projection method in which one chooses trial Bloch-like functions |t_k⟩ that are smooth and periodic in k-space. This can be done by constructing the |t_k⟩ from a set of real-space trial functions |t_α⟩, i.e., t_k(r) = Σ_R e^{ik·R} t_α(r − R). Then one can construct projected states |y_k⟩ via

|y_k⟩ = |ψ_k⟩⟨ψ_k|t_k⟩   (10)

and orthonormalized projected states

|w_k⟩ = s(k)^{−1/2} |y_k⟩,   (11)

where

s(k) = ⟨y_k|y_k⟩ = |⟨ψ_k|t_k⟩|².   (12)
The WFs are then constructed by Fourier transforming to real space using Eq. (8) with |w_k⟩ substituted for |ψ_k⟩. Clearly, if s(k) should vanish at some k, this procedure would fail.
We can now study what happens if this construction procedure is applied to the Haldane model. We again use trial functions that are δ-functions located on the lower-energy sites. We study the behavior of s(k) as a function of k throughout the BZ, while varying ∆/t₂ along the line in Fig. 2. Results for s(k) are plotted along some high-symmetry lines in Fig. 6. In the normal-insulator region, we find 0 < s(k) ≤ 1 for all k. After the phase boundary has been crossed at (∆/t₂)_cr, we find that there is one point k_a in the BZ for which s(k_a) = 0. There is also one point k_b for which s(k_b) = 1 exactly. In our numerical calculations, the locations of k_a and k_b coincide with the points K and K′, respectively. By experimenting with different trial functions, we have found that the precise locations of the minimum and maximum may deviate from K and K′, and the value at the maximum may be less than unity. However, we always find a point k_a at which s(k_a) = 0. This is the point at which ⟨ψ_k|t_k⟩ = 0; the robustness of such a zero-crossing can be understood heuristically by realizing that by adjusting the two parameters k_x and k_y, the real and imaginary parts of the complex scalar ⟨ψ_k|t_k⟩ can generically both be made to vanish. From Eqs. (10)-(12) it follows that the phase of |y_k⟩ evolves by 2π as one circles around k_a, so that a vortex-like singularity is generated in the phase of |w_k⟩ about k_a, with |w_k⟩ becoming ill-defined precisely at k_a. Thus, the construction of well-localized WFs is no longer possible.
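In a tight-binding representation the overlap ⟨ψ_k|t_k⟩ reduces to the eigenvector component on the trial orbital, so s(k) can be scanned directly over a BZ mesh. The sketch below assumes a two-band Bloch Hamiltonian callable like the h_haldane function sketched earlier; the function packaging is our own, not the authors' code:

```python
import numpy as np

def s_of_k(h_of_k, kpoints, band=0, trial_orbital=1):
    """s(k) = |<psi_k|t_k>|^2, Eq. (12), for a delta-function trial orbital.

    h_of_k(k) must return the Bloch Hamiltonian matrix at wavevector k; for
    a trial function localized on basis orbital `trial_orbital`, the overlap
    <psi_k|t_k> is just the corresponding eigenvector component.
    """
    s = np.empty(len(kpoints))
    for i, k in enumerate(kpoints):
        _, vecs = np.linalg.eigh(h_of_k(k))
        s[i] = abs(vecs[trial_orbital, band]) ** 2
    return s

# A vanishing minimum of s(k) over the BZ signals the vortex obstruction:
# smin = s_of_k(h_haldane, bz_mesh).min()   # approaches 0 in the Chern phase
```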
Instead of focusing only on the lowest eigenvalue, we plot in Fig. 7 the "density of overlap values" s(k). In the normal-insulator region of ∆/t 2 , one sees typical 2D van Hove singularities, and in particular, a well-defined minimum above zero. In the Chern-insulator region of ∆/t 2 , on the other hand, the density of overlap values shows a tail extending all the way to zero.
In summary, when the system is in its normal-insulator phase, the construction of Boys orbitals for finite samples, or of WFs for periodic samples, can be carried out in the usual way using a projection method. However, once the NI/CI phase boundary has been crossed, such a construction is bound to fail because of singularities that appear in the overlap matrices in both the real-space finite-sample and k-space extended-sample approaches.
V. THE SPREAD FUNCTIONAL
Another quantity that shows interesting behavior as the phase boundary is crossed is the spread functional Ω in real space, defined by Marzari and Vanderbilt (MV) 6 to be

Ω = Σ_n [ ⟨0n|r²|0n⟩ − ⟨0n|r|0n⟩² ],   (13)

where |0n⟩ refers to the WF |Rn⟩ for band n in the home unit cell R = 0 and the sum is over occupied bands of the insulator. The spread functional is a measure of how "spread out" or delocalized the WFs are. In the remainder of this section, we specialize for simplicity to the case of a single band in two dimensions, so that Ω = ⟨0|r²|0⟩ − ⟨0|r|0⟩². MV showed that the spread functional can be decomposed as Ω = Ω_I + Ω̃, where

Ω_I = Σ_n [ ⟨0n|r²|0n⟩ − Σ_{Rm} |⟨Rm|r|0n⟩|² ]   (14)

and

Ω̃ = Σ_n Σ_{Rm ≠ 0n} |⟨Rm|r|0n⟩|²   (15)

are gauge-invariant and gauge-dependent contributions, respectively. The gauge-invariant part has been shown to be a useful measure for characterizing the system: Ω_I is finite in insulators and diverges in metals. 8 MV also gave corresponding k-space expressions for the two parts of the functional. Defining the metric tensor g_{μν} = Re⟨∂_μ u_k|Q_k|∂_ν u_k⟩, where Q_k = 1 − |u_k⟩⟨u_k| (and ∂_μ = ∂/∂k_μ), these two quantities can be rewritten as

Ω_I = (A/(2π)²) ∫_BZ d²k Tr[g(k)]   (16)

and

Ω̃ = (A/(2π)²) ∫_BZ d²k |A(k) − Ā|²,   (17)

where A is the unit cell area, Tr[g] = g_xx + g_yy, and Ā is the BZ average of the Berry connection A(k) defined just above Eq. (3).
In the case of a Chern insulator, the use of the real-space expressions (14)-(15) becomes problematic, since well-localized WFs cannot be constructed. Nevertheless, the reciprocal-space expressions (16)-(17) remain well-defined. It is interesting, then, to see how these quantities behave in a Chern insulator. Does each of these quantities remain finite, or does one or both of them diverge? Also, what is the behavior of these quantities as one approaches the NI/CI phase boundary?
To answer these questions, we have computed the quantities in Eqs. (16) and (17) using the finite-difference versions of these equations given in Eqs. (34) and (36) of Ref. [6]. For the calculation of the gauge-dependent part Ω̃, we have fixed our gauge such that ψ_k is real for all k on the lower-energy site in the home unit cell. The results are plotted in Fig. 8 for different densities of the k-mesh. It can be seen that Ω_I is finite inside both the normal and Chern-insulator regions. At the critical value of (∆/t₂)_cr ≈ 3.67, however, Ω_I diverges logarithmically with the number of k-points. Furthermore, Ω̃ is finite in the normal-insulator region, but diverges logarithmically with the number of k-points for Chern insulators. This latter behavior is consistent with the presence of a vortex in the phases of the |w_k⟩ around the point k_a, which causes A(k) to diverge as |k − k_a|^{−1} and imparts a logarithmic divergence to Eq. (17). It follows that the total spread Ω is finite in normal insulators and divergent in Chern insulators. Heuristically, it is tempting to associate this divergence with the presence of the metallic chiral edge states that are required to exist in Chern insulators (see Sec. III), but it is unclear precisely how these features are related. Note that electron localization in the quantum Hall regime is discussed in detail by R. Resta in Ref. [24].
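The following is a minimal numpy sketch of the finite-difference evaluation of Ω_I for a single band, in the spirit of the discretized expressions of Ref. [6]; the mesh handling, and the assumption that the eigenvectors are stored in a convention that is periodic across the mesh, are our own:

```python
import numpy as np

def omega_I(u, dkx, dky):
    """Gauge-invariant spread Omega_I of a single band on a uniform k-mesh.

    u[ix, iy, :] are normalized eigenvectors on an nx-by-ny mesh with spacings
    dkx, dky. Uses Omega_I = (1/N) sum_{k,b} w_b (1 - |<u_k|u_{k+b}>|^2) with
    weights w_b = 1/(2 b^2), which satisfy sum_b w_b b_i b_j = delta_ij for
    the four neighbors +/-dkx, +/-dky. Assumes u is periodic across the mesh.
    """
    nx, ny, _ = u.shape
    total = 0.0
    for ix in range(nx):
        for iy in range(ny):
            for jx, jy, b in ((ix + 1) % nx, iy, dkx), (ix, (iy + 1) % ny, dky):
                m = np.vdot(u[ix, iy], u[jx, jy])       # overlap <u_k|u_{k+b}>
                total += (1.0 - abs(m) ** 2) / (2 * b * b)
    # Each +b neighbor was visited once per k; the -b partners contribute the
    # same amount, so double the sum before averaging over the N mesh points.
    return 2.0 * total / (nx * ny)
```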
VI. DECAY OF THE DENSITY MATRIX
The decay of the density matrix is a fundamental property of a system and it is closely connected to electron localization. It was first studied by W. Kohn for one-dimensional insulators, 25 and many others have investigated this topic thereafter. [7][8][9]24,26,27 For periodic samples the density matrix is defined as

ρ(r, r′) = (A/(2π)²) Σ_n^occ ∫_BZ d²k ψ_nk(r) ψ*_nk(r′),   (19)

where we assume that the wave functions ψ_nk are normalized to one unit cell of area A. If the wave functions are written in terms of some basis functions φ^k_α(r), this becomes

ρ(r, r′) = (A/(2π)²) Σ_n^occ ∫_BZ d²k Σ_{αβ} C^k_{nα} C^{k*}_{nβ} φ^k_α(r) φ^{k*}_β(r′).   (20)

The C^k_{nα} are the eigenvectors obtained by diagonalizing the model Hamiltonian, in our case Eq. (4). In a tight-binding model, the basis functions φ^k_α(r) are made up of localized orbitals φ at sites r_α:

φ^k_α(r) = Σ_R e^{ik·R} φ(r − R − r_α).   (21)

Inserting Eq. (21) into Eq. (20) gives

ρ(r, r′) = Σ_{αβ} Σ_{RR′} ξ_αβ(R − R′) φ(r − R − r_α) φ*(r′ − R′ − r_β),   (22)

where

ξ_αβ(R) = (A/(2π)²) ∫_BZ d²k e^{ik·R} Σ_n^occ C^k_{nα} C^{k*}_{nβ}.   (23)

The density matrix cannot be evaluated explicitly without the knowledge of the orbitals φ, but we can study instead the decay of ξ_αβ(R), which essentially has the interpretation of being a density matrix expressed in a tight-binding representation.
Calculating the decay of ξ_αβ(R) in Eq. (23) numerically is very demanding, and the corresponding results are to be interpreted with caution. To ensure high accuracy, we used a very dense k-mesh of 2000 × 2000 points and 128-bit arithmetic. Results for ξ_αβ(Rx̂) (i.e., along the x direction) for the Haldane model are collected in Fig. 9. In normal insulators the density matrix decays exponentially with a power-law prefactor. 7 We therefore choose to fit our results according to ξ_αβ ∼ R^{−a} e^{−bR}, where R = |Rx̂| and a and b are fit parameters. More specifically, we performed least-squares fits of ln|ξ_αβ| for distances up to 100 unit cells. For the decay behavior at the NI/CI boundary, we even went as far as 500 unit cells.
Within fitting error, the best-fit values for the parameter b are the same for all ξ_αβ. Numerical results corresponding to ∆/t₂ values of 6, 5, 4, 3.67, 3, and 2 are b = 0.69(1), 0.43(1), 0.118(5), 0.0001(1), 0.282(1), and 0.75(1), respectively, where the parenthesized digit gives the uncertainty in the last place. In general, when approaching the phase boundary from either side, the best-fit value of the parameter b decreases and takes its minimum of zero at (∆/t₂)_cr. In other words, in the normal and Chern-insulator regions the decay is dominated by the exponential behavior. However, exactly at the phase boundary the exponential decay vanishes (b = 0) and a pure power-law behavior remains, similar to metals. At (∆/t₂)_cr the power-law decay is then characterized by a = 3.01(3) for ξ₁₁ and ξ₂₂, and a = 2.00(2) for ξ₁₂ and ξ₂₁, which suggests that the "true" values are the integers 3 and 2. Note that the results depicted in Fig. 9 correspond to a particular direction in real space (R = Rx̂). While the decay parameters inside the normal and Chern-insulator phases depend slightly on the direction, they become universal at (∆/t₂)_cr. Again, this is a signature of the metallic character.
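A sketch of the numerical procedure: ξ along Rx̂ from a brute-force BZ sum, followed by a linear least-squares fit of ln|ξ(R)| = c − a ln R − bR. The function interface, mesh size, and x-repeat-length parameter are our own choices, far coarser than the 2000 × 2000 mesh and 128-bit arithmetic used in the text:

```python
import numpy as np

def xi_along_x(h_of_k, G, R_max=100, nk=400, alpha=0, beta=0, ax=1.0):
    """xi_{alpha beta}(R xhat), Eq. (23), for the bottom band.

    h_of_k(k) returns the Bloch Hamiltonian; G[0], G[1] are the primitive
    reciprocal vectors; ax is the lattice repeat length along x, so R runs
    over the lattice translations ax, 2 ax, ..., R_max ax.
    """
    fracs = (np.arange(nk) + 0.5) / nk     # mesh offset avoids the Dirac point
    R = ax * np.arange(1, R_max + 1)
    xi = np.zeros(R_max, dtype=complex)
    for f1 in fracs:
        for f2 in fracs:
            k = f1 * G[0] + f2 * G[1]
            _, vecs = np.linalg.eigh(h_of_k(k))
            kernel = vecs[alpha, 0] * np.conj(vecs[beta, 0])
            xi += kernel * np.exp(1j * k[0] * R)
    return xi / nk ** 2                    # BZ average

def fit_decay(xi, ax=1.0):
    """Least-squares fit of ln|xi| = c - a ln R - b R; returns (a, b)."""
    R = ax * np.arange(1, len(xi) + 1)
    y = np.log(np.abs(xi))
    M = np.column_stack([np.ones_like(R), -np.log(R), -R])
    (c, a, b), *_ = np.linalg.lstsq(M, y, rcond=None)
    return a, b
```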
It is interesting that the power of the power-law decay at the phase boundary seems to be exactly an integer and that it differs by 1 for different ξ_αβ. This behavior can be understood in the following way: ξ_αβ(R) of Eq. (23) is essentially the Fourier transform of the kernel C^k_{nα} C^{k*}_{nβ}, and it is well known that discontinuities in the kernel determine the decay behavior of the resulting quantity. In one dimension the discontinuities are related to a decay like R^{−(l+1)}, where l is the number of continuous derivatives of the kernel. 28,29 Unfortunately, in two dimensions the situation is more complex and the resulting BZ integrals cannot easily be solved analytically. Nevertheless, we give heuristic arguments that a similar expression holds for higher dimensions.
To this end, we solve for analytic expressions of the C^k_{nα} by diagonalizing the Hamiltonian H(k) in Eq. (4). In turn, we find analytic expressions for the kernel C^{k*}_{nα} C^k_{nβ}. Next, we switch to polar coordinates, k = (k_x, k_y) → (k, φ), replace ∆/t₂ by (∆/t₂)_cr = 3√3 sin ϕ, and expand the kernel around the Dirac point in orders of k, obtaining Eqs. (24) and (25). From these expansions it is apparent that C^{k*}_{11} C^k_{11} has its first discontinuity at first order in k. Hence, there are l = 1 continuous derivatives. On the other hand, due to the e^{−iφ} term, C^{k*}_{11} C^k_{12} already has a discontinuity at zeroth order in k, i.e., l = 0. This is consistent with the numerical results for ξ_αβ(R) in Fig. 9 if we assume that the decay in two dimensions goes as R^{−(l+2)}.
Equations (25) and (24) are thus consistent with a decay of R −2 and R −3 , respectively.
In summary, the numerical and analytical arguments are consistent in supporting the conclusion that the diagonal and off-diagonal elements of ξ αβ (R) decay as R −3 and R −2 respectively. An arbitrary pair of coordinates r and r ′ in Eq. (22) will involve a linear combination of contributions coming from diagonal and off-diagonal terms, so the final conclusion is that the decay of the density matrix will be as R −2 exactly on the NI/CI boundary, and exponential for any point lying within the normalinsulator or Chern-insulator phase.
Above, we have evaluated the density matrix ρ(r, r ′ ) for periodic samples. For finite samples, we expect a parallel behavior to hold for points r and r ′ deep inside the bulk. However, if both points are chosen to be near the surface of a Chern-insulator sample, one may expect that the presence of metallic chiral edge states will induce a power-law decay with the distance between r and r ′ as measured along the perimeter. Preliminary calculations on finite samples appear consistent with this picture.
VII. CONCLUSIONS
We have performed numerical and analytical calculations to study the behavior of several properties of the Haldane model as the system undergoes a transition from the normal-insulator phase to the Chern-insulator phase. We first showed how the usual methods of constructing Wannier functions break down for Chern insulators. We then investigated several quantities related to electron localization. We found that the total spread functional, which is finite in normal insulators, diverges in the case of a Chern insulator. However, when the spread functional is decomposed into its gauge-independent and gaugedependent parts, the former is found to remain finite in a Chern insulator, while only the latter diverges. The localization length of Resta and Sorella,8 which is related to the gauge-independent part of the spread functional, thus remains finite. However, the localization length increases and diverges logarithmically as one approaches the NI/CI transition. Similarly, when inspecting the density matrix, we find that it decays exponentially inside both the normal and Chern-insulator phases, but that the decay length increases as the phase boundary is approached, and the behavior crosses over to a power-law decay exactly at the phase boundary.
We thus find that a system that is sitting right on the NI/CI boundary has a kind of semimetallic character similar to that of graphene, in which the valence and conduction bands touch at one (for the Haldane model) or two (for graphene) Dirac points in the BZ. When the system is in the Chern-insulator phase, it still has remnants of metallic behavior in the presence of metallic edge states, the divergence of the total spread functional, and the difficulty of constructing Wannier functions.
Our results were obtained here for a specific realiza-tion of a Chern insulator, namely, the Haldane model. While it seems very likely that the localization properties found here will apply to other Chern-insulator systems, it remains to test this hypothesis by carrying out similar studies on other systems. It would also be of considerable interest to extend the current study to three-dimensional Chern-insulator crystals, and to continuum, as opposed to tight-binding, models. These could be fruitful avenues for future investigations. | 2017-09-28T15:10:23.094Z | 2006-08-24T00:00:00.000 | {
"year": 2006,
"sha1": "916a05b96a9b84eff2fac093a7988dee60e2a2a1",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0608527",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "916a05b96a9b84eff2fac093a7988dee60e2a2a1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
12323109 | pes2o/s2orc | v3-fos-license | A culture apparatus for maintaining H2 at sub-nanomolar concentrations
We devised a microbial culture apparatus capable of maintaining sub-nanomolar H2 concentrations. This apparatus provides a method for study of interspecies hydrogen transfer by externally fulfilling the thermodynamic requirement for low H2 concentrations, thereby obviating the need for use of cocultures to study some forms of metabolism. The culture vessel is constructed of glass and operates by sparging a liquid culture with purified gases, thereby removing H2 as it is produced. We used the culture apparatus to decouple a syntrophic association in an ethanol-consuming, methanogenic enrichment culture, allowing ethanol oxidation to dominate methane production. We also used the culture apparatus to grow pure cultures of the ethanol-oxidizing, proton-reducing Pelobacter acetylenicus (WoAcy 1), and to study the bioenergetics of growth.
Introduction
The syntrophic degradation of organic material is an environmentally and economically important process which occurs during anaerobic digestion (Schink, 1997). Syntrophic degradation, also called secondary fermentation, involves the cooperation of two or more organisms to consume a single substrate; the substrate (organic acids, alcohols, amino acids, and aromatics) is generally a product of primary fermentation. Hydrogen (H2) is thought to be the key intermediary in this process, transferring reducing equivalents from the organisms which degrade the organic substrates to respiring organisms. Due to thermodynamic constraints, the organic substrates can only be consumed in this fashion when the concentration of H2 is low. Respiring organisms utilize the H2, and maintain low concentrations so that the syntrophic oxidation of the organic substrates is sufficiently exergonic.
Many anaerobic microorganisms are capable of acting as syntrophic partners during the degradation of organic material. Many of these organisms are found within the genus Syntrophomonas, though others include sulfate reducers, species of Pelobacter, benzoate degraders, and others. Many such organisms are available in pure culture as they are also capable of growth on substrates which do not require syntrophic coupling. Previous attempts to grow these organisms on 'syntrophic' substrates in the absence of partner organisms have met with only limited success (Mountfort and Kaspar, 1986; Stams et al., 1993; Schink, 1997).
This study describes the design, construction, and use of a flow-through culture apparatus capable of growing monocultures of 'syntrophs' by externally maintaining the thermodynamic requirement for low H2 concentration. We further describe the growth of both pure and enrichment cultures of H2-producing ethanol oxidizers.
2. Materials and methods
Gases subsequently flow through a purifier to remove H2, CO, and O2. The gas then flows through a stirred glass culture vessel, where biologically produced H2 is rapidly transferred from the liquid to the gas phase. The resultant H2-containing gas flows through a series of traps to remove water and hydrogen sulfide. The analytical portion of the apparatus is located downstream and is used to measure the concentrations of gases entering and exiting the culture vessel. The location of the analytical portion of the apparatus allows for passive sampling of gas metabolism from the culture. A schematic diagram of the entire apparatus is shown in Fig. 1.
The design reflects several constraints: (1) H2 contamination of the vessel must be minimized, (2) ..., (3) strict anaerobic conditions must be maintained, and (4) all experiments must be performed aseptically. These requirements are met as follows: (1) the vessel is purged for at least 24 h prior to addition of medium, (2) the vessel is kept under positive pressure to prevent air contamination, (3) a reducing agent, generally cysteine, is added in minor quantities to maintain reduced conditions, (4) minor amounts of resazurin are added to the medium as a visible redox indicator, (5) all gases are of the highest purity available, and (6) gases flow through a heated column which removes traces of O2 in addition to H2 and CO (see Section 2.3). In addition to cysteine, sulfide can also be used as a reducing agent, though sulfide is lost as gaseous hydrogen sulfide at a rate which is pH dependent. Other reducing agents, including thiosulfate, are also compatible with the culture apparatus.
To ensure that gas exchange between phases occurs rapidly, the bottom of the culture vessel contains a glass frit which produces fine bubbles (estimated size 10-100 μm) which give the solution a milky white appearance. The glass frit allows for an even distribution of bubbles, though when the impeller is not used organisms may accumulate near the surface of the frit. The vessel is also equipped with a glass stirrer to mix the liquid medium and maintain uniform conditions. The screw-shaped stirrer was fashioned from a piece of pyrex plate (6" x 1 1/2" x 1/8"); the top and bottom halves are threaded in opposite senses, minimizing vortex formation, shearing, and disruption of cells. The stirrer is driven by a variable speed power head (model RZR-1, Caframo Ltd., Wiarton, Ontario, Canada), and is generally operated between 200 and 600 rpm. The rod of the stirrer is fitted to a bore in the Teflon plug, and is lubricated with a small amount of grease (Krytox, Dupont, Deepwater, NJ). The snug fit of the glass rod through the hole in the Teflon plug, coupled with the use of grease, is sufficient to create a seal under slight positive pressures. We have not observed biofilm formation during experiments.
Because metal surfaces are known to produce H2 in the presence of water, metal has been eliminated completely from portions of the vessel which contact water. Though the vessel consists primarily of glass, minor amounts of Teflon, PFA (perfluoroalkoxy), and Teflon-coated rubber are also present. Portions of the vessel constructed using PFA are the Swagelok fittings and the tubing leading from the vessel, while the plug located on the top of the vessel is made of Teflon.
To ensure that no contamination is introduced to the culture vessel, the entire vessel is cleaned and autoclaved prior to use (30 min, 121°C), sterile plugs consisting of glass wool are located directly upstream and downstream of the vessel, sterile technique is used in handling any components of the vessel, and sampling ports located on top of the vessel are sterilized before each use.
2.3. Gases
Mass flow controllers (model 8100, Unit Instruments, Yorba Linda, CA) are employed to precisely control the flow-rate and mixing ratios of gases. The MFCs are controlled by a digital power supply (model DX-5, Unit Instruments, Yorba Linda, CA) which is capable of simultaneously controlling several channels. Each tank of gas is connected to an individual MFC, and flow-rates are confirmed by use of a bubble flow meter (The Gilibrator, Gilian Instruments Corp., W. Caldwell, NJ). The following gases have been used with the culture apparatus: (1) UHP N2 (So-Cal Airgas, Lakewood, CA), (2) ...
Gas concentrations are measured by use of a gas chromatograph equipped with a flame ionization detector. The sampling port located at the top of the vessel allows for removal of discrete liquid samples for other analyses. Discrete liquid samples were taken during growth for analysis of acetate, pH, and growth yield. Growth yields were determined in duplicate by harvesting cells at the end of the experiment, centrifuging 35 ml of the culture (4000 x g for 1 h), desiccating the pellet and measuring the resulting mass. Acetate was measured with an HPLC using an organic acids column (Alltech, IOA-1000) and a UV/VIS detector set at 210 nm (0.5 mM H2SO4 mobile phase at 0.6 ml min-1).
2.5. Operation
The culture vessel is sterilized, assembled, and purged with H2-free gas beginning more than 1 day before inoculation. Upstream and downstream H2 ...
Calculation of ΔG′

Free energy yields (ΔG′) were calculated using standard thermodynamic equations. Values for CH₃COO⁻ and pH were interpolated from measured concentrations, while H₂ and temperature were measured for each calculation. Values for ethanol were calculated from initial conditions by subtracting CH₃COO⁻ production; assimilation of ethanol-derived carbon into cell mass was not considered, and is not likely to be significant for thermodynamic calculations. Several factors are involved in calculating ΔG′. Temperature is important through the effect of entropy on ΔG°′ (-TΔS) as well as its effect on the deviation from equilibrium (RT × ln{Q}). The pH is important through its effect on ΔG°′ as well as through its effect on the speciation of CH₃COO⁻/CH₃COOH. We assumed that all CH₃COOH was in […] kJ mol⁻¹, and ΔG°′(H⁺) = -5.69 kJ per pH unit (0 at pH 0). The measured concentrations of H₂, H⁺, and CH₃COO⁻ were assumed to be equal to the concentrations apparent to the organism, and all activities were assumed to equal 1.
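To make the correction terms concrete, the sketch below evaluates ΔG′ = ΔG°′ + RT ln{Q} for syntrophic ethanol oxidation (CH₃CH₂OH + H₂O → CH₃COO⁻ + H⁺ + 2H₂). The ΔG°′ figure is a commonly cited literature value at pH 7, and the concentrations are invented for illustration; neither is taken from this paper's data.

```python
import math

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def delta_g_prime(dg0_prime, temp_k, q):
    # DG' = DG0' + RT ln{Q}, as in the text above
    return dg0_prime + R * temp_k * math.log(q)

# Illustrative (assumed) values for CH3CH2OH + H2O -> CH3COO- + H+ + 2 H2:
dg0_prime = 9.6          # kJ mol^-1 at pH 7; commonly cited standard value, not from this study
ethanol = 10e-3          # mol l^-1 (assumed)
acetate = 10e-3          # mol l^-1 (assumed)
p_h2 = 30.0 / 101325.0   # 30 Pa of H2 expressed in atm
q = (acetate * p_h2 ** 2) / ethanol

print(delta_g_prime(dg0_prime, 298.15, q))  # about -31 kJ mol^-1, comparable to the
                                            # -26 to -33 kJ mol^-1 range reported in the Discussion
```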
3. Results

The culture vessel is capable of achieving gas phase H₂ levels below our analytical detection limit (10⁻³ Pa), corresponding to an equilibrium concentration below 10 picomolar in the liquid phase. Fig. 2 demonstrates the flushing of H₂ from an empty culture vessel which is given an initial pulse of H₂. The residence time calculated from Fig. 2 (15.6 min) closely matches the expected residence time (16 min) based on calculations using flow-rate and total volume. When the vessel contains liquid, the residence time of H₂ is about half as long because the total volume of gas in the system is about half as large. Because H₂ is relatively insoluble and the culture is constantly being sparged, the […]
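The residence-time comparison reported above is easy to reproduce: for a continuously flushed volume, the expected residence time is simply the gas volume divided by the flow-rate. The numbers below are illustrative choices that reproduce the quoted 16 min figure; the paper's actual volume and flow-rate are not given in the recovered text.

```python
# Expected gas residence time in a flushed vessel: tau = V / Q.
volume_ml = 800.0        # assumed total gas volume
flow_ml_per_min = 50.0   # assumed flow-rate, chosen so that tau = 16 min
tau_min = volume_ml / flow_ml_per_min
print(tau_min)           # 16.0 min expected, vs. 15.6 min measured from the Fig. 2 decay
```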
The H₂-stripping culture system has been used to analyze H₂ production from several different cultures including pure cultures of Methanobacterium strain Marburg, Methanosaeta thermophila strain CALS-1, P. acetylenicus strain WoAcy 1, and ethanol-oxidizing methanogenic enrichment cultures similar to the classical 'Methanobacillus omelianskii' (Bryant et al., 1967). Fig. 3 demonstrates the net production of H₂, CH₄ and acetate in an ethanol-utilizing methanogenic enrichment culture grown in the culture vessel with a defined mineral salts medium containing 20 mM ethanol. The metabolic activity of the H₂-producing organisms far exceeded the methanogenic activity when H₂ was held low, thereby uncoupling the 'syntrophic' association with the methane producers (Fig. 3).

[Figure caption fragment: '… methane (10⁻⁵) was again serially diluted into the same medium and allowed to grow. After 2 weeks, the lowest dilution to grow (10⁻³) was again serially diluted and allowed to grow. This procedure was repeated three additional times until consistent growth was achieved.']

Pure cultures of P. acetylenicus were grown in a mineral salts medium containing 20 mM ethanol. The evolution of H₂ was monitored as a function of time in the exhaust gas of the culture apparatus during several experiments. Fig. 4 demonstrates a typical H₂ production profile, while Fig. 5 demonstrates the net production of H₂ (calculated from a production profile) and acetate (which builds up in the liquid phase) during a separate experiment. The observed reaction stoichiometry shown in Eq. (1) […] agrees well with the expected stoichiometry. Hydrogen production typically began within minutes of inoculation, and increased for several hours until stabilizing at a critical level corresponding to the minimum thermodynamic yield (Fig. 4). The partial pressure of H₂ in the exhaust gas of the culture vessel typically ranged from 30-85 Pa during this growth in pure culture.

Growth yields measured for P. acetylenicus (WoAcy1) grown on ethanol are low, 2.2 ± 0.5 g (dry weight) mol⁻¹ acetate, corresponding to the low amount of free energy available for the entropically driven oxidation of ethanol. These yields are, however, similar to those estimated in coculture studies with the same organism (Seitz et al., 1990a).

The key ability which allows P. acetylenicus to conserve energy presumably lies in its use of a transmembrane ion pump to drive the endergonic production of H₂ from NADH (Hauschild, 1997). Recent estimates indicate that P. acetylenicus utilizes 2/3 of ATP production to drive an electrochemical gradient which in turn drives the endergonic production of H₂ (Schink, 1997). The growth yield and free energy yields observed in the present pure culture study lend further support to this hypothesis.

[…] the culture apparently act to maintain a consistent free energy yield for the catabolic pathway (Fig. 6), counteracting thermodynamic changes caused by changes in pH, temperature (Fig. 7), and in the relative proportions of ethanol and acetate.

4. Discussion

Anaerobic microorganisms, particularly those involved in terminal degradation of organic material, are able to grow from very small quantities of energy. It is generally accepted that some anaerobic microorganisms are able to grow on a 'biological energy quantum' equivalent to the extrusion of one ion from the cytoplasm (Schink, 1997). Other anaerobes, like P. acetylenicus, are thought to conserve energy through substrate level phosphorylation even though the thermodynamic yield for the catabolic process is lower than the ~70 kJ mol⁻¹ required for irreversible synthesis of ATP (Schink, 1997). Our calculations indicate that the amount of energy available to P. acetylenicus in these studies ranged from -26 to -33 kJ mol⁻¹, equivalent to the irreversible formation of about one third of an ATP per mol of ethanol oxidized (Fig. 6); such an energy yield is near the absolute minimum for energy metabolism. Similar energetics and growth yields have been estimated in coculture experiments involving P. acetylenicus with various H₂-oxidizing syntrophic partners (Seitz et al., 1990a,b), though never during […]

Calculating thermodynamic yields from cultures grown in the apparatus assumes that equilibrium is rapidly achieved between the environment surrounding the cell, and the gas phase. The small size of the bubbles produced by the glass frit and the use of an impeller help to facilitate rapid gas transfer. During growth, the cultures constantly produce H₂, and therefore maintain an H₂ flux from the cell into the surrounding liquid. Each cell is surrounded by a diffusive boundary layer in which diffusion is the dominant mixing process (Fenchel et al., 1998). Each cell experiences a microenvironment of higher localized H₂ concentrations so that use of gas phase H₂ concentrations to calculate thermodynamic yields consistently overestimates the actual energy available to the organism. The net effect is that an H₂-producing organism within the culture vessel is living from less energy than calculations indicate. Such factors may explain the small differences between free energy yields calculated with P. acetylenicus, and those calculated in coculture studies (Seitz et al., 1990b).

[…] have a profound influence on the thermodynamics of H₂ production (Conrad and Wetter, 1990). Temperature affects H₂ production through its effect on entropy (ΔG°′ = ΔH - TΔS), which influences the standard Gibbs free energy (ΔG°′), as well as through its influence on the deviation from standard conditions (ΔG′ = ΔG°′ + RT ln{Q}). Results shown in Fig. 7 demonstrate the tightly coupled relationship between temperature, free energy yield, and H₂ production. The general result for H₂-producing reactions, holding all other factors constant, is that higher temperatures allow for higher H₂ concentrations. The converse is true for lower temperatures. Changes in pH can influence the free energy when there is a net production or consumption of protons during metabolism, as is often the case during syntrophic degradation. For example, acetic acid production caused the pH of the liquid culture […] the lines of constant free energy.

The culture apparatus described here shows potential for study of other forms of metabolism besides those already discussed. Suitable substrates may include additional alcohols, substituted aromatics, acetate, glycolate, and amino acids. The culture apparatus also shows potential for enrichment and isolation of other 'syntrophs'. The advantage of a culture apparatus such as this is that it mimics natural conditions and fulfills the thermodynamic requirement for low H₂; this capability may obviate the need for use of cocultures in studying many forms of H₂ metabolism. | 2018-04-03T00:30:48.319Z | 2000-02-01T00:00:00.000 | {
"year": 2000,
"sha1": "cf2d73dd47d5e305266518a40fab2544ebf8ed86",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt72f7m2zm/qt72f7m2zm.pdf?t=nr2ra8",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4b0840850f7531c3c14d636cf0c77bf8afb9faba",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
15555937 | pes2o/s2orc | v3-fos-license | ©Copyright in this paper belongs to the author(s)
Interface design can overwhelm the effects of better or worse database engines. Progressively more systems use graphical interfaces but as with any new technologically-driven medium it takes time for notions of 'good' design to form and disseminate. Nowhere is this more obvious than in the nascent field of 3D interfaces where, much as in the days of font overkill which followed the introduction of laserwriters, 3D is often used for its own sake. This leads to designs that may initially be eye-catching but later turn out to be cluttered and difficult to work with. I will discuss some issues that make the design and evaluation of information visualisation systems complex, such as numerical metrics for interface assessment, the use (and abuse) of metaphor, taking advantage of 3D perception, and shared information environments. This discussion will draw upon images and experiences from the Bead system in order to examine the discipline we are involved in: is it more craft than science?
Introduction
When we build interactive systems, we produce structures as complex as the buildings we work in. Essentially, both are structures where we obtain, store and produce information, and interact with other people. When we look at a building and say whether it is 'good' or not, we do not only look at the functional or engineering aspects. We think of the way it allows us to move and work, as well as its aesthetics, how weatherproof it is and whether it safely supports the weight of the people within it along with their paper, furniture and machines. We tend not to think of interfaces and computer systems in such varied and complex ways. We tend to look at our work as civil engineering and not architecture. Although engineers do consider the aesthetics of their designs, it is not given nearly the same weight in the practice or education of their discipline. Architecture consists of both the objective and the subjective or, in other words, the scientific in combination with the artistic. They have been blended together over centuries in a craft based on designing complex spatial structures, and I feel that this kind of approach is a better paradigm for interface designers to follow than pure, or traditional, computer science. I would expect that the majority of people who attend this workshop are computer scientists, and a number are likely to think about their systems in traditional computer science terms. This can mean an engineering-like view based purely on technical proofs of the availability of functionality, non-interference in database operation, expressibility of data access in a given formal query language, and so forth. More modern HCI practitioners have taken on a wider architecture-like view, as exemplified also in textbooks such as [10]. Computer science, psychology, graphic design and sociology each play a part in the way to think about interactive systems and interfaces.
It matters how you think about interface design. Clearly, it influences what you will want to build. It defines what type of issues, criticisms and assessments you take on as relevant. It determines what you consider to be a good system or even a working system. Therefore I would encourage the reader to accept that what I and most others in this workshop do nowadays is best considered not as science, but as craft. Not engineering, but architecture. I will put forward some issues and relate some experiences with my own system, Bead, which may serve to clarify my opinion in this matter. At best I might win over a few 'converts', but at least I hope to provoke discussion about how we design interfaces to database systems.
Bead
Part of the basis for this paper is the experience I have had developing a system for information visualisation called Bead. Bead lays out a set of objects to make a map or landscape, using either simulated annealing or force-directed placement. An example is shown in Figure 1. The most common data type mapped out has been textual documents - articles from a bibliography - although in the past year my group has also been applying Bead to financial data. In the following discussion, I will usually refer to bibliographic data.
The system approximately represents the similarities and dissimilarities between the articles by their separation in the landscape i.e. the high-dimensional distances defined by similarities in word usage are approximated by low-dimensional (three-dimensional) distances in the visualisation. At any moment, one can look at the discrepancy between the current low-dimensional distance between two articles and their high-dimensional distance, and so decide whether to add a force to push them further apart or pull them closer together in order to reduce this error. An additional gravity-like force is used to attract the documents towards a 'ground plane'. This system of forces is implemented using spring models in order to lay out the landscape, and work on layout algorithms based on error minimisation has been the focus of a substantial part of the work on Bead in the past years. Algorithms have been developed to take the complexity of each iteration of the layout process for N objects down from the standard O(N²) to O(N log N) and most recently to O(N) [3].
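For readers who want the flavour of the spring model in code, the sketch below shows one naive O(N²) layout iteration of the kind just described; it is an illustration with invented names and constants, not Bead's optimised O(N log N) or O(N) algorithm.

```python
import numpy as np

def layout_step(pos, hi_d, k=0.05, gravity=0.01):
    """One naive O(N^2) spring iteration. pos: (N, 3) layout positions;
    hi_d: (N, N) high-dimensional target distances."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d_vec = pos[j] - pos[i]
            lo_d = np.linalg.norm(d_vec) + 1e-9
            err = lo_d - hi_d[i, j]          # positive when the pair sits too far apart
            f = k * err * d_vec / lo_d       # spring force along the connecting line
            forces[i] += f                   # pull together (or push apart) ...
            forces[j] -= f                   # ... symmetrically
    forces[:, 2] -= gravity * pos[:, 2]      # gravity-like pull towards the ground plane
    return pos + forces
```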
Once the errors of document separation in the landscape have been minimised, one has a set of positions of graphical objects which are meshed together using Delaunay triangulation to make a mostly flat landscape. Thereafter a number of features are used to enrich the landscape with features for legibility and imageability [4] when visualised in a shared virtual environment. These include static features which serve as landmarks such as the landscape's shoreline, local areas of roughness, and coloured clusters of related articles. Dynamic features are also shown as each user moves through the shared virtual environment, for example pop-up titles and topics. These features appear and disappear in accordance with each user's field of view, the local frequency of occurrence of words, and 'popularity ratings' of documents based on histories of word search and document selection. An example scene is given in Figure 2, below.
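The meshing step can be reproduced with any standard Delaunay library; for instance (using illustrative random positions in place of a real Bead layout):

```python
import numpy as np
from scipy.spatial import Delaunay

# Mesh the ground-plane positions of laid-out documents into triangles,
# as a stand-in for the landscape construction step described above.
ground = np.random.default_rng(1).random((831, 2))  # assumed 2D positions, not a real layout
tri = Delaunay(ground)
print(tri.simplices.shape)  # (n_triangles, 3): vertex indices of each mesh triangle
```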
Numerical Metrics of Quality
The spring models used for layout give rise to a convenient measure of global layout quality, stress, widely used in assessing multidimensional scaling and related techniques. This is the mechanical stress of the spring system, and is essentially the sum of squared errors, normalised by the sum of squared low-dimensional distances to favour compact layouts and to allow some comparison between different data sets. Bead brings the stress of small sets of articles (of the order of N=100) down to around 0.1. Another order of magnitude in N leads to a stress of around 0.2.
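Written out directly from that description, the stress measure is the sum of squared distance errors over the sum of squared low-dimensional distances; a small sketch (function and argument names are mine, not Bead's):

```python
import numpy as np

def stress(pos, hi_d):
    """Normalised stress of a layout: sum of squared errors between
    low- and high-dimensional distances, divided by the sum of
    squared low-dimensional distances."""
    err_sq, lo_sq = 0.0, 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            lo = np.linalg.norm(pos[i] - pos[j])
            err_sq += (lo - hi_d[i, j]) ** 2
            lo_sq += lo ** 2
    return err_sq / lo_sq
```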
In the course of a layout, stress is the key value used to determine how well the program is progressing. Sometimes the stochastic nature of the layout process will lead to small variations in the final stress of repeated runs even when all input parameters and data are held constant. There are many such parameters controlling the layout process e.g. the stiffness of the springs, their damping, and the metric of high-dimensional distance. The latter feature has been found to have a strong influence in generating low stress values. Having a metric which leads to good discrimination between objects is important.
Recently some layouts were made comparing high-dimensional distance metrics while holding other parameters constant. Unsurprisingly, rather different layouts were generated, with some deviation in stress values. The layouts were fed into a visualisation tool for examination, and what we found rather surprised us: some of the layouts with the average or slightly higher stress values seemed to us to be clearly (although subjectively) better than others.
In what we felt to be the best layouts, clusters of objects were better separated from each other, and internally were more tightly bunched. Also, when comparing localised pairs within clusters, there seemed to be a better fit or 'sense'. The separated tighter clusters meant that they also served better as landmarks. They were more identifiable and distinct, and added imageability features could accentuate this to good effect. We began to consider the way the stress metric worked, and whether we might take into account cluster tightness, but then began to wonder how one might consider the effect of pop-ups and labelling regions with topics. Also, there are features such as font size and colour, the size, colour, shape and behaviour of each document, and so on. Collectively and interdependently these convey the feel of the set of documents, along with the layout positions, and thus form an essential part of the final presentation of data to the users. Their presence or absence in different layouts can often override differences in stress.
I was reminded of a paper by Bryce Allen where the effectiveness of a much simpler (textual) display of bibliographic information could be varied significantly merely by varying the order of title, author, &c. [1]. He noted that "by making a minor change in the way an information system presents data, designers can change the pattern of usability of the system. Simply altering the order of presentation of data elements in a bibliographic display can transform an information system from a 'one-size-fits-all' system to one in which certain users, because of their higher levels of perceptual speed, are able to achieve significantly better performance." The possible presentational variations in the structure of his displays were small compared to those of the moving perspective views of the Bead information landscape.
There is more to this issue than saying that stress is not the right metric for assessing the spring system. In general, if information is in the 'right' position on a screen, it doesn't mean that the user will perceive and use it the way we expect. It can be skipped over because something else nearby looks more obvious, interesting or relevant, because the surrounding items mostly don't seem interesting enough, or because the user has already seen so many 'relevant' items that their interest is beginning to move on elsewhere.
When checking the progress of an ongoing layout process, or when comparing two layouts which have the same input parameters and employ the same output techniques, stress offers a useful tool to assess quality. However, one should be aware that more general use of such numerical measures - based on simple mechanistic features such as geometric distance, presence on the screen or ranking in lists - ignores the overriding perceptual and design effects which may transform the effectiveness of the system in the hands of the user.
Perception and Metaphor
To some extent the information design in Bead was founded in the familiar linguistic notion of similar items being metaphorically 'close together' [7]. In addition, there were features that had familiar spatial interpretations: rough areas of the landscape were less reliable and more difficult to handle, peripheral topics in the data tended to be represented by documents pushed out to the periphery of the shore, while dominant topics were more central. Our everyday familiarity with landscapes, and also the historically developed culture of maps and their interpretation, suggested that the landscape metaphor was a promising approach to take in enriching and improving the Bead visualisations.
Moving to the landscape model for Bead increased average stress levels when compared to the earlier 3D point cloud model, but the landscape was superior in terms of legibility and imageability [2]. It did not matter if similar documents were closer to each other inside the point cloud model, as occlusion made most of them difficult or impossible to see. Patterns and clusters were 'not there' as it was so difficult to perceive them. Moving through the structure didn't help much as one could not maintain the relationship of one's position to a useful ground plane or to the patterns within the data that had been laboriously calculated by the system.
Perceptually speaking, we do not live in a fully 3D world. In the past I have sometimes referred to our everyday experience as being '2.1D'. The variations on the surface of the earth such as hills and valleys are relatively small compared to the extent of the earth's surface. Gravity makes the vertical axis a special case, as pointed out by J.J. Gibson [5], and we should not expect that attributes mapped to height will be treated in the same way as those defining the base plane in a visualisation.
Another way of looking at this is to disambiguate 3D vision and 3D structure. We see in a three-dimensional way, with perspective compressing the apparent size of objects which are further away and expanding objects which are close. We use 3D vision very skilfully in the 2.1D world. We can use perspective to rapidly flick between an overview of large areas of the environment and detail on closer objects. One of the main reasons for motion is to reallocate the finite resource of what we can see in detail. We move towards what is distant or peripheral, and smoothly make our view of it more detailed. Perspective lets us maintain the context of what is more distant in the environment, including where we came from and where we might move on to next. Motion is thus a declaration of interest, a request for more detail. These techniques are hard to apply in strongly 3D structures which are as large and complex in height as they are in length and breadth. Occlusion stops us getting an overview and using our skills in manipulating perspective and wayfinding. In nature this is not often a problem, as we rarely use or find large scale 3D structures.
In a metaphorical way, Bead employs some features of landscapes because I want to use 3D vision skills to work with a 2.1D information structure. As a linguistic tool, metaphor requires a certain slackness in interpretation, as otherwise it is not expressive enough to be useful. Excessively tight interpretation of a metaphor, where every single feature is assumed to be shared between the two types of object, is doomed to failure. In visualisation, photorealism is a means to an end rather than an end in itself.
As Gibson and others such as Don Norman [8] have discussed, potential use will define how we perceive an object. Something affixed to a door must afford my grasping and pulling it before I would perceive it as a handle for opening the door towards me. Metaphors often fail when there are similarities in structure but not in use, and this problem seems to be increasingly common in 3D interfaces.
A number of systems (which I will leave unnamed) have appeared which basically show a graphical structure which is isomorphic to the database structure or query result structure, but nevertheless the design is poor. For example, a ranked list of objects matching a database query might be mapped to a set of floors in a building, best match highest, and then each object has its subfields mapped to different rooms in the building where other properties or attributes are shown, for example as objects within the rooms or images on walls and surfaces. The mapping between the two structures is fairly obvious and might be shown to be formally correct, but I would contend that the design is bad because the floors and walls inhibit the ability to scan and compare items, to look for patterns, and so forth i.e. the metaphor fails because the graphical representation does not let us use our 3D vision skills - the uses or activities that make visualisation powerful. Such naive mappings, dressed up as metaphors, may show off fancy graphics engines, but they don't serve well as visualisations. We have to think about how the structures will really be used and perceived, as graphical representation is a linguistic issue with all the complexity and subtlety that implies. Isomorphism is not enough. Lastly, perception does not only involve the fleeting appearance of objects in our visual field, or other transient sensory information. It also involves cognitive aspects such as memory. Have you noticed that if you get a new car, cars of the same model start to become so much more noticeable than before? Furthermore, we may infer some feature of our environment because of consideration of the sensory and memory information we have at hand. To use a paraphrased example from [9], I might be new to Edinburgh and have been told that a particular restaurant is on Grindlay Street. Then, after meandering through the town I suddenly find myself in front of that restaurant. I might infer the name of the street and my position in the city without having to read any other sign or map.
Interfaces often stop us from making use of everyday memory and inference abilities. They tend to present information to us without regard to what we have done before, for example in seeing how a current query relates to earlier work, thereby allowing comparison or combination of earlier results. They tend not to allow us to get a view of the database that lets us see how each item or query fits into the wider scheme of things.
Bead lets us employ spatial memory as the landscape is a persistent referent upon or in which successive searching and browsing activities are carried out. Memory of the locations of previous work, especially in relation to landmarks such as shorelines and clusters, can be used to help find one's way through the model toward a document or set of documents. By having a global structure - a 'sense' to the layout - people can infer useful features. For example, if a currency trader enters a query about 'banks' and finds one match in an area with a topic word such as 'river' and another in an area labelled 'finance' then he can immediately infer something about the content and utility of each.
Shared Information Environments
Another aspect of everyday work which is too often ignored in database interfaces is that of shared use. We use the information of others so often in our workaday world that we hardly notice it. Especially with more complex information, people's opinions and actions are often more reliable than our own data-centred analyses. When I am thinking about going to see a film, then reviewers' and friends' opinions will be as important as, or more important than, a plot summary.
With almost all information stored in databases, it is clearly important to avoid interference so as to maintain database integrity and quality. Unfortunately this principle tends to extend too far up into interfaces, ignoring the way that information is used in the real world. We can't see if other people are working on parts of the same data, or in what way. We can't see the history of an object such as who created it, who has been working on it, and who has been reading it. An interesting example of what can be done in this regard was given in [6]. Documents had traces of past use shown on them, showing which subregions had been the focus of earlier work either in creation or in reading. This helps answer practical and important questions such as: Which sections of a document have been read by various categories of users? Who were the last people to read a section, and when?
The pop-up topics and titles of Bead use history information to help interpret and display the visualised data. Histories of past use of visible documents are examined in order to find which words and documents were 'significant' in that area i.e. what were people looking for in that area, and what documents were interacted with in that area? This approach tends to sidestep some of the problems of text understanding and summarisation that dog information retrieval and AI-based approaches.
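Bead's actual history mechanism is not given in code here, but the idea of ranking words by past use within an area can be sketched in a few lines (the data layout and names are invented for the example):

```python
from collections import Counter

# Toy history: (region, word) pairs logged from past searches near each area.
history = [("cluster_a", "visualisation"), ("cluster_a", "layout"),
           ("cluster_a", "visualisation"), ("cluster_b", "finance")]

def significant_words(history, region, top_n=3):
    """Rank words by how often they were searched for in a given region."""
    counts = Counter(word for r, word in history if r == region)
    return counts.most_common(top_n)

print(significant_words(history, "cluster_a"))  # [('visualisation', 2), ('layout', 1)]
```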
Although these are only our first steps in using histories in Bead, we feel that finding out what tasks and activities people were involved in when accessing and using data, and perhaps how and why they changed it, may give very useful information to subsequent users. Similarly, awareness of the ongoing activities of others is an essential part of everyday work. There still remain many issues unresolved, for example the obvious questions of privacy and invasiveness, and also showing the ownership and status of an object e.g. we might not be able to see so much detail of a rough draft of a document or design, but more refined versions may be represented so as to invite access for other people's comments. Awareness of future use may also be important. For example, I may see a document such as a report I am working on as being more significant if I see that my boss plans to look at it tomorrow morning. Such issues are handled in quotidian ways in our real world workplaces, but although CSCW research has made some advances in addressing them, we have yet to fully address the challenge of handling them in the design of interfaces to database systems. Our designs should relate to the culture of work: of ownership and privacy, of history of use, current action, and future plans. If our systems better fit the way work happens in the real world then they will be the better for it, as they will better suit the people who do that work.
Conclusion
In considering some of the ways in which designing good interfaces to database systems is hard, I have tried to push on the notion that there are more issues which are relevant to interface design than those from computer science. As designers we should be aware of functional aspects but we should also be aware of the range of complexities and subtleties that arise when we attempt to convey information to people, and to support their work. We should use numerical and computational metrics of quality, but know the limits of their utility. We should use metaphor, but with the knowledge that using such a rhetorical device requires appreciation of how it will be used and interpreted. We should be aware of how important a role use plays in the acceptance of graphical metaphors, especially when we feel the pull towards greater naturalistic 'realism'. We must face up to the way that buildings and networks make people less and less likely to be isolated in their work, and the way that social issues come into effect.
We should accept that handling this wide range of issues is our craft. We should be competent practitioners of informatics, but also familiar with social and psychological issues including the perceptual, linguistic and aesthetic - each of which is unlikely to soon be reducible to systematic, objective formulations of knowledge i.e. to science. Therefore, to paraphrase Michel Beaudouin-Lafon, we should not call what we do 'computer science' any more than writers consider that they do 'pen science' or architects 'brick science'. We have to raise ourselves up and gain a broader perspective. That is what our craft is all about.
Figure 1 .
Figure 1. A set of 831 bibliography entries has been laid out by making proximity approximate similarity in word usage. Coloured clusters, based on local density in the layout, have been added to the basic landscape model. Clusters serve as static imageability features for orientation and navigation. A few large clusters dominate, although several smaller clusters exist.
Figure 2 .
Figure 2. Static imageability features such as clusters help with orientation and navigation, but dynamic features are often better for revealing detail. A few documents are dynamically and randomly chosen to be highlighted with a medium colour and have their titles shown, with a sampling bias based on nearness to the eye. Topic words are also dynamically placed in the scene according to density of occurrence in the field of view and word usage history. Clicked-on words start searches which colour documents white. | 2014-10-01T00:00:00.000Z | 1996-07-01T00:00:00.000 | {
"year": 1996,
"sha1": "d4151dc8dc60ae85b00eaef447a5eebd5d95dfa9",
"oa_license": "CCBY",
"oa_url": "https://www.scienceopen.com/document_file/95cf8e49-1649-44d3-91ac-5faa99f020e7/ScienceOpen/001_Chalmers.pdf",
"oa_status": "HYBRID",
"pdf_src": "Grobid",
"pdf_hash": "4e4a3dc8a0863f53ca0764ff10c825d7037a4c11",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
212665285 | pes2o/s2orc | v3-fos-license | Management of Ejaculatory Duct Obstruction by Seminal Vesiculoscopy: Case Report and Literature Review
Ejaculatory duct obstruction is a rare condition identified in up to 5% of infertile men. Patients with ejaculatory duct obstruction can present with aspermia, azoospermia or oligoasthenospermia, painful ejaculation, hematospermia, prostatic pain, or male infertility. Semen analysis, transrectal ultrasonography, pelvic computerized tomography and magnetic resonance imaging are often used in the diagnostic work-up, but with limited accuracy. While transurethral resection of the ejaculatory ducts has good efficacy for distal duct obstruction, results for proximal obstruction are less impressive, and it might cause severe complications, such as rectal injury and urinary incontinence. Recently, the use of high quality endourological devices and an improved understanding of ejaculatory duct anatomy gleaned through the use of sophisticated imaging tools have led to the development of novel minimally invasive treatment options for this condition. The present study aims to report an index case of ejaculatory duct obstruction managed with seminal vesiculoscopy, and to review the current literature regarding this topic.
INTRODUCTION
Ejaculatory duct obstruction (EDO) is a rare condition present in up to 5% of infertile men. It may be caused by several pathologies such as ejaculatory duct (ED) malformations, midline prostatic cysts, fibroses due to prostatitis or seminal vesiculitis, seminal vesicle (SV) stones, or scarring after endoscopic manipulation (Kang et al., 2016; Wang et al., 2012). Patients with EDO can present with aspermia, azoospermia or oligoasthenospermia, painful ejaculation, hematospermia, prostatic pain, or male infertility. Semen analysis (SA), transrectal ultrasonography (TRUS), pelvic computerized tomography (CT) and magnetic resonance imaging (MRI) are often used in the diagnostic work-up of EDO, though have limited accuracy in identifying the etiology of EDO (Han et al., 2009; 2013).
Despite being known for decades, EDO is considered a problem of difficult management due to the complex anatomic relations of the ejaculatory ducts (ED). While transurethral resection of the ejaculatory ducts (TURED) has good efficacy for distal duct obstruction, results for proximal duct obstruction are less impressive, and it might cause severe complications such as rectal injury and urinary incontinence (Kang et al., 2016; Wang et al., 2012; Han et al., 2013). Recently, the use of high quality endourological devices and an improved understanding of ED anatomy gleaned through the use of sophisticated imaging tools have led to the development of novel minimally invasive therapeutic options for EDO. The present study aims to report an index case of EDO treatment with seminal vesiculoscopy (SVC), describe the technique of SVC, and review the current literature. This report was approved by the Research Ethics Committee of our institution (approval number 2.057.579).
CASE REPORT
The patient was a 44-year-old healthy male, presenting with a three-year history of persistent hypospermia and recurrent episodes of hematospermia. He also had primary infertility, being unable to conceive with his 42-year-old wife despite regular unprotected intercourse over the past 10 years. He had been treated empirically with several antibiotic regimens and phytotherapeutic supplements without any improvement in symptoms. His past medical history was significant for obesity, for which he had undergone bariatric surgery 3 years previously, and for untreated bilateral varicoceles. His physical exam revealed bilateral grade II varicoceles, a 15 mL right testis, a 12 mL left testis, and bilaterally normal epididymides and vasa. Digital rectal exam was unremarkable, with a non-tender prostate and non-palpable SVs.
Since the patient refused to undergo TRUS, further diagnostic evaluation was done with contrast-enhanced pelvic MRI, which revealed dilation of the right SV and ED with no signs of inflammatory, neoplastic or cystic lesions in the prostate and SVs (Figure 1). After being informed of the probable diagnosis of partial EDO, with spermatogenesis impairment secondary to the varicoceles and bariatric surgery, the patient agreed to undergo endoscopic SVC, aiming to confirm the diagnosis of partial EDO and to improve the hematospermia and his SA parameters.
The procedure was performed in lithotomy position under general anesthesia. Ciprofloxacin was given as antibiotic prophylaxis. Urethroscopy was performed using a 6 Fr rigid ureteroscope and the verumontanum was identified. Catheterization of the ED orifices with a hydrophilic guidewire was unsuccessful. Careful inspection of the internal cavity of the prostatic utricle revealed no connection to the EDs. Unroofing of the verumontanum with a 26 Fr resectoscope using monopolar cutting current was then performed, and drainage of dark, hazy fluid was observed coming from the right ED. This maneuver allowed the guidewire to be inserted into the right ED, and the ureteroscope was advanced into the right seminal vesicle. Intermittent low-pressure irrigation and gentle alternating rotation of the scope were used. Several small stones and amorphous material were found (Figure 2). Irrigation was used again to flush out all the material and stones. Revision of the right seminal vesicle revealed a dilated right ED and absence of residues. Using the right ED as a landmark, the left ED was successfully catheterized, and a milky liquid drained after guidewire insertion; the vesiculoscopy revealed a small amount of amorphous material.
The final endoscopic evaluation revealed an unroofed prostatic utricle and the dilated right and left ED orifices at the 2 and 10 o'clock positions, respectively. No significant bleeding was observed. Bladder inspection revealed no abnormalities, and a digital rectal examination was negative for blood. At the end of the procedure, an 18 Fr urethral Foley catheter was left in place overnight. The patient was discharged on the following day and was counseled to resume sexual activity as soon as possible to maintain ED patency.
A SA performed on the 30th postoperative day revealed normal ejaculate volume (2.0 mL), no red blood cells, an increase of the total sperm count to 1,000,000/ejaculate, improved morphology (10%), and unchanged progressive motility (20%). After the procedure, he reported no new episodes of hematospermia, denied any sexual symptoms such as erectile or ejaculatory dysfunction, and noticed a subjective feeling of increased ejaculatory volume. He also denied having pelvic pain or symptoms of prostatitis, seminal vesiculitis or epididymitis.
DISCUSSION
In vivo endoscopic evaluation of the seminal vesicles was first reported by Okubo et al. (1998). Yang et al. (2002) published the first large study with 37 patients. They performed a transutricular access in men with persistent (>3 months) hematospermia and SV abnormalities on imaging studies. Their technique consisted of catheterization and dilatation of the utricular orifice with a guidewire and a 5 Fr open-ended ureteral catheter, followed by inspection of the utricular lumen using 6 Fr and 9 Fr rigid ureteroscopes. The EDs were accessed by inserting the ureteroscope directly into the ED orifices via the utricular lumen. A coagulating electrode was used to open the utricular orifice or the ED when these structures were not immediately visible (Yang et al., 2002). The same group published a larger series of 70 patients with persistent hematospermia in 2009 (Han et al., 2009). In the subsequent paper, they injected dye antegrade through the vas deferens to confirm the location of the ED orifices and resected the verumontanum to find the orifices in those cases where the verumontanum had been previously damaged. After a mean follow-up of 12.3 months, 55 patients had resolution of their hematospermia while no complications were reported. Hematospermia recurred in 7 cases (Han et al., 2009).
Using a similar transutricular ED access technique, Liu et al. published a case series of 72 hematospermic patients managed with SVC (Liu et al., 2009). They noticed that the ED orifices inside the utricular lumen were always covered by a transparent membraniform wall. Definitive diagnosis and symptomatic improvement were achieved in 93.1% and 97.2% of the patients, respectively, with no complications after a median follow-up of 21.7 months (Liu et al., 2009).
According to Guo et al., the ED can also be catheterized directly from the urethra without entering the utricular lumen (Guo et al., 2015). The ED orifices can be found outside the prostatic utricle, usually at the 5 and 7 o'clock positions and, if the orifices are unclear, the verumontanum can be resected to expose their openings. In their case series of 20 patients, an epidural catheter was used as a guide and for flushing saline solution in order to identify the ED openings.
A prospective trial conducted by Xing et al. compared the diagnostic yield of TRUS and SVC in 106 patients with persistent hematospermia (Xing et al., 2012). Seminal vesiculoscopy could not be performed in 7.5% of the patients because the ED orifices were not identifiable. The individual diagnostic yields of TRUS and SVC were 45.3% and 74.5%, respectively (p<0.001), with the overall diagnostic yield rising to 87.7% when the modalities were combined. Calculi (87%) and strictures (79.6%) were the most common findings on SVC. Therapeutic interventions were performed in 83.3% of the patients who underwent SVC, with 97.6% having resolution of their hematospermia. Twenty-three patients (21.7%) developed temporary mild perineal pain that resolved spontaneously in less than 3 months, and no serious complications were reported. The authors concluded that combining TRUS and SVC might improve the management of men with persistent hematospermia.
The efficacy of SVC for the treatment of complete EDO was assessed by Wang et al. in a series of 21 azoospermic patients with EDO. The procedures were performed using the same transutricular technique described earlier (Wang et al., 2012). One patient required TURED because of failure to identify the ED orifices. Only 2 patients remained azoospermic 12 months after surgery, with the mean sperm count rising from 0 to 6.6×10⁶/mL, and the mean semen volume increasing from 1.1 mL to 2.8 mL after 3 months. Again, perineal discomfort was present in 7 patients after the procedure, but the pain subsided in all patients after 3 months, and no major complications were reported. The authors noted that a 6 Fr rigid ureteroscope was more effective when performing SVC, and the ED orifices were usually found next to the median line of the verumontanum (Wang et al., 2012).
Han et al. reported a case series including 61 men with seminal vesicle disease. Using a 6/7.5 Fr ureteroscope, SVC was successfully performed in 95% of the cases, with a mean surgical time of 35.6 minutes. Only 2 patients complained of perineal discomfort after the procedure, and 1 patient had recurrence of hematospermia (Han et al., 2013). A similar success rate with SVC was demonstrated by Hu et al.: in their 38-patient case series, SVC had a success rate of 92.1%. Interestingly, even the 17 cases with negative findings had symptomatic improvement after the procedure. The recurrence rate was 11.8%, and 5.2% of the men developed postoperative epididymitis, treated with antibiotics. Another large series of 114 patients with hematospermia and abdominal or perineal pain demonstrated resolution of the hematospermia and pain improvement after SVC in 89% of the cases. There were 2 cases of postoperative epididymitis, 6 cases of postoperative painful ejaculation and no major complications (Liu et al., 2014).
In the largest series to date, Liao et al. reported the outcomes of 305 cases of refractory hematospermia treated with SVC (Liao et al., 2019). The procedure was successfully performed in 296 patients, and all 271 treated men who had follow-up experienced resolution of hematospermia. Seven percent of the patients developed recurrent hematospermia, treated with a second procedure. Complications were rare: 5.9% of the men complained of thinner ejaculation and only one case of epididymitis was reported. No case of perineal pain was observed after the procedure.
The group of Zhang et al. described the use of SVC coupled with an ultrasonic lithotripter to treat patients with persistent hematospermia. In a retrospective study, 30 patients were divided into two groups: 16 who underwent conventional SVC (group A), and 14 who underwent SVC with an ultrasonic lithotripter (group B). Overall, 56% of the men had calculi in the SV, and surgical time was shorter in group B (55 versus 66 minutes). All the procedures were successful in group B, while 1 patient in group A had the procedure interrupted due to bleeding. There were no recurrences in group B and 2 in group A. In both groups, there were no complications. The authors advocated the use of the ultrasonic lithotripter due to its strong and continuous suction, providing a clear surgical field and minimizing the SV pressure (Zhang et al., 2017).
Kang et al. evaluated the use of SVC for the treatment of symptomatic prostate midline cyst diagnosed by TRUS in 61 patients (Kang et al., 2016). The main presenting symptoms were hematospermia (52.4%) and chronic pelvic pain syndrome symptoms (32.7%). Fifty-seven percent of the patients had seminal vesicle dilation (>12 mm) on TRUS, and 28% had calculi found in the midline cyst during SVC. The SVs were successfully accessed in 53 cases. Hematospermia resolved in 90.6% of the cases (with only 1 recurrence) and the prostatitis symptoms improved significantly after the procedure (Kang et al., 2016). Surprisingly, SA parameters did not improve in this cohort. Two men developed acute epididymitis and 2 other minor complications were reported.
When a midline prostate cyst is a suspected cause of EDO, unroofing of the cyst using a resectoscope combined with SVC can be used. Cheng et al. described a series of 12 infertile men with midline prostate cysts treated with the combined procedure and reported improvements in semen quality in 80% of the men (Cheng et al., 2015).
The use of SVC to treat SV cysts was evaluated by Xue et al. (2018). The technique includes a transutricular approach, and a holmium laser was used to create a communication between the cyst and the SV lumen. Twenty men with SV cysts ranging from 32 to 55 mm were treated. Although the procedure was well tolerated, with 8 patients developing self-limited mild hematospermia, symptomatic improvement assessed by the NIH-CPSI did not reach statistical significance and no patients were free of cystic lesions on follow-up.
Seminal vesiculoscopy is a technique that can be used to treat several conditions of the prostate, EDs, and SVs. Evidence is growing in support of its use as an effective alternative to more invasive procedures like TURED, since the procedure uses commonly available urologic equipment such as cystoscopes, rigid 6-9 Fr ureteroscopes, resectoscopes, guidewires, ureteral catheters, and coagulating electrodes. On the other hand, a strong knowledge of the pelvic anatomy is required, and the surgeon must be able to recognize small structures and anatomic landmarks while discerning which maneuvers will safely lead to the SVs.
As described above, the ED orifices can be accessed via two different approaches: the transutricular approach or the direct approach via the urethra at the 5 and 7 o'clock positions of the prostatic utricle. The most commonly described access is the transutricular approach, which involves using a ureteroscope and a guidewire to enter the utricular lumen before catheterizing the ED orifices or puncturing the thin lateral wall that sometimes covers the ED orifices from within the utricular lumen. The second, direct approach to the ED orifices is performed by catheterizing the natural ED orifices directly from the urethra, which is more difficult due to the small size of the openings. Finally, if both of the aforementioned approaches fail, one may resect the verumontanum to unroof the EDs that run postero-laterally to it. However, this technique should be used as a last resort since it has the potential to cause complications such as reflux epididymitis, urinary incontinence, and rectal injury.
SVC can be regarded as a safe and effective treatment modality for patients with EDO, hematospermia, and some pelvic pain conditions. The procedure is feasible in most patients and outcomes are excellent, with hematospermia resolution rates ranging from 78% to 98% and recurrence rates as low as 10% (Han et al., 2009; Liu et al., 2009; Xing et al., 2012; Liu et al., 2014). Pelvic pain or ejaculation-related pain can also improve after SVC (Kang et al., 2016; Liu et al., 2014), though up to 30% of the patients may develop perineal pain or discomfort in the postoperative period (Han et al., 2009). Although these symptoms are mild and temporary, patients should be informed about this possibility. Postoperative epididymitis seems to be rare and may occur due to urine reflux to the epididymis caused by destruction of the ED orifice valve mechanism during dilation. High-pressure irrigation during the procedure may also cause epididymitis. No other major complications have been described in the literature, although complications might be underreported because of insufficient follow-up time. Larger series and longer follow-up are necessary to establish the long-term efficacy and safety of SVC, and to identify the conditions that would benefit most from this procedure. In addition, studies from different populations are needed to compare results in patients with distinct anatomical variants, and postoperative recommendations should be standardized.
CONCLUSION
Early reports suggest SVC to be a safe and feasible technique that represents a minimally invasive treatment modality for EDO, persistent hematospermia, and some pelvic pain conditions. It seems to be as effective as TURED, with lower potential complication rates, but further studies are required to clarify long-term outcomes and to provide external validation. | 2020-03-12T10:33:42.816Z | 2020-03-10T00:00:00.000 | {
"year": 2020,
"sha1": "866fdeed9a03d5c6523d7b6e22f4d5a67a8e3314",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc7365543?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "03ac732d6c11fb8c56f2d9daf4f67a586ea4097b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55950936 | pes2o/s2orc | v3-fos-license | Impurity-directed Transport within a Finite Disordered Lattice
We consider a finite, disordered 1D quantum lattice with a side-attached impurity. We study theoretically the transport of a single electron from the impurity into the lattice, at zero temperature. The transport is dominated by Anderson localization and, in general, the electron motion has a random character due to the lattice disorder. However, we show that by adjusting the impurity energy the electron can attain quasi-periodic motions, oscillating between the impurity and a small region of the lattice. This region corresponds to the center of a localized state in the lattice with an energy matched by that of the impurity. By precisely tuning the impurity energy, the electron can be set to oscillate between the impurity and a region far from the impurity, even distances larger than the Anderson localization length. The electron oscillations result from the interference of hybridized states, which have some resemblance to Pendry's necklace states [J. B. Pendry, J. Phys. C: Solid State Phys. 20, 733-742 (1987)]. The dependence of the electron motion on the impurity energy gives a potential mechanism for selectively routing an electron towards different regions of a 1D disordered lattice.
I. INTRODUCTION
Many researchers [1][2][3][4][5][6][7][8][9] have studied open (infinite) models of one-dimensional regular lattices, in which an impurity is introduced that allows for control over transport and closely related properties in the lattice. This has led us to consider the possibility that an impurity might be used to control transport even in disordered finite systems, with one question in mind: What type of transport would occur if the ordered lattice were replaced by a disordered one?
It is well-known that disorder in quantum systems produces Anderson localization [10]. There have been numerous theoretical and experimental studies on Anderson localization [11][12][13][14][15]. For example, on the theoretical side, it has been shown that in a one-dimensional lattice with random energies at each site, all the eigenstates of the Hamiltonian are localized [16][17][18]. Although this result indicates that there can be no electron conductance through an infinite one-dimensional disordered lattice, sharp resonances at the band center have been noted [19,20]. Such resonances are required for electron transport. In fact, Pendry [21] has shown that it is possible to transmit an electron from one end of a disordered finite lattice to the other due to the presence of "necklace states" that serve as stepping-stones for the electron. Necklace states also exist in optical systems [22,23]. These states form a sub-band that can induce resonant transport similar to the energy band of an ordered lattice. Resonances of finite disordered systems coupled to infinite reservoirs have been theoretically studied in [24,25].
Extrapolating from these previous studies, here we consider finite disordered lattices (or quantum wires) with a side-attached impurity ("T-junction"). The impurity can be realized using a quantum dot, which constitutes a nano-control device. The properties of the dot can be altered through a gate potential allowing an experimentalist control over electron transport. Varying the gate potential on the dot can be used to probe the spectrum and localization properties of the lattice. As we will show, we can indeed use an impurity to direct transport within a disordered lattice. In our theoretical study we will consider the case of zero temperature. Therefore the transport we will discuss is different from variable-range hopping [26,27], which occurs at non-zero temperature. We will discuss possible extension of our work to the case of nonzero temperature in section VIII. Note that our lattice is finite, but large enough so that boundary effects only play a minor role.
Experimentally, effective 1-D systems can be synthesized by a variety of techniques [28,29], including lattice geometries that incorporate a side-attached quantum dot [30,31]. Randomized site potentials in a finite lattice might be obtained, for example, by varying segment lengths (i.e., growth times) in GaAs/GaP superlattices assembled by laser-assisted catalytic growth [15,29]. An effective side-attached dot could then potentially be introduced by doping one such segment.
While, as far as we are aware, there have been no studies on disordered models of electron transport incorporating a side-attached impurity, at least one experimental realization of a system similar to ours has been reported in Ref. [32] using microwaves instead of electrons. The system in Ref. [32] consists of a waveguide with random blocks (analogous to our disordered lattice) and an airgap in the middle (analogous to our impurity site). The focus of Ref. [32], however, is different from the focus of our present study, as we will discuss below. In another relevant work, boundary effects on localization properties have recently been studied in finite, weakly disordered optical waveguide arrays [34]. Moreover, Refs. [27,33] have considered control of thermopower using a gate potential that shifts all the lattice energies at once.
We will consider the motion of the electron from the impurity to the lattice and back. The initial state is that in which the electron is completely localized in the side impurity. Hereafter this state will be referred to as the unperturbed impurity state; this state is an eigenstate of the unperturbed Hamiltonian, corresponding to the case where the impurity is decoupled from the lattice. The coupling will allow the electron to transfer between the impurity and the lattice.
We will treat the energy of the impurity as a tunable parameter. We will study how this parameter influences the transport of the electron from the impurity to the lattice, or vice versa.
Our main finding is that for certain impurity energies the electron can jump to small regions in the lattice; these regions are localization centers of Anderson-localized states whose energy is matched by the impurity energy. These, together with the impurity state, form hybridized states that are similar to the necklace states studied by Pendry. Interference between the hybridized states induces Rabi-like oscillations of the electron survival probability at the impurity. Hence, the electron alternates positions between the impurity and the localization centers of the lattice states hybridized with the impurity state.
The Rabi-like oscillations occur in the vicinity of avoided crossings in the energy spectrum of the system; these avoided crossings are induced by the interaction between the impurity and the lattice. The center of the avoided crossings signals the appearance of maximally hybridized states. Experimental observation of avoided crossings and hybridized states similar to ours was the main focus of Ref. [32] mentioned above. The new aspect of our study relative to Ref. [32] is the description of the time evolution of the electron associated with these avoided crossings and the possibility of tuning the impurity energy to predictably route electrons to different regions of the lattice. In addition, we will also point out that the range of electron transport can be larger than the localization length of the hybridized states, as long as the impurity energy is precisely tuned to match the center of the avoided crossing.
We will focus our attention on Rabi-like oscillations involving only two or three hybridized states. Oscillations involving many hybridized states produce an erratic pattern of motion, which is less suitable for controlled transport.
The paper is organized as follows: in sections II-VI, we introduce the model and analyze the electron transport between the impurity and the lattice for a specific realization of disorder. In section VII we consider ensemble averaging and in section VIII we discuss our results.
II. T-JUNCTION LATTICE
We consider a T-junction lattice, consisting of a disordered lattice (a finite one-dimensional chain of quantum wells with random energy levels) and a side impurity attached to one of the wells. The impurity is introduced as a nano-control device that will enable directed electron transport between the impurity and a lattice segment. We will focus on the motion of a single electron and will neglect Coulomb interactions altogether.

FIG. 1: T-junction lattice with N lattice sites and impurity site d. The lattice sites have disordered energies within the range W and a constant nearest-neighbor interaction energy of b/2. The impurity has energy ε_d and is attached to the lattice at site a through tunneling strength g.

We will model the lattice using a tight-binding Hamiltonian with uniform nearest-neighbor interactions, represented as a sum of lattice and impurity Hamiltonians,

H = H_latt + H_d.    (1)

The lattice Hamiltonian is written as

H_latt = Σ_{x=1}^{N} ε_x |x⟩⟨x| + (b/2) Σ_{x=1}^{N−1} ( |x⟩⟨x+1| + |x+1⟩⟨x| ).    (2)

The energies ε_x are random energies, uniformly distributed to introduce purely diagonal disorder. They describe unoccupied levels of the quantum wells that will roughly form an energy band. The width of the disorder W is represented by the range W = ε_max − ε_min, where for simplicity we will set ε_max = W and ε_min = 0. Other parameters include the number of lattice sites N and the nearest-neighbor interaction strength b/2. Hereafter we will use b as our energy unit. We also choose W = b such that the disorder width is comparable to the nearest-neighbor interaction strength.
The impurity Hamiltonian is given by

H_d = ε_d |d⟩⟨d| − g ( |d⟩⟨a| + |a⟩⟨d| ).    (3)

The impurity site is denoted as d while the lattice attachment site is defined as site a, where a ∈ {1, ..., N}; ε_d represents the energy of the impurity, which we treat as a tunable parameter. The impurity could be physically realized by using a quantum dot with a variable gate potential [30,31] or by segment doping, although the impurity energy would be fixed for an individual lattice in the latter case. The tunneling strength between the impurity and the attachment site is given by g.
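As a concrete illustration of Eqs. (1)-(3), the sketch below builds the (N+1)×(N+1) T-junction Hamiltonian numerically. It is a minimal sketch, not the authors' code: the basis ordering, the random seed, and the sign convention of the coupling term (chosen here to match Eq. (10) below) are our assumptions, and the parameter values mirror those quoted in the text (b = 1, W = b, g = b/4, a = 66).

```python
import numpy as np

def t_junction_hamiltonian(N=100, W=1.0, b=1.0, g=0.25, a=66,
                           eps_d=-0.62, seed=0):
    """Tight-binding T-junction: N lattice sites plus one side impurity.

    Basis ordering (an assumption of this sketch): indices 0..N-1 are the
    lattice sites, index N is the impurity d. Site energies are uniform on
    [0, W], hopping is b/2, and the impurity couples to lattice site a with
    strength g.
    """
    rng = np.random.default_rng(seed)
    H = np.zeros((N + 1, N + 1))
    H[np.arange(N), np.arange(N)] = rng.uniform(0.0, W, size=N)  # diagonal disorder
    for x in range(N - 1):                                       # nearest-neighbor hopping
        H[x, x + 1] = H[x + 1, x] = b / 2
    H[N, N] = eps_d                                              # impurity energy
    H[N, a - 1] = H[a - 1, N] = -g                               # impurity-lattice coupling
    return H

H = t_junction_hamiltonian()
energies, states = np.linalg.eigh(H)  # full spectrum and eigenstates (columns)
```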
A. Characteristics of uncoupled disordered lattice
To better understand the capability of the side impurity to direct transport within the lattice, we first investigate the influence on the spectrum of the Hamiltonian as we vary the tunneling strength. We will begin by investigating the g = 0 case, when the lattice and impurity are uncoupled. For this case the Hamiltonian of the disconnected lattice can be diagonalized as

H_latt = Σ_{m=1}^{N} E_m |ψ_m⟩⟨ψ_m|.    (4)

The presence of disorder results in Anderson Localization (AL) in the lattice. To demonstrate the occurrence of state localization we numerically diagonalized a specific realization of the lattice Hamiltonian with random site energies. Figure 2 shows one of the resulting localized states. In this section and in sections III-VI we will use this specific realization of the site energies to illustrate our results.
The degree of state localization can be determined by the second moment of the probability density, the inverse participation number [35]

IPR(Ψ) = Σ_x |Ψ(x)|⁴,    (5)

where Ψ(x) ≡ ⟨x|Ψ⟩. Our numerical example produced states with a variety of localization characteristics.
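The inverse participation number of Eq. (5) is straightforward to evaluate for the numerically obtained eigenstates; a minimal sketch, assuming normalized eigenvectors such as the columns returned by numpy's eigh:

```python
import numpy as np

def inverse_participation_number(psi):
    """Eq. (5): sum_x |psi(x)|^4 for a normalized state. It approaches 1 for
    a state confined to a single site and 1/N for a fully extended state,
    so larger values indicate stronger localization."""
    return np.sum(np.abs(psi) ** 4)

# Example: IPR of every eigenstate at once, given `states` from np.linalg.eigh
# iprs = np.sum(np.abs(states) ** 4, axis=0)
```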
Figure 3 compares the inverse participation numbers of these eigenstates with the theoretical weak-disorder inverse localization length

γ(E) = W² / { 24 [ b² − (E − E₀)² ] },    (6)

derived in Ref. [11], where E₀ is the center of the random energies (E₀ = 0.5 here).
B. Coupled case
Having described the eigenstates of the Hamiltonian for the g = 0 uncoupled case, we analyze next the g > 0 case as a perturbation of the uncoupled system. This allows us to understand how the impurity modifies the spectral properties of the finite disordered lattice. In terms of the eigenstates of the Hamiltonian for the uncoupled lattice, the partially diagonalized Hamiltonian then takes the form

H = Σ_{m=1}^{N} E_m |ψ_m⟩⟨ψ_m| + ε_d |d⟩⟨d| − g Σ_{m=1}^{N} ( V_m* |ψ_m⟩⟨d| + V_m |d⟩⟨ψ_m| ),    (7)

where V_m = ⟨a|ψ_m⟩ is the amplitude of the m-th eigenstate at site a, which determines the strength of the interaction between each mode and the impurity. We remark that due to the completeness of the eigenstates |ψ_m⟩ we have Σ_{m=1}^{N} |V_m|² = 1.
Therefore, if a group of modes interacts strongly with the impurity, the other modes will interact weakly. The eigenvalue equation for an eigenstate |φ_j⟩ with real eigenvalue z_j is written as

H |φ_j⟩ = z_j |φ_j⟩.    (8)

Writing the explicit matrix elements of the Hamiltonian gives the set of equations

ε_d ⟨d|φ_j⟩ − g Σ_m V_m ⟨ψ_m|φ_j⟩ = z_j ⟨d|φ_j⟩,    (9)

−g V_m* ⟨d|φ_j⟩ + E_m ⟨ψ_m|φ_j⟩ = z_j ⟨ψ_m|φ_j⟩.    (10)

Assuming z_j ≠ E_m for all m, we solve for ⟨ψ_m|φ_j⟩ as

⟨ψ_m|φ_j⟩ = −g V_m* ⟨d|φ_j⟩ / (z_j − E_m).    (11)

Substituting this into Eq. (9) we obtain

z_j = ε_d + g² Σ_m |V_m|² / (z_j − E_m).    (12)

This equation can be written as a polynomial equation of degree N+1 for the N+1 eigenvalues of the coupled, full Hamiltonian. The corresponding eigenstates are given by

|φ_j⟩ = ⟨d|φ_j⟩ [ |d⟩ − g Σ_m V_m* / (z_j − E_m) |ψ_m⟩ ],    (13)

where ⟨d|φ_j⟩ is found from the normalization condition ⟨φ_j|φ_j⟩ = 1, which gives

p_j ≡ |⟨d|φ_j⟩|² = [ 1 + g² Σ_m |V_m|² / (z_j − E_m)² ]⁻¹.    (14)

This expresses the probability to find the electron at the impurity when it is in the state |φ_j⟩. By taking the derivative of Eq. (12) with respect to ε_d we can also write Eq. (14) as

p_j = ∂z_j / ∂ε_d.    (15)

It is worth pointing out that by setting g = 0 in Eq. (12), at first sight we just obtain the lone uncoupled eigenvalue z_j = ε_d for the uncoupled impurity state. But the other eigenvalues (associated with the chain) have non-trivial g → 0 limits, which are not obvious from Eq. (12). We can see these limits more naturally with the following rearrangement, where we pull out a specific term corresponding to the l-th uncoupled eigenvalue,

z_j = E_l + g² |V_l|² / [ z_j − ε̃_{d,l}(z_j) ],    (16)

where

ε̃_{d,l}(z) ≡ ε_d + g² Σ_{m≠l} |V_m|² / (z − E_m).    (17)

Note that this equation is a polynomial equation of degree N+1, similar to Eq. (12). The difference is that Eq. (16) formally reduces to the eigenvalue E_l of the uncoupled lattice when g → 0, whereas Eq. (12) reduces to the impurity eigenvalue in the same limit. The eigenstates corresponding to the eigenvalue in Eq. (16) are given by

|φ_j⟩ = N_l [ |ψ_l⟩ − (z_j − E_l)/(g V_l*) ( |d⟩ − g Σ_{m≠l} V_m* / (z_j − E_m) |ψ_m⟩ ) ],    (18)

where

N_l = [ 1 + (z_j − E_l)²/(g² |V_l|²) ( 1 + g² Σ_{m≠l} |V_m|² / (z_j − E_m)² ) ]^{−1/2}    (19)

is a normalization constant.
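Numerically, the roots of Eq. (12) can be bracketed instead of expanding the polynomial: the function F(z) = z − ε_d − g² Σ_m |V_m|²/(z − E_m) diverges to −∞ and +∞ on either side of each pole E_m, so it changes sign exactly once between consecutive poles (plus one root below E_1 and one above E_N). The sketch below assumes E and V come from diagonalizing the uncoupled lattice, as above; it illustrates the bracketing idea and is not the authors' numerical method.

```python
import numpy as np
from scipy.optimize import brentq

def coupled_eigenvalues(E, V, eps_d, g, pad=10.0, tol=1e-9):
    """Solve Eq. (12): z = eps_d + g^2 sum_m |V_m|^2 / (z - E_m).

    E: sorted uncoupled eigenvalues; V: amplitudes V_m = <a|psi_m>.
    Returns the N+1 eigenvalues of the coupled Hamiltonian.
    """
    F = lambda z: z - eps_d - g**2 * np.sum(np.abs(V)**2 / (z - E))
    brackets = [(E[0] - pad, E[0] - tol)]
    brackets += [(E[m] + tol, E[m + 1] - tol) for m in range(len(E) - 1)]
    brackets += [(E[-1] + tol, E[-1] + pad)]
    roots = []
    for lo, hi in brackets:
        if F(lo) * F(hi) < 0:          # a sign change guarantees one root here
            roots.append(brentq(F, lo, hi))
    return np.array(roots)
```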
C. Perturbation due to the coupling g

Equations (12) and (16) demonstrate perturbation characteristics that are induced by a non-zero tunneling strength g between the lattice and the impurity; they also show the dependence of the eigenvalues on the energy value of the impurity, ε_d, which we will consider as a tunable parameter in the next section.
The effect of the impurity-lattice coupling is only significant in cases where

g |V_m| ≳ |z_j − E_m|    (20)

in Eqs. (13) or (18), for at least one value of m. If this condition is not met, the impurity state remains approximately isolated from the lattice (weakly hybridized). Likewise, lattice states do not become significantly altered by the impurity's presence, so they are close to the unperturbed Anderson-localized states. When Eq. (20) is satisfied for at least one value of m, on the other hand, the impurity state in Eq. (13) will become strongly hybridized with the Anderson-localized (AL) state(s) |ψ_m⟩, retaining some of its isolated characteristics while taking on characteristics of those AL states; conversely, the AL states will become hybridized with the impurity state. Due to the lattice-impurity coupling, AL states with large V_m can even take on each other's localization characteristics; they do so using the impurity as an intermediary, as indicated by the additional g V_m* term in Eq. (18). Strongly hybridized states that include both lattice sites and the impurity site are shown in Fig. 4. In this figure, we have used the same specific realization of disorder as in Fig. 3. We chose the attachment site a = 66 because at this site there appears a sharply localized state. We chose g = b/4 = 0.25 because we found that on average it led to the hybridization of just a few AL states for different values of ε_d, simplifying our analysis. For example, for ε_d = −0.62 only 2 AL states are significantly hybridized with the impurity state. This is the value of ε_d used in Fig. 4.
Hybridization of AL states is manifested by the existence of a nonzero amplitude at the impurity site, while hybridization of the impurity state is manifested by the existence of nonzero amplitudes on the lattice sites. Hybridized states will enable control over spectral properties of the lattice and thus control of transport of the electron between the lattice and the impurity.
III. AVOIDED CROSSINGS AND HYBRIDIZATION
We will show that the coupling between the impurity and the attachment site leads to the hybridization of the unperturbed impurity state with a set of unperturbed lattice states; the latter can be chosen by varying ε_d. The resulting hybridized states are eigenstates of the full Hamiltonian.

FIG. 4: (a) The impurity state hybridized with the AL state in Fig. 2 and (to a lesser extent) with another AL state similar to the state shown in (c). (c) An AL state that is less hybridized with the impurity state than (a).
To investigate how ε_d may select lattice states to become hybridized, we start by numerically computing eigenvalues for the complete Hamiltonian at different values of ε_d. Figure 5 demonstrates the results using the previously mentioned parameter values and the specific set of random energies used in Fig. 3. When g ≠ 0, some of the uncoupled eigenvalues are noticeably perturbed; the perturbation is manifested in Figure 5 as avoided crossings, consisting of curved lines near the diagonal. Figure 6 shows some of the avoided crossings in more detail. As we will argue next, this perturbation of eigenvalues implies strong hybridization of the corresponding eigenstates.
Eq. (15) shows that for each perturbed eigenstate |φ_j⟩, the probability to find the electron at the impurity state |d⟩ is given by the slope of the curve of z_j vs. ε_d. Away from the (visible) avoided crossings in Figures 5 or 6, the horizontal lines have a slope near 0 and thus p_j ≈ 0; they correspond to lattice states with negligible perturbation. Meanwhile, the diagonal lines have a slope near 1 and thus p_j ≈ 1; they correspond to the impurity state with a small perturbation.
As we approach an avoided crossing, following the curve associated with one of the lattice eigenvalues (nearly horizontal line) from the left, the slope p_j of that curve increases, while the slope p_j′ of the other curve (nearly diagonal line) decreases. This means that the probability to find the electron at the impurity shifts from the perturbed impurity state to the perturbed lattice state. At the middle point of the avoided crossing the slopes of the curves are approximately equal, thus p_j ≈ p_j′. Moving to the right, away from the avoided crossing, the eigenstates switch curves and p_j decreases while p_j′ increases. Therefore, the middle of the avoided crossing is a point of maximum sharing of probability at the impurity site; it is a point of maximum hybridization. We identify the degree of hybridization of the two states with the product p_j p_j′. In the next section, we will show that in the simplest case, maximum hybridization indeed occurs when the slopes are equal at the middle point of the avoided crossings.
Note that Σ_j p_j = 1. This means that if only one AL state j is significantly hybridized with the impurity state, then at maximum hybridization we have p_j = p_j′ = 1/2 and the maximum degree of hybridization is 1/4. If more than one AL state is hybridized, then we will have p_j < 1/2 and p_j′ < 1/2; the maximum degree of hybridization between any two states is then less than 1/4. An example of this situation, with two significantly hybridized AL states, is seen in Figure 6. It occurs at the point indicated by the vertical dashed line. The hybridized states involved in the upper avoided crossing are maximally hybridized; these are the states shown in Figures 4a and 4b. Notice that the impurity site amplitude values are nearly equal, thus validating our previous statements. The lower avoided crossing in Figure 6 overlaps with the upper avoided crossing and involves the hybridized state in Figure 4c. Notice that this state has a smaller amplitude at the impurity site, corresponding to a smaller slope of the bottom curve in Fig. 6.
While the attachment site and the width of disorder can alter the AL states available for perturbation, the impurity energy determines which available AL state(s) become hybridized, as shown in Figure 5. Thus we find that ε_d is an effective way to control perturbation within the lattice. Because there are degrees of hybridization, as indicated by the avoided crossings in Fig. 5, we find that ε_d can be used to tune maximum hybridization between AL state(s) and the impurity state.
IV. MAXIMUM HYBRIDIZATION
For some values of the impurity energy ε_d only one of the AL states (say the m-th state) is significantly hybridized with the impurity state due to the coupling g. Assuming this is the case, in this section we show that i) Maximum hybridization between the impurity state and the AL state occurs when ε_d = E_m, to zeroth order in g.
ii) At maximum hybridization the difference (gap) between the impurity and AL eigenvalues across an avoided crossing is a minimum; this minimum value is given by 2g|V_m|.
iii) At the point of maximum hybridization the slopes of the curves of the eigenvalues vs. ε_d are equal. iv) Partial hybridization between the impurity state and the AL state occurs when |ε_d − E_m| ≲ 2g|V_m|.
In more general cases, several AL states can be hybridized simultaneously. For these cases the results presented here are only rough approximations, applicable to the AL state that is most hybridized.
To demonstrate (i-iv), we start by writing Eq. (12) as

z_j = ε̃_{d,m}(z_j) + g² |V_m|² / (z_j − E_m),    (21)

where

ε̃_{d,m}(z_j) ≡ z_j − g² |V_m|² / (z_j − E_m).    (22)

Using Eq. (12), Eq. (22) is re-written as

ε̃_{d,m}(z_j) = ε_d + g² Σ_{m′≠m} |V_{m′}|² / (z_j − E_{m′}).    (23)

Let us assume that ε̃_{d,m}(z_j) is approximately independent of z_j, and is approximately equal to ε_d. This occurs if all the unperturbed AL states other than the m-th state have an eigenvalue E_{m′} sufficiently far from z_j, such that the summation in Eq. (23) is negligible. In this case we have (with the labeling j = ±)

(z_± − ε_d)(z_± − E_m) = g² |V_m|²,    (24)

whose solutions,

z_± = (ε_d + E_m)/2 ± [ (ε_d − E_m)²/4 + g² |V_m|² ]^{1/2},    (25)

give a simplified description of the avoided crossings seen in Figs. 5 and 6. In the limit g → 0 we can see that z_+ gives the impurity energy ε_d and z_− gives the AL energy E_m. For g ≠ 0 the two solutions correspond to the two perturbed states resulting from the hybridization of the unperturbed impurity state and the m-th unperturbed AL state. Maximum hybridization occurs when the product of the probabilities of the two hybridized states at the impurity site is maximum. From Eq. (15) this implies that

p_+ p_− = (∂z_+/∂ε_d)(∂z_−/∂ε_d) = (1/4) [ 1 − (ε_d − E_m)² / ( (ε_d − E_m)² + 4g² |V_m|² ) ]    (26)

is maximum, or equivalently that the gap

z_+ − z_− = 2g |V_m| [ 1 + (ε_d − E_m)² / (4g² |V_m|²) ]^{1/2}    (27)

is minimum, which gives ε_d = E_m and the minimum distance between the two hybridized eigenvalues. When ε_d = E_m we also have that the slopes of z_± vs. ε_d are equal:

∂z_+/∂ε_d = ∂z_−/∂ε_d = 1/2.    (28)

Partial hybridization occurs when the term inside the parenthesis in Eqs. (26) or (27) is non-negligible but not dominant. This happens roughly when

(ε_d − E_m)² ∼ 4g² |V_m|²,    (29)

i.e., hybridization remains significant within the range

|ε_d − E_m| ≲ 2g |V_m|.    (30)

When |ε_d − E_m| = 2g|V_m| the product of slopes in Eq. (26) takes half its maximum value. Finally, note that when ε_d = E_m, the perturbed eigenvalues in Eq. (25) are z_± = E_m ± g|V_m|, which agrees with the previously stated condition (20) for significant interaction between the impurity and the lattice (i.e. for hybridization of eigenstates).
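The two-level statements (i)-(iv) are easy to verify against the 2×2 effective Hamiltonian [[ε_d, −gV_m], [−gV_m*, E_m]], whose eigenvalues are exactly the z_± of Eq. (25). A minimal numerical check with illustrative values (E_m and g|V_m| below are placeholders, not taken from the simulations):

```python
import numpy as np

def z_pm(eps_d, E_m, gV):
    """Eigenvalues z_± of the two-level impurity/AL-state problem, Eq. (25)."""
    mean, half = (eps_d + E_m) / 2, (eps_d - E_m) / 2
    s = np.sqrt(half**2 + gV**2)
    return mean + s, mean - s

E_m, gV = -0.62, 0.025          # placeholder values for E_m and g|V_m|
zp, zm = z_pm(E_m, E_m, gV)     # tune eps_d = E_m: maximum hybridization
print(zp - zm, 2 * gV)          # minimum gap equals 2 g |V_m|   (statement ii)

h = 1e-6                        # slopes dz_±/d(eps_d) by finite differences
zp2, zm2 = z_pm(E_m + h, E_m, gV)
print((zp2 - zp) / h, (zm2 - zm) / h)   # both -> 1/2, Eq. (28) (statement iii)
```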
V. CONTROL OF ELECTRON TRANSPORT
So far we have discussed how the impurity's energy can be tuned to alter the spectrum of the lattice and form hybridized states localized at both lattice and impurity sites. Here we demonstrate how this tuning can be used to direct electron transport within the disordered lattice.
We begin by considering the time evolution of our system for fixed values of ε_d. The initial condition is a single electron placed at the impurity at t = 0. We consider the survival probability that the electron remains at the impurity at time t; time is defined in units of b⁻¹ with ℏ = 1. Beginning with the electron at the impurity will produce an evolving superposition of perturbed states as time progresses.
The survival probability for the electron to remain at the impurity is expressed as

P(t) = |⟨d| e^{−iHt} |d⟩|².    (31)

Using the complete set of eigenstates of the Hamiltonian {|φ_n⟩} with eigenvalues z_n, we have

P(t) = | Σ_n |⟨d|φ_n⟩|² e^{−i z_n t} |².    (32)

Naturally, the survival probability is dominated by eigenstates that have a large probability at the impurity site, which are the strongly hybridized states. Evolving our lattice in time will introduce a phase difference between these states, leading to oscillations, which will allow for dynamic electron transport as time progresses. In the simplest case, discussed in the previous section, where only one AL state is hybridized with the impurity state, the superposition of the two hybridized states produces oscillations with period

T = 2π / (z_+ − z_−),    (33)

which at maximum hybridization (Eq. (28)) gives

T = π / (g |V_m|).    (34)

As shown in Fig. 7a, we numerically verified the existence of these oscillations for the case of maximum hybridization, for which the oscillations of the survival probability resemble Rabi oscillations. The figure shows that the minimum values of the survival probability are nearly zero. These minimum values are important because they demonstrate an instant in time in which the electron has completely left the impurity and is instead located within the lattice. We can understand this as due to a destructive interference between hybridized eigenstates, such as the ones shown in Figure 4, whose similar amplitudes at the impurity add with opposite phase and cancel each other out. Thus by maximizing hybridization we are able to momentarily confine the electron in the lattice. The periodicity of the survival probability also allows electron transport to be predictable. The degree of periodicity in the Rabi oscillation is determined by how many perturbed AL states are involved in the perturbation. For instance, the profile of Fig. 7a, corresponding to ε_d = −0.69, has essentially one degree of periodicity, demonstrating that only one perturbed AL state (in addition to the perturbed impurity state) is significantly involved in the lattice perturbation. In this instance the state shown in Fig. 4c has reached maximum hybridization. This can be visualized in the spectrum at the center of the lower avoided crossing in Fig. 6, around ε_d = −0.69. The states of Figure 4 correspond to eigenvalue curves with non-negligible slope at the intersection with the vertical dashed line in Figure 6.
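Eq. (32) translates directly into a few lines of code once the eigendecomposition is available; the sketch below assumes the basis ordering of the earlier sketch (impurity as the last basis state) and also evaluates the site-resolved probability used below in Eq. (35).

```python
import numpy as np

def survival_probability(energies, states, t, d=-1):
    """Eq. (32): P(t) = |sum_n |<d|phi_n>|^2 exp(-i z_n t)|^2, with hbar = 1
    and `states` holding the eigenvectors phi_n as columns."""
    w = np.abs(states[d, :]) ** 2                    # weights |<d|phi_n>|^2
    return np.abs(np.sum(w * np.exp(-1j * energies * t))) ** 2

def site_probability(energies, states, t, x, d=-1):
    """Eq. (35): P(x,t) = |<x|exp(-iHt)|d>|^2, expanded over eigenstates."""
    amp = np.sum(states[x, :] * np.conj(states[d, :])
                 * np.exp(-1j * energies * t))
    return np.abs(amp) ** 2

# Usage: scan t to locate the Rabi minima, then evaluate site_probability there.
```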
Although in general our numerical model exhibits Rabi oscillations with higher degrees of periodicity, we are primarily interested in low degrees of periodicity, which enable greater transport control.
To better understand where the electron is located during the minimum and local minimum points of the survival probability in Figure 7b, we consider the probability of the electron to be at any lattice site x at a given time t,

P(x, t) = |⟨x| e^{−iHt} |d⟩|².    (35)

Figure 8a compares how the time-evolved probability is distributed amongst lattice sites at a local minimum (t = 299 from Figure 7b) against that at an absolute minimum point (t = 537). Figure 8b shows the spatial probability distribution of the three hybridized states that participate most strongly in the time evolution. We label them as E_1, E_2 and E_3. These states correspond to the wave functions of Figures 4a, 4b, and 4c respectively. State E_1 is a maximally hybridized AL state; E_2 is the hybridized impurity state and E_3 is a partially hybridized AL state. At the absolute minimum point (t = 537) the states E_1 and E_2 interfere destructively at the impurity site, but they interfere constructively in the lattice. Hence the probability distribution in Fig. 8a resembles a superposition of E_1 and E_2 in Fig. 8b. At the local minimum (t = 299) Fig. 8a resembles E_3 in Fig. 8b because then E_1 and E_2 approximately cancel in the lattice. The similarities between Figures 8b and 8a demonstrate what we would expect from our previous analysis; when the electron is not at site d it is at locations that are roughly defined by the localization of the perturbed AL states E_1 and E_3 and the perturbed impurity state E_2.
Thus we have demonstrated the impurity energy's ability to control electron transport within our finite disordered lattice. By tuning ε_d we can force the electron to oscillate between the impurity site and specific groups of localized sites as time progresses.
VI. TUNING THE RANGE OF ELECTRON OSCILLATIONS
The electron oscillations described in the previous section involved AL states that are close to the attachment site. Therefore the spatial range of the oscillations was limited to the Anderson localization length of these states.
However, in general, it is possible to select values of ε_d that allow the impurity state to hybridize with an AL state far from the attachment site. To see this, consider that an AL state with a given inverse localization length γ(E_m) has an amplitude given approximately by the exponential function

ψ_m(x) ≈ ψ_m(x_0) e^{−γ(E_m) |x − x_0|},    (36)

where x_0 is the point of localization (maximum amplitude). Given that ψ_m(a) = V_m, we obtain

|V_m| ≈ |ψ_m(x_0)| e^{−γ(E_m) D},    (37)

where D ≡ |a − x_0| is the distance from the point of localization of the AL state to the attachment site. This means that V_m decreases exponentially with D. However, it is possible to tune ε_d so that it is at the center of the avoided crossing formed by the AL and the impurity eigenvalues. In this case the corresponding states will be maximally hybridized and we will have a Rabi oscillation, where the electron travels back and forth between the impurity and the region around x_0. In order for this hybridization to occur, ε_d must be within the range E_m ± 2g|V_m|, as shown in Eq. (30). Therefore as D increases, ε_d has to be tuned with a precision that increases exponentially with D; the larger D is, the smaller the gap at the avoided crossing. In Figures 5 and 6 one can see such avoided crossings with very small gap; the resolution of these figures makes it seem that in many places the lines are simply crossing, but in fact these are all "microscopic" avoided crossings. These are each associated with long-range (large D) transport of the electron. From Eqs. (34) and (37), the period of the oscillation at maximum hybridization is

T ≈ [ π / (g |ψ_m(x_0)|) ] e^{γ(E_m) D},    (38)

which increases exponentially with D as well. Hence we find that the penetration distance D into the lattice that the electron can achieve can be larger than the Anderson localization length. Meanwhile, the larger D is, the more precisely ε_d must be tuned, and the longer the period of oscillation becomes. If D is not too large, achieving medium-range transport is not very difficult; it is enough that ε_d lies within the narrow range of the avoided crossing. The electron transport is illustrated in Figures 9 and 10, which show, respectively, the survival probability as a function of time and the probability distribution of the wave function when the electron is farthest from the attachment site. For reference, Fig. 10b shows the AL state that becomes strongly hybridized with the impurity state.

FIG. 10: (a) Probability distribution of the wave function when the electron is farthest from the attachment site, at the time marked in Figure 9; the attachment site is indicated by "a". (b) Uncoupled AL state (for g = 0) that becomes strongly hybridized with the impurity state after the interaction is switched on for g ≠ 0. The chosen ε_d = −0.48304 value is slightly off the center of the avoided crossing formed by the AL and impurity eigenvalues, but it is within the range of maximum hybridization mentioned in the text. The probability to find the electron at the impurity is approximately 0.4 for both the AL and impurity states after becoming hybridized. This value is close to the theoretical maximum of 0.5 discussed in Sec. III.
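To get a feel for these scalings, take illustrative numbers (chosen for illustration only, not taken from the simulations): near the band center, Eq. (6) with W = b gives γ ≈ 1/24 ≈ 0.04. For a localization center D = 50 sites from the attachment site, Eq. (37) suppresses |V_m| by e^{−γD} = e^{−2} ≈ 0.14, so the tuning window 2g|V_m| shrinks by the same factor while the Rabi period of Eq. (38) grows by e^{γD} ≈ 7.4; doubling the distance to D = 100 stretches the period by e^{4} ≈ 55 and narrows the required tuning of ε_d by the same exponential factor.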
VII. ENSEMBLE AVERAGING
In this section we demonstrate that the transport properties we have considered thus far are robust with respect to ensemble averaging, which must be considered for modeling realistic lattice systems. This will allow us to find, for a given impurity energy, the average number of lattice states that are strongly hybridized with the impurity, as well as the average localization of those states. Since we want to focus on transport that is not affected by the boundaries of the lattice, we will also discuss the average number of hybridized states that are free from boundary effects.
The number of strongly hybridized states can be qualitatively defined as the number of states with significant impurity-site amplitude. Quantitatively this can be determined as follows: for a specific realization of disorder, we first define the array of amplitudes at a site x as

A_x ≡ { |⟨x|φ_1⟩|², ..., |⟨x|φ_{N+1}⟩|² }.    (39)

The participation number of this array is given by

P_x = [ Σ_j |⟨x|φ_j⟩|² ]² / Σ_j |⟨x|φ_j⟩|⁴.    (40)

We then define the number of states that exhibit significant hybridization with the impurity state as the integer nearest to P_d, denoted as n_d ≡ [P_d]. Thus we identify the n_d states that exhibit significant impurity hybridization as the n_d states with the highest amplitudes (absolute values) in the array A_d. This set of amplitudes is written as

Ã_d ≡ { the n_d largest elements of A_d }.    (41)

However, in order to exclude boundary effects, we should identify the subset of Ã_d that exhibits strong overlap with the end sites x = 1 and x = N and remove it from the larger set. To do so we must form arrays of end-site amplitudes and calculate the participation numbers of these arrays, P_1 and P_N. After finding subsets Ã_1 and Ã_N with the n_1 ≡ [P_1] and n_N ≡ [P_N] highest amplitudes, respectively, we can then determine the number of eigenstates that significantly overlap both with the impurity site and with at least one end site. These are the states belonging to the subset

Ã_{d;ends} ≡ Ã_d ∩ ( Ã_1 ∪ Ã_N ).    (42)

We will denote the number of these states as n_{d;ends}. Excluding these states from Ã_d gives the hybridized states that are free from boundary effects.
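The counting of Eqs. (39)-(42) can be reproduced with a few lines; a minimal sketch assuming the eigendecomposition and basis ordering of the earlier sketches (impurity last, so the last lattice site is index -2):

```python
import numpy as np

def significant_states(states, site):
    """Eqs. (39)-(41): participation-number count of the eigenstates with
    significant probability at `site`. Returns the count n and the indices
    of the n states with the largest probabilities there."""
    p = np.abs(states[site, :]) ** 2
    P = p.sum() ** 2 / np.sum(p ** 2)      # participation number, Eq. (40)
    n = int(round(P))
    return n, set(np.argsort(p)[::-1][:n])

# Usage, given `states` from np.linalg.eigh(H) as in the earlier sketch:
# n_d, S_d = significant_states(states, -1)   # hybridized with the impurity
# _, S_1 = significant_states(states, 0)      # overlapping the left end site
# _, S_N = significant_states(states, -2)     # overlapping the right end site
# bulk = S_d - (S_1 | S_N)                    # Eq. (42) removed: boundary-free
```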
To find the ensemble averages of n_d and n_{d;ends}, we did the following: keeping the hopping energies and impurity position constant, we chose a standard error of the mean (SEM) threshold so that the size of the 95-percent confidence interval (CI) is equated to a desired fraction f of the data-set mean µ; that is, CI = fµ, where the confidence interval is (µ − 2 SEM, µ + 2 SEM), so that CI = 4 SEM, with SEM = s/√n_L for sample standard deviation s and n_L the number of disordered lattices in the ensemble. We stopped averaging when the standard error of the mean fell below the threshold. Figure 11 presents the results of the above averaging procedure, which demonstrate that a side impurity can direct transport within any disordered lattice with little to no boundary influence. The upper curve in Figure 11 gives the ensemble-averaged number of hybridized states n_d vs. the impurity energy ε_d, while the lower curve gives the average number of hybridized states n_{d;ends} that have significant overlap with the ends of the lattice. The regions ε_d ∈ [−1.0, 0.0] and ε_d ∈ [1.0, 2.0] are regions where we find hybridized states with minimal boundary influence. The average number of hybridized states within these regions varies between approximately 1 and 5. Staying within these energy regions ensures that only states exhibiting strong localization interact with the impurity, which makes electron diffusion outside the strongly hybridized region unlikely.
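The stopping rule for the ensemble averages can be sketched as follows; `draw_sample` stands in for one full disorder realization plus count (a hypothetical callable, since the original code is not given), and the CI convention follows the text (width 4·SEM ≤ f·µ):

```python
import numpy as np

def ensemble_average(draw_sample, f=0.05, min_runs=10, max_runs=100_000):
    """Average draw_sample() over disorder realizations until the 95% CI,
    of width 4*SEM, is at most the fraction f of the running mean."""
    vals = [draw_sample() for _ in range(min_runs)]
    while len(vals) < max_runs:
        mu = np.mean(vals)
        sem = np.std(vals, ddof=1) / np.sqrt(len(vals))
        if 4 * sem <= f * abs(mu):
            break
        vals.append(draw_sample())
    return np.mean(vals), len(vals)
```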
Note that there are two transition points, around ε_d ≈ 0 and ε_d ≈ 1, where the lower curve in Fig. 11 begins to increase from zero or decrease to zero. These points correspond to a transition between hybridized states that are influenced and those that are not influenced by the lattice's finite size. Interestingly, at these points the slope of the upper curve changes slightly.
Meanwhile, when the impurity has an energy outside the lattice energy spectrum (ε_d < −1 or ε_d > 2) the total number of hybridized states approaches unity, indicating that there is only one state (the impurity state) with significant amplitude at the impurity. This means the impurity is isolated from the lattice. Naturally, the overlap of the impurity state with the endpoints also vanishes in this region of strong localization.
The average n_d in Fig. 11 roughly agrees with the results we obtained for the specific realization of disorder in sections II-VI. For example, for ε_d = −0.62, we have n_d ≈ 3 in Fig. 11, which means that in addition to the impurity state there are, on average, two hybridized lattice states. This is consistent with the two avoided crossings seen in Fig. 6 and with the double oscillation in Fig. 7b.
In addition to the average number of strongly hybridized states, we also obtained their ensemble-averaged inverse participation numbers, which are a measure of inverse localization length. We did this numerically by obtaining every hybridized state for a specific realization of disorder, as described in Eq. (41), and averaging their individual inverse participation numbers. We then obtained ensemble averages of these average participation numbers, following an averaging procedure similar to the one discussed earlier in this section. Figure 3 demonstrates the validity of our numerical calculations by showing agreement between the inverse participation numbers of the eigenstates of a single disordered lattice and the theoretical expression for the inverse localization length in Eq. (6).

The numerical results in this section demonstrate that our method for impurity-directed transport in a disordered lattice is robust against ensemble averaging. Further, we can reliably predict the average number of hybridized states that are free from boundary effects. These states lead to a fairly regular oscillatory transport of the electron between the impurity and specific regions of the lattice.
VIII. DISCUSSION
We have considered a simple model for transport control within a finite disordered T-junction lattice. The presence of the side-coupled impurity significantly impacts the spectral properties of the lattice, forming hybridized AL/impurity states. We have treated the energy of the side impurity as a tunable parameter that selects which states become hybridized, thereby controlling the motion of the electron. In particular, the electron can be forced to periodically oscillate between the impurity site and specific groups of localized sites as time progresses.
Previously Pendry [21] has shown that transport can be achieved through a finite disordered chain by relying on hybridized AL states that form a "necklace" through such a lattice. While this necklace of states forms a subband due to near-degeneracy that occurs by chance, in our model we demonstrate that a side-attached impurity can be used as a control device to intentionally select AL sites according to specific hybridization characteristics that enable desired transport properties.
Although we have studied a simplified model, our results may serve as a starting point for the design of devices based on finite disordered lattices that attempt to route electrons. Nano-designed transistors with disordered materials have already been proposed [36]. Using disorder to trap electrons within a given area could help combat issues with device miniaturization. Moreover, lattices based on complex stacked T-junction structures may offer interesting possibilities for controlling or storing electrons.
The finite size of the disordered lattices we have considered may allow for nonzero temperature operation in realistic devices, because the energy-level spacings can be larger than the thermal excitation energies; this would allow the system to behave similarly to the zero-temperature case. Assuming that the width of the energy band is of the order of the width without disorder, the average level spacing is 2b/N. A rough estimate of the maximum operational temperature would then be given by k_B T = 2b/N. As a reference, for a disordered lattice with N = 100, having T = 300 K would require b = 1.3 eV. An issue that deserves further investigation is the following: the level spacings between hybridized states at avoided crossings can be much smaller than 2b/N, so thermal excitation could induce transitions between these states, similar to variable-range hopping. This would modify the simple Rabi oscillations discussed in the present paper for T = 0. In Ref. [15] it was found experimentally that carrier transport in a disordered superlattice becomes thermally activated around T = 77 K.
Our results are applicable to disordered optical or microwave lattices, such as the microwave lattice of Ref. [32] mentioned in the Introduction. For this type of lattice, temperature effects are much less important than for electron lattices, so our results regarding the Rabi oscillations and far-transport could be more readily tested on this type of system.
A possible extension of our work is to make the impurity energy time-dependent and study the electron motion in this case. As shown in [37], disordered optical systems with time-evolving disorder can produce "hyper-transport" of light, which is faster than ballistic transport. It would be interesting to see if this can be achieved in the case of electron transport. Related to this, Ref. [38] investigates transport driven through a disordered lattice by applying time-dependent control fields at the edges of the lattice. | 2017-07-05T16:04:27.000Z | 2015-11-27T00:00:00.000 | {
"year": 2015,
"sha1": "a29d3f0cbb505b8148542b2bfad34a68be8cabbe",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1511.08758",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a29d3f0cbb505b8148542b2bfad34a68be8cabbe",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
11376418 | pes2o/s2orc | v3-fos-license | Detection of Ammonia-oxidizing Bacteria (AOB) in the Biofilm and Suspended Growth Biomass of Fully- and Partially-packed Biological Aerated Filters
Nitrification is a two-step process, namely ammoniacal oxidation and nitrite oxidation. Oxidation of ammonium to nitrite is carried out by autotrophic bacteria, mainly Nitrosomonas (e.g. N. europaea, N. oligocarbogenes) and Nitrosospira, while conversion of nitrite to nitrate is performed by Nitrobacter (e.g. N. agilis, N. winogradski) and Nitrospira. However, ammoniacal oxidation is considered the limiting or critical process in nitrification, since the ammonia-oxidizing bacteria (AOB) have a very low growth rate (Metcalf and Eddy 1991). Various approaches, both culture-dependent and culture-independent, have been applied to analyze and compare the microbial structure of biomass. However, culture-dependent methods are biased by the selection of species, which obviously do not represent the real dominant structure (Wagner et al 1995; Lipponen et al 2002). Recently, the development of culture-independent molecular techniques, like fluorescence in situ hybridization (FISH), polymerase chain reaction (PCR) or denaturing gradient gel electrophoresis (DGGE), has improved the analysis of environmental samples. Whole-cell fluorescence in situ hybridization (FISH) is a technique that uses fluorescently labelled phylogenetic oligonucleotide probes to detect specific whole cells/organisms in biological samples. It can be a valuable tool for the study of microbial dynamics in natural environments (Li et al 1999; Liu et al 2002; Eschenhagen et al 2003). These probes can be designed using the wealth of 16S and 23S rDNA sequence data available to target species, genera, subdivisions or divisions in situ, and can be labelled with fluorescent groups, radioactive groups or antigens for immunological detection (Amann 1995). A combination of the FISH approach with the application of scanning confocal laser microscopy (SCLM) allows non-destructive studies of the three-dimensional arrangements of the bacterial populations identified, while rejecting out-of-focus fluorescence (Wagner et al 1995). Biological Aerated Filters (BAFs) also have a long history of successfully removing nitrogen in wastewater treatment plants (Chen et al 2000; Quyang et al 2000; Chui et al 2001). Biofilm in the reactors bears great potential for simultaneous and efficient removal of nitrogen (Fdz-Polanco et al 2000). Therefore, an assessment of nitrogen removal efficiency was made to detect any deterioration in performance. A possible adverse effect of the reduced mass of biofilm in the partial-bed reactor was foreseen, for the reason that the slow-growing nitrifiers need long solids retention times to become established in the system.
The objectives of this study were to correlate changes in the proportion of AOB to all bacteria along the reactor heights in relation to the reactor configuration, and to associate factors that contribute to the changes in the AOB proportion.
Experimental system
Two identical reactors were built; each reactor was 14 cm in diameter and 100 cm in height, providing an empty bed volume of 15 l. A small amount of freeboard or headspace (2.8 litres) was provided at the top of the reactor. The reactors were constructed from PVC, a non-transparent material that prevents the growth of phototrophic organisms. The columns were built with considerations for process air and influent supplies, backwashing air and water requirements, and sampling outlets.
The control reactor was filled with 10.9 l of cascade rings (Glitsch UK) whilst the second reactor was only partially packed with 5.5 l of cascade rings. The media were stationary and held in place by a rigid polypropylene mesh with 15 mm diameter holes placed at the top and bottom of the packing. Three ports were placed along the height of the reactors for sample collection.
A synthetic waste prepared in the laboratory was used to provide a consistent organic substrate for all loadings. The basic make-up of the influent organic strength material used in the study was whey powder, glucose and meat extract (Lab Lemco powder), which contributed approximately 38%, 33% and 29% of the total soluble COD content of the substrate respectively. In order to guarantee that organic carbon was the limiting nutrient, a COD:N:P ratio of 25:5:1 was adopted. The nitrogen component of the feed came from whey powder (24.7%), meat extract (63.7%), and ammonium dihydrogen phosphate (11.6%). One litre of the prepared mixture produced a concentrated feed of around 40,000 mg/l COD.
Suspended biomass and biofilm sampling
The collection of samples for this study was carried out at the end of the steady-state condition of the 0.24 ± 0.02 kg N/m³·d nitrogen loading. Samples of the biofilm and suspended growth biomass were taken at different depths of the reactors. The in-situ characterization followed a top-bottom approach. Fig. 1 illustrates the exact locations where the samples of suspended biomass and biofilm were obtained from the reactors. Samples of suspended biomass were taken from port 1, port 2 and port 3 respectively. At each port, about 50 ml of reactor aliquot was wasted before sample collection to ensure that any debris or anaerobic bacteria residing in the pipeline was discarded. A 10 ml volume of aliquot was taken and immediately fixed with 1:1 absolute ethanol. Samples were then stored at −20 °C. For sampling the biofilm, the liquid was first drained from port 1 in order to allow access into the upper bed layer. Tongs were used carefully to remove the media from the upper layer. A random piece of media from the specified level was chosen. The biofilm was gently scraped off the plastic material using a sterile surgical knife before washing the media with 10 ml phosphate-buffered saline (PBS) solution. This procedure was repeated four times until all the biofilm attached to the media was completely removed. To homogenize the biofilm, the sample was sonicated for 2 minutes using an ultrasonic homogenizer (Bandelin Electronics D-1000, Germany). 10 ml of the aliquot was put in a universal bottle and fixed with 1:1 absolute ethanol before storing at −20 °C. The sampling of biofilm at the second location was subsequently continued by draining the liquid from port 2. The same procedures were repeated until the media at the bottom were sampled. To detect the AOB in the samples, the FISH technique (Coskunur 2000) was applied in order to produce the fluorescent sites in the cells, and these were detected through the use of confocal laser scanning microscopy (CLSM). This method was applied to determine the presence of ammonia-oxidizing bacteria (AOB) and to quantify them in the reactors. The steps involved fixation of the samples, permeabilization and hybridisation with probes, and finally detection with the confocal laser scanning microscope (CLSM).
2.2.1 Paraformaldehyde Fixation and Permeabilization
Generally, the samples used for this technique had undergone short-term fixation, where absolute ethanol was added in a volume ratio of 1 sample : 1 ethanol in sterile universal bottles and stored at −20 °C. A 1 ml volume of the stored sample was transferred to a 1.5 ml eppendorf tube and centrifuged at 13000 x g for 3 minutes. The supernatant was removed and the sample was washed with phosphate buffered saline (PBS) by adding 1 ml of the solution, mixing using a vortex and centrifuging at 13000 x g for 3 minutes before removing the supernatant again.
The resulting pellet was resuspended in 0.25 ml PBS and 0.75 ml PFA fixative and vortexed.
A 4% paraformaldehyde fixative solution was prepared fresh for each use, the procedure for which is tabulated in Appendix 4.1. The suspension was incubated for at least 3 hours, or overnight, at 4 °C. After fixation, the cells were washed by centrifuging at 13000 x g for 3 minutes, removing the supernatant, adding 1 ml PBS and mixing. The samples were centrifuged again at 13000 x g for 3 minutes. The supernatant was removed and the sample was kept with PBS and absolute ethanol at 1:1 (v/v) and mixed. It was then stored at −20 °C.
Hybridization
A volume of 250 μl of fixed sample was centrifuged at 13000 x g for 3 minutes and the supernatant was removed. The sample was washed once by adding 1 ml PBS and centrifuged again. The sample was then divided into four tubes: a negative control containing no probe to observe autofluorescence, a negative control to observe non-specific binding events, a positive control where a universal eubacterial probe was added (Bact 338), and a sample to be hybridised by a specific AOB detection probe. The samples were serially dehydrated in successively increasing concentrations of molecular grade ethanol (60%, 80%, 100% v/v). After adding 1 ml of the ethanol solution, the sample was vortexed and left for 3 minutes. The sample was then centrifuged at 13000 x g for 3 minutes and the supernatant was removed.
The following step was to hybridize the samples. Hybridisation buffer (HB) was prepared according to Amann et al (1990). HB was added so that the final volume including the probe would be 40 μl. Thus, for the negative control for autofluorescence, 40 μl HB was added. For a hybridisation containing only one probe (2 μl), 38 μl HB was added. For a hybridisation containing two probes (2 + 2 μl), 36 μl HB was added. The samples were prehybridized for 15 minutes at the hybridisation temperature. After prehybridisation, 2 μl of probe (50 ng/μl) was added to the samples, which were then incubated at the optimal hybridisation temperature for the given probe (Table 1) for at least 4 hours (or overnight). Following hybridisation, the samples were centrifuged at 13000 x g for 3 minutes and the supernatant was removed. A volume of 0.5 ml of wash buffer was added and the sample was mixed using a pipette before being incubated for 15 minutes at the same temperature as the hybridisation step. The washing step was then repeated. The samples were centrifuged again at 13000 x g for 3 minutes, the supernatant was removed and 1 ml of MilliQ water was added. Finally, the samples were centrifuged, the supernatant removed and the samples resuspended in 100 μl MilliQ water.
A 10 μl aliquot of the sample was added to a gelatine-coated slide with Teflon-coated wells of a known diameter (Appendix 4.1) and allowed to dry in a hybridization oven at 30 °C. The sample spot on the slide was mounted in a small drop of the antifadent Citifluor AF1 (Canterbury, UK). A cover glass was sealed carefully on the top of the slide by applying clear nail varnish to the edges to prevent movement during microscopy. The slide was then stored at −20 °C in the dark until prepared for viewing.
Scanning on a confocal laser microscope
The distribution of hybridized cells was subsequently visualised by means of a Leica TCS SP2 UV confocal laser scanning microscope (CLSM) equipped with a Leica DMRXA microscope. Images were captured and processed using LCS V2.5.1040-1 software. For observation, ×60 NA 1.32 lenses were applied.
The CLSM was run in the following mode: single channel for fluorescein and double channel for carbocyanine-5 (Cy5). Fluorescein was detected using excitation at 488 nm and an emission filter in the range of 500-530 nm. Cy5 was detected using excitation at 633 nm and an emission filter of 650-680 nm. The artificial colours green and red were assigned to the monochrome images acquired in the fluorescein and Cy5 channels respectively. The LCS software actively mixed colours so that a cell emitting red and green (the AOB) would appear yellow. For each sample, only 5 fields of view were randomly recorded in view of the time and budget available for the process.
Enumeration technique
An Excel spreadsheet constructed by Coskunur (2000) was used to carry out the calculation of K, the average number of ammonia oxidizer microcolonies in one ml of sample, based on Equation 1, in which:
A1 = area of the sample spot (calculated from the diameter of the sample spot, [π(D/2)²])
A2 = area of one field of view
N = average number of ammonia oxidizer microcolonies per field of view
V = volume of sample applied
Vo = original volume of sample
ODF = other dilution factors not considered above that may be required (e.g. volume of sample spun down); where there is no ODF, the default value = 1.
The spreadsheet was designed for the quantification of the AOB population in wastewater treatment plants following FISH and quantification, typically using CLSM-produced images. It requires that the user inputs data concerning the number of AOB microcolonies, the shortest and longest diameters of the microcolonies, area measurements of the fields of view and sample spots, and the dilution factors used in FISH. The spreadsheet returns the average number of microcolonies and the geometric mean diameter. This data sheet can also be used to calculate the concentration of AOB in mg/l and the % AOB in terms of the total bacterial population (measured by volatile suspended solids, VSS), following an empirically determined conversion factor, in terms of total cell numbers.
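Because the published form of Equation 1 is in Coskunur (2000) and did not survive here, the sketch below is a plausible reconstruction from the variable definitions alone: the mean count per field of view is scaled to the whole sample spot by the area ratio A1/A2, divided by the applied volume V, and multiplied by any other dilution factor ODF. How Vo enters the original spreadsheet is not recoverable from the text, so it is deliberately left out rather than guessed at.

```python
import math

def aob_per_ml(n_fov, spot_diameter, fov_area, v_applied_ml, odf=1.0):
    """Hypothetical reconstruction of Equation 1 (exact form in Coskunur 2000).

    n_fov         N, average AOB microcolonies per field of view
    spot_diameter D, diameter of the sample spot (same length unit as fov_area)
    fov_area      A2, area of one field of view
    v_applied_ml  V, volume of sample applied to the spot, in ml
    odf           ODF, other dilution factors (default 1, as in the text)
    """
    a1 = math.pi * (spot_diameter / 2) ** 2   # A1 = pi (D/2)^2, the spot area
    per_spot = n_fov * a1 / fov_area          # microcolonies on the whole spot
    return per_spot * odf / v_applied_ml      # K, microcolonies per ml

# Example: 12 microcolonies/field, 6 mm spot, 0.04 mm^2 field, 10 µl applied
# K = aob_per_ml(12, 6.0, 0.04, 0.01)
# A further factor involving Vo (the original sample volume) would refer K
# back to the undiluted sample; its exact placement is an open assumption.
```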
Cluster size
The relative frequencies of AOB cluster diameters for all the samples investigated are presented in Fig. 2.
Fig. 2. Size distribution of cell clusters in the full- and partial-bed reactors
The results show that the majority of the clusters had diameters of 5 μm, with the largest being 10 μm. These findings are quite consistent with the results obtained by Kloep et al (2000). Using probe Nsm 156, the majority of the hybridized clusters was found to be smaller than 10 µm and only a few were larger than 15 µm. Wagner et al (1995) reported clusters hybridized with probe Neu 23 having diameters between 3 μm and 20 μm from samples of municipal sewage treatment plants. Nitrifier agglomerates are therefore small, for example well below those particle sizes (>100 μm) effectively removed by conventional primary sedimentation (Kiely 1998). Their retention in the system must therefore be mainly due to interactions with the biofilm attached to the media elements in the bed. By visual observation, yellow clusters emerged on all biofilm samples, as shown on Plates 1-4.
The AOB appear yellow due to double binding of the fluorescein-labelled probe EUB 338 (emitted as green) and the Cy5-labelled probe Nso 1225 (emitted as red). The formation of cluster growths is a feature of ammonia-oxidizing bacteria, in particular Nitrosomonas sp. (Wagner et al 1995; Mobarry et al 1996). The clusters were spherical to oval shaped and appeared over diameters ranging from approximately 2.5 to 12.5 μm.
Enumeration of ammonia-oxidizing bacteria
The number of AOB cells per ml of biomass was calculated from the counts based on cluster diameters using the Excel spreadsheet developed by Coskunur (2000). The higher number of AOB cells present in the biofilm samples than in the suspended growth samples could be due to the fact that AOB are slow-growing bacteria that need long mean solids retention times to become established. Nitrifying bacteria, when compared with the heterotrophic organisms, are very much slower growing. Watson et al (1989) observed that the doubling times of these bacteria range from 8 hours to several days and that they have a tendency to attach to surfaces and to grow in cell aggregates referred to as zoogloeae or cysts (Lipponen et al 2002). In order to maintain an effective population of nitrifying bacteria within a biological reactor, a long retention time is required (Barber and Stuckey 2000). This is in accordance with the results obtained by Hidaka et al (2003), who discovered that in a biofiltration process for the advanced treatment of sewage, attached biomass contributed most of the nitrification activity. Gerceker (2002) reported the loss of nitrification between SRTs of 0.9 and 2.4 days in a closely controlled jet-looped membrane bioreactor. Noguiera et al (2002) found that competition in biofilm results in a stratified biofilm structure, the fast-growing heterotrophic bacteria being drawn to the outer layers where both substrate concentration and detachment rate are high, whilst the slow-growing nitrifying bacteria stay deeper inside the biofilm. The heterotrophic layer has a positive effect on the nitrifiers by protecting them from detachment as long as the bulk oxygen concentration is high enough to preclude its depletion in the biofilm.
Biofilm is significant in maintaining long SRTs in a system. The full-bed reactor, which has a higher mass of biofilm than the partial-bed as a result of the greater volume and surface area of the fully packed reactor, had SRTs of 21.2, 27.5 and 11.1 days at the three backwashing rates used in the study. The partial-bed reactor, on the other hand, had much shorter SRTs of 3.3, 3.9 and 2.7 days. Meanwhile, the biofilm in the partial-bed reactor was kept thin and stable, and therefore was not easily washed out during the backwash operation. Therefore, the retention time of the biofilm in the partial-bed reactor is actually longer than the overall SRT of the system. Chuang et al (1997) pointed out that satisfactory nitrogen removal is achieved at SRT > 10 days.
The suspended growth biomass in the reactors, and especially that of the partial-bed reactor, was always subject to being washed out by the backwashing operation and lost in the effluent.
Significance of AOB Cells in the biofilm and suspended growth cultures
Tests carried out to compare the significance of AOB cells in the two types of culture were based on a nonparametric one-way ANOVA. Table 3 indicates that in both reactors there is a significant difference in the number of AOB cells between the biofilm and suspended growth samples. At the 95% confidence level, the p-value for the full-bed reactor is 0.042 whilst that of the partial-bed reactor is 0.024. Since the p-values obtained are smaller than 0.05, specific cell concentrations of AOB were significantly higher in the biofilm samples than in the suspended growth samples in both reactors. The AOB cells are thus more numerous in the biofilm samples than in the suspended growth samples of both the full-bed (p = 0.042) and the partial-bed (p = 0.024) reactors.
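The underlying replicate counts are not reproduced here, but the shape of the analysis is easy to sketch. The following Python fragment, with invented counts, runs the Kruskal-Wallis test from SciPy as the nonparametric analogue of a one-way ANOVA (with only two groups it is equivalent to a Mann-Whitney U test):

from scipy import stats

# Invented replicate AOB counts (cells/ml), for illustration only.
biofilm   = [4.1e5, 3.6e5, 5.0e5, 4.4e5]
suspended = [1.2e5, 1.6e5, 0.9e5, 1.4e5]

# Nonparametric one-way ANOVA (Kruskal-Wallis test)
h, p = stats.kruskal(biofilm, suspended)
print(f"H = {h:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Significant difference at the 95% confidence level")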
It is therefore interesting to compare the significance of the overall AOB cells in the full- and partial-bed configurations, knowing that the mass of biofilm is lower in the partial-bed reactor due to the reduced media volume compared to the full-bed reactor. Table 3 also indicates that there is no significant difference between the concentrations of AOB cells in the biofilm samples of the full- and partial-bed reactors (p = 0.099), nor in the suspended growth samples (p = 0.079). To put the overall abundance of AOB cells in the full- and partial-bed reactors side by side, the AOB cells in the biofilm and suspended growth samples for each reactor were combined, giving total concentrations of AOB cells for that particular configuration. The p-value of specific AOB concentrations comparing the full- and partial-bed configurations is p = 0.427. This value indicates an almost comparable AOB relative abundance in the full- and partial-bed reactors. The higher mean AOB cell count of the biofilm in the partial-bed reactor is offset by the higher mean value of the suspended growth samples in the full-bed reactor, resulting in almost equivalent mean AOB cell counts in both reactors. Lazarova et al (1994) made the point that the balance between biofilm losses and growth processes on the outside of the media is dominated by shear forces, exerted by the liquid as it flows past the media surfaces in the reactor. In a study to evaluate the essential role of hydrodynamic shear force in the formation of biofilm, Liu and Tay (2002) pointed out that biofilm density increases quasi-linearly with increasing shear stress. Chang et al (1991) discovered that the medium concentration and the turbulence, indicated by Reynolds numbers, significantly affected the biofilm density and thickness of a fluidized bed biofilm reactor. In this type of reactor, increasing medium concentration can be associated with increasing attrition due to particle-to-particle contacts, and increasing turbulence correlates with flow fluctuations that could create forces normal to the biofilm, i.e. shear stress (Chang et al 1991). In this study, since the medium is fixed, there is no attrition effect; turbulence could therefore be the major factor that increases the detachment pressures and causes the biofilm to become denser and thinner. The highest percentage of AOB was found in a sample from the middle of the full-bed reactor (0.0829%), followed by the top part (0.0295%), whilst very little was found in the bottom part (0.0216%). A low percentage of AOB was obtained at the bottom despite the fact that the substrate and oxygen sources were supplied from there. This anomaly can best be explained by competition between heterotrophic and nitrifying bacteria for substrates (oxygen and ammonia) and space in the biofilms, which resulted in the fast-growing heterotrophic bacteria dominating the bottom part of the reactor. Plate 8, a biofilm sample from the bottom of the full-bed reactor, shows that the AOB clusters are not as dense as in Plates 1-2 from the top and middle positions.
Plate 8. CLSM image of a biofilm sample from the bottom of the full-bed reactor

The trend of AOB growth in the biofilm samples of the full-bed reactor was followed through for the partial-bed reactor (Fig. 4): the same argument of competition for substrates and space between heterotrophic bacteria and nitrifiers explains the lower percentage of AOB obtained in the middle (0.1019%) compared to the top part of the partial-bed reactor (0.2151%).
To test the hypothesis made on AOB distribution in both the full- and partial-bed reactors, a previous work by Wijeyekoon et al (2000) was used to investigate the effect of organic loading rates on nitrification activity. The three reactors, packed with the same weights of anthracite, were equipped with sampling ports at depths of 6 cm (port 1), 18.5 cm (port 2) and 37.5 cm (port 3) from the top end of the filters. The specific rate of NH4+-N oxidation in the reactors was determined on the biomass extracted from those ports. It was discovered that the highest rates in filters A and B were obtained at the effluent ends of the reactors, but in filter C the rates were comparably high at all ports. Also, among the three reactors, filter C produced the highest rates, with averages of 48.1 and 56.4 g N/(mg protein.hr) for ports 1 and 2 respectively. The conclusion derived from the study is that at high organic carbon loadings nitrifiers are non-uniformly distributed along the length of a filter, with excessive growth of heterotrophs near the feed end and nitrifiers at the effluent end under the influence of comparatively higher organic loading. Meanwhile, at low organic loadings, the heterotrophs and autotrophs can coexist. Filter C had the lowest organic carbon loading and consequently the lowest biomass density. Therefore, the nitrifiers in filter C may have experienced less competitive pressure from the faster-growing heterotrophic organisms for oxygen and space. The displacement of the nitrifying population by the heterotrophs is caused by the varying ratio of carbon and nitrogen entering the reactor.
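As an aside, the specific oxidation rate used here is simply the mass of NH4+-N oxidized per unit of biomass protein per unit time. A minimal sketch of that arithmetic follows, with invented inputs; the units chosen for the example (µg N per mg protein per hour) are an assumption, since they cannot be verified from the text alone.

def specific_oxidation_rate(n_oxidized_ug: float,
                            protein_mg: float,
                            hours: float) -> float:
    """Specific NH4+-N oxidation rate, here in ug N/(mg protein.hr):
    rate = mass of NH4+-N oxidized / (biomass protein x incubation time)."""
    return n_oxidized_ug / (protein_mg * hours)

# Invented example: 120 ug N oxidized by 1.2 mg of protein over 2 hours
print(specific_oxidation_rate(120.0, 1.2, 2.0))  # 50.0 ug N/(mg protein.hr)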
The carbon loading used in this part of the study, 5.71 ± 0.16 kg COD/m³.d, was much higher than the loadings used by Wijeyekoon (Table 9.4), and therefore the nitrifiers were not only displaced further away from the feed source, but also buried deeper into the biofilm (Ohashi et al 1995). Fdz-Polanco et al (2000) also observed that as the amount of organic carbon entering the filter increases, the nitrification activity is displaced to the upper part of the filter in an upflow process. Quyang et al (2000) likewise argued that the differences in biological activity at different filter heights were due to their varying loadings.
Rowan et al (personal communication) also investigated the percentage of AOB in a full-scale BAF plant treating municipal wastewater and obtained a value of 0.65%. This value is almost three times higher than the highest percentage obtained in this study (0.2151%, from Fig. 4). The difference in values could be attributed to a number of factors, including carbon loading, nitrogen loading, pH, DO, media type and size, direction of flow, and the backwashing regime, and thus the mean SRT and biofilm attachment characteristics.
Conclusion
The extent of comparable nitrogen removal in the two reactor configurations needed further microbiological evidence, specifically on the existence of AOB. The formation of a dense biofilm as a result of higher turbulence would account for the higher number of AOB cells enumerated in the biofilm samples from the partial-bed reactor (4.259 × 10⁵ ± 1.881 × 10⁵ AOB cells/ml sample) as compared to those from the full-bed reactor (1.523 × 10⁵ ± 7.979 × 10⁴ AOB cells/ml sample). Although biomass was washed out in the treated effluent and during the backwash operation, the SRT at the high organic loading of 5.71 ± 0.16 kg COD/m³.d was still maintained at 4.2 days for the partial-bed reactor and 7.6 days for the full-bed reactor. These SRTs are still longer than the limit noted by Sastry et al (1999), who claimed that a mean cell residence time > 3 days is desirable for nitrifiers to reach a stable population for effective nitrification, and by Gerçeker (2002), who recorded a loss of nitrification below SRTs of 2.5-2.7 days at an OLR of 5 kg COD/m³.d and a temperature of 25 °C.
Acknowledgement
This chapter of the book could not have been written without the help of my PhD supervisor, Prof Tom Donnelly, who not only served as my supervisor but also encouraged and challenged me throughout my academic program. He and the other faculty members, Dr Davenport and Dr Joana of the University of Newcastle upon Tyne, guided me through the process, never accepting less than my best efforts. I thank them all. Last but not least, I thank the Government of Malaysia for sponsoring my study.
Fig. 1. Sampling locations for biofilm and suspended growth biomass along the reactor's height
Plate 1. CLSM image of a biofilm sample from the top of the full-bed reactor
Plate 2. CLSM image of a biofilm sample from the middle of the full-bed reactor
Plate 3. CLSM image of a biofilm from the top of the partial-bed reactor
Plate 4. CLSM image of a biofilm from the middle of the partial-bed reactor

Plates 5-7 of suspended growth samples from the full-bed reactor show fewer AOB clusters than Plates 1-4. Layers of filamentous bacteria can be seen dominating, especially in the suspended biomass samples from the top and middle parts of the reactors. In the CLSM images of the suspended growth biomass samples from the partial-bed reactor, intense diffuse, green-coloured fluorescence was often observed. This could have been due to debris, inorganic particles or the bacterial cells. A large number of coccoid structures was detected using the EUB 338 probe. They usually occurred in characteristic clumps and appeared ring shaped. MacDonald and Brozel (2000) observed the same phenomena in their study of bacterial biofilms in a simulated recirculating cooling-water reactor and suggested that this could result from dense chromosomal material at the cell center, leading to a concentration of ribosomes at the periphery of the cells.
Plate 5. CLSM image of suspended growth biomass from the top of the full-bed reactor
Plate 6. CLSM image of suspended growth biomass from the middle of the full-bed reactor
Plate 7. CLSM image of suspended growth biomass from the bottom of the full-bed reactor
Fig. 3. Percentage values of AOB in the biofilm samples of the full-bed reactor
Fig. 4. Percentage values of AOB in the biofilm samples of the partial-bed reactor
Table 1. Features and conditions of probes during hybridisation
Table 2. Number of AOB cells per ml of biomass in the biofilm and suspended growth samples
Table 3. Results of variance analysis of AOB cells (no. AOB cells/ml sample) in the biofilm and suspended growth samples
Table 4. Measured and calculated values for experimental runs with the fluidised bed biofilm reactor | 2017-09-12T20:32:51.172Z | 2011-09-09T00:00:00.000 | {
"year": 2011,
"sha1": "3eac063fbbcf2fdd610cb97d50f8572789801b79",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.intechopen.com/citation-pdf-url/19068",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "56acb38c4e6dc83683b7000167110fba2c495d0f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
8052844 | pes2o/s2orc | v3-fos-license | Antimicrobial Host Defensins – Specific Antibiotic Activities and Innate Defense Modulation
Current treatment of bacterial and fungal infections heavily relies on strategies which aim to inhibit and kill pathogens with high specificity. These strategies are very successful and antibiotics have contributed to increasing human life expectancy more than any other class of therapeutic drugs. However, antibiotics are losing efficacy as a result of high selection pressure and rapid resistance development. Thus, strategies that rely on boosting natural host defenses are gaining more attention, since compounds targeting host mechanisms should control infections regardless of the antibiotic resistance levels of pathogens. Antimicrobial peptides (AMPs) are considered ideal candidates for such novel anti-infective strategies since they combine direct antibiotic activities with modulation of immune responses (Figure 1). However, AMPs frequently lack specific molecular targets and tend to have membrane-disruptive activities, bearing risks of cytotoxicity. For anti-infective drug development, AMPs should ideally inhibit specific microbial targets without impacting on membranes; peptides with such properties were recently identified in a large subfamily of AMPs, the defensins.
All multicellular organisms produce AMPs to protect surfaces and tissues from invading pathogens. These peptides have been referred to as AMPs and more recently as host defense peptides (HDPs). HDPs are ancient effector molecules of innate immunity with multiple functions. They do not share specific sequence similarities, but can be generally defined as amphipathic cationic peptides consisting of 12-50 amino acids. They are either linear (e.g., LL-37, magainin, and indolicidin) or have tertiary structures stabilized by disulfide bonds (Hancock and Lehrer, 1998; Shai, 2002; Zasloff, 2002). Defensins sensu stricto belong to the latter class and were first isolated from mammals, and subsequently also found in invertebrates and plants.
Plant, fungal, and invertebrate defensins share a common structural motif, the cysteine-stabilized αβ-motif, which is composed of an α-helix linked to an antiparallel β-sheet with three or four disulfide bonds; they display either antifungal or antibacterial activity. Recently, it has been demonstrated that antibacterial defensins of fungi and invertebrates bind with high affinity to the bacterial cell wall precursor lipid II. They form an equimolar stoichiometric complex with lipid II, thereby inhibiting the incorporation of the cell wall building block into the nascent peptidoglycan network (Schmitt et al., 2010; Schneider et al., 2010). NMR-based modeling of the plectasin-lipid II complex indicated that the fungal defensin interacts with the pyrophosphate moiety of lipid II by forming four hydrogen bonds (involving residues F2, G3, C4, and C37). Additionally, a salt bridge between the N-terminus (His18) and the D-glutamic acid in position 2 of the lipid II stem peptide is important for binding. Interestingly, the amino acid residues involved in the lipid II binding of plectasin are also present in many other fungal and invertebrate defensins, suggesting a conserved lipid II binding motif.
Cell wall biosynthesis is a prominent target of clinically used antibiotics. For example, the glycopeptide vancomycin, a last-resort antibiotic for treatment of infections with multi-resistant Gram-positive bacteria, binds to the D-alanyl-D-alanine terminus of the lipid II pentapeptide.
However, cross-resistance between vancomycin and plectasin could not be observed, and the presence of D-alanine-D-lactate found in vancomycin-resistant bacteria did not affect the activity of plectasin. In general, only modest resistance development toward HDPs has been observed under in vitro selection pressure (Zhang et al., 2005). The lipid II isoprenoid anchor (C55-P) is also involved in the biosynthesis of other major cell envelope polymers (e.g., wall teichoic acid, capsules). Synthesis of C55-P-anchored molecules always starts with the transfer of a sugar moiety to the lipid carrier, forming a pyrophosphate linkage. This structural motif is highly conserved, as it is part of several essential building blocks and therefore cannot be easily modified to confer resistance.
The antifungal action of plant and invertebrate defensins also appears to be highly specific and is based on interaction with particular sphingolipids in membranes and cell walls of susceptible fungi. For example, the interaction of RsAFP2 (from radish seeds) with fungal glucosylceramides causes the production of radical oxygen species and apoptosis as well as cell wall stress, septin delocalization, and ceramide accumulation (Thevissen et al., 2012). Other plant defensins such as DmAMP1 (from Dahlia merckii) bind specifically to inositol phosphoryl-containing sphingolipids, leading to membrane permeabilization and ion efflux (Thevissen et al., 1996, 2003). In contrast, the activity of vertebrate defensins may be of intermediate specificity for microbial targets, with a broader activity spectrum. Vertebrate defensins comprise three subfamilies, α-, β-, and θ-defensins, which differ in their pairing of the six conserved cysteine residues. They are composed of three antiparallel β-sheets and exhibit a broad-spectrum activity against Gram-positive and Gram-negative bacteria, fungi, and some enveloped viruses. α-Defensins have been isolated from the granules of neutrophils and small intestinal Paneth cells, whereas β-defensins are mainly expressed in epithelial tissues. The cyclized θ-defensins are found exclusively in leukocytes and bone marrow of Old World monkeys and arose from a pre-existing α-defensin (Ganz, 2003; Schneider et al., 2005). Lipid II binding has also been reported for the vertebrate α-defensin human neutrophil peptide 1 (HNP1) and human β-defensin 3 (hBD3). However, the affinity of HNP1 for the cell wall precursor is significantly lower than that of the fungal peptide plectasin (plectasin-lipid II: 1.8 × 10⁻⁷ M; HNP1-lipid II: 2.19 × 10⁻⁶ M; de Leeuw et al., 2010; Sass et al., 2010; Schneider et al., 2010). Besides lipid II sequestration, hBD3 additionally seems to have more generalized effects on membrane-bound processes such as electron transport (Sass et al., 2008). These findings indicate that the specificity of lipid II binding correlates to some extent with the antimicrobial spectrum. Defensins with high affinity for lipid II may have evolved to act mainly against Gram-positive bacteria, whereas defensins with lower lipid II affinity may have retained the capacity to interact with additional targets and therefore have a broader antimicrobial spectrum, including Gram-negative bacteria or fungi.
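To get a feel for what these dissociation constants imply, one can compare equilibrium target occupancies under a simple 1:1 binding model. The short Python sketch below does this; the binding model and the peptide concentration are chosen purely for illustration and are not taken from the cited studies.

def fraction_bound(peptide_conc_m: float, kd_m: float) -> float:
    """Equilibrium fraction of lipid II bound, assuming simple 1:1 binding
    with peptide in excess: f = [P] / (Kd + [P])."""
    return peptide_conc_m / (kd_m + peptide_conc_m)

conc = 5e-7  # illustrative free peptide concentration (0.5 uM)
print(f"plectasin (Kd = 1.8e-7 M): {fraction_bound(conc, 1.8e-7):.0%} bound")
print(f"HNP1 (Kd = 2.19e-6 M):     {fraction_bound(conc, 2.19e-6):.0%} bound")

At this illustrative concentration, the roughly tenfold difference in Kd translates into about 74% versus 19% occupancy, which is one way to picture why the lower-affinity peptide may rely on additional targets.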
The combination of highly targeted antimicrobial activity with the capacity to positively modulate the immune response is highly attractive as an anti-infective strategy. Mammalian HDPs are expressed either constitutively or inducibly in various tissues and cell types, including immune cells like neutrophils or macrophages, as well as keratinocytes and epithelial cells. The expression of these peptides is triggered by conserved microbial structures [lipopolysaccharide (LPS), lipoteichoic acid, CpG oligonucleotides; via Toll-like receptors (TLRs)] or inflammatory effectors such as cytokines (TNF-α, IL-1β; Zasloff, 2002; Lehrer, 2004; Brown and Hancock, 2006). HDPs have been demonstrated to provide an important link between innate and adaptive immune responses, acting as both pro- and anti-inflammatory mediators. They enhance beneficial immune responses and dampen harmful ones, enabling the host to control infections. HDPs modulate the expression of hundreds of genes in immune cells and epithelia, influencing processes like maturation of immune cells, cross-regulation of cytokines/chemokines, wound healing, and angiogenesis. The α-defensins HNP1-3, which are released by tissue-invading granulocytes, have been shown to trigger secretion of TNF-α and IFN-γ from macrophages. The cytokine release stimulates the phagocytotic macrophage activity via an autocrine loop, thereby enhancing clearance of opsonized bacteria, as observed in vitro and in a murine model (Soehnlein et al., 2008). The β-defensin hBD3 activates professional antigen-presenting cells (monocytes, dendritic cells) via TLRs 1 and 2 and thereby stimulates adaptive immune responses (Funderburg et al., 2007). Various defensins recruit immune cells by direct binding to chemokine receptors (CCRs). α-Defensins, for example, enhance the migration of T-cells, while β-defensins exhibit chemoattractant functions for immature dendritic cells, monocytes/macrophages, and mast cells (Yang et al., 2000; Niyonsaba et al., 2002; McDermott, 2004). Furthermore, defensins dampen endotoxin-induced secretion of proinflammatory cytokines by neutralization of extracellular LPS as well as modulation of intracellular signaling pathways (Scott et al., 2002; Mookherjee et al., 2006). Defensins aid in wound healing not only by direct killing of pathogens and boosting of host defense mechanisms, but moreover through stimulation of processes involved in tissue organization. hBD1-4 have been shown to enhance human keratinocyte migration and proliferation through epidermal growth factor receptor signaling (Niyonsaba et al., 2007). Gene transfer and exogenous expression of hBD3 accelerated closure of infected diabetic wounds in a porcine model, suggesting a therapeutic potential for defensins in wound healing.
Bacterial peptides sharing the overall features of HDPs, i.e., cationic amphiphilicity, such as gramicidin S and polymyxin B, have been used in clinics as topical agents. In contrast, no AMP of eukaryotic origin has so far been approved for the treatment of patients. In clinical phase III studies, the HDP-derivatives pexiganan (from Xenopus laevis magainin) and iseganan (from porcine protegrin-1) have been shown effective in the prevention of diabetic foot ulcer and irradiation-induced oral mucositis, respectively (Trotti et al., 2004; Lipsky et al., 2008). Nevertheless, these substances were not approved for medical use by the US Food and Drug Administration.

The cationic amphiphilic and peptidic nature of AMPs is often considered unfavorable for development of systemic drugs. However, protease lability, contributing to low serum half-life, may be overcome by different approaches, including the use of peptidomimetics, peptides composed of unusual or D-amino acids (instead of natural L-amino acids), and formulation (e.g., in liposomes; Oren et al., 1997; McPhee et al., 2005). Peptides based on defensin templates have not been investigated in clinical studies so far. Defensins are more protease-resistant due to their disulfide-stabilized structure (Wu et al., 2003; Maemoto et al., 2004), and therefore can have a higher serum half-life than the other HDPs mentioned above; e.g., plectasin and its improved derivative NZ2114 showed potent activity in animal models, enhanced serum stability, and extended in vivo half-life (Andes et al., 2009). The plectasin example also demonstrates that difficulties associated with high-yield production of defensins and with correct cysteine pairing can be solved. The use of chemically modified prodrugs could also improve pharmacokinetics and/or lower toxicity, as in the case of the parenteral antibiotic colistin (methane sulfonate derivative of polymyxin E; Falagas and Kasiakou, 2006).

Antimicrobial mechanisms based on defined target molecules such as lipid II reduce the risk of unspecific membrane disruption and cytotoxic activities, although HDPs clearly have some specificity for microbial membranes; eukaryotic membranes may be less susceptible due to the absence of anionic lipids on the lipid bilayer surface, the lack of a strong membrane potential gradient and the presence of cholesterol (Hancock and Sahl, 2006). However, it cannot be ignored that certain HDPs display potentially harmful effects like degranulation of mast cells and enhancement of apoptosis (Niyonsaba et al., 2001; Barlow et al., 2006). It has been reported that hBD3 promotes the proliferation of oral carcinoma and osteosarcoma cells, acting as a potential proto-oncogene (Kesting et al., 2009; Kraus et al., 2012). HDPs are reminiscent of peptides with nuclear localization signals and many peptides can migrate into the cell core; the cathelicidin LL-37 was even demonstrated to have nuclear translocation ability regarding DNA plasmids (Sandgren et al., 2004). Thus, it is obvious that such activities need to be extensively studied and taken into account for any drug development program.

The increasing knowledge of the importance of immunomodulatory HDP functions has led to the synthesis of so-called innate defense regulator peptides (IDRs; Easton et al., 2009). These are small synthetic peptides derived from HDP templates, which were designed to selectively modulate the innate immune system without the detrimental activities displayed by certain natural HDPs (see above). Several recent studies focused on cathelicidin-derived IDRs (Choi et al., 2012). The synthetic peptides IDR-1 and IDR-1002 (from bovine bactenecin), despite lacking direct antimicrobial activity, were shown to confer protection against systemic bacterial infection in mouse models challenged with methicillin-resistant S. aureus and vancomycin-resistant enterococci. Notably, these IDRs combine anti-infective and anti-inflammatory properties. IDR-1 and IDR-1002 contribute to bacterial clearance by inducing chemokine secretion and enhancing leukocyte recruitment. Moreover, the peptides suppress the induction of several proinflammatory cytokines, thereby dampening immune-mediated inflammation and preventing tissue damage (Scott et al., 2007; Nijnik et al., 2010; Wieczorek et al., 2010; Turner-Brannen et al., 2011). IMX-942, which is based on IDR-1, is being tested for its ability to help combat nosocomial infections in immune-suppressed cancer patients, and has recently completed clinical phase I trials. The HLA-I-derived decapeptide RDP58 inhibits the synthesis of proinflammatory cytokines like TNF-α, IL-2, IL-12, and IFN-γ by interfering with MyD88 signaling (Travis et al., 2005). RDP58 has proven safety and efficacy in clinical phase II studies with inflammatory bowel disease patients.

Various other synthetic HDPs are in clinical phase I or II trials, which do not only aim at exploiting the direct antimicrobial features of these peptides, but also their ability to modulate the human immune system (Yeung et al., 2011).

Taken together, it appears a most promising approach to design future anti-infective drugs that target host defenses, possibly combined with targeted antibiotic activities, even more so since classic antibiotics such as macrolides also appear to have immunomodulatory properties (Tauber and Nau, 2008). On the other hand, it is obvious that for systematic exploitation of this concept we need to know more about both the molecular mechanisms underlying the immune modulation and the specific, targeted antibiotic activities of HDPs; it would be rather surprising if these occurred only with defensins. | 2016-05-18T15:17:44.209Z | 2012-08-14T00:00:00.000 | {
"year": 2012,
"sha1": "7406f9e37b9c9ddd78348fb55f92271d579743ee",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2012.00249/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7406f9e37b9c9ddd78348fb55f92271d579743ee",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
3857137 | pes2o/s2orc | v3-fos-license | Priorities for family building among patients and partners seeking treatment for infertility
Background Infertility treatment decisions require people to balance multiple priorities. Within couples, partners must also negotiate priorities with one another. In this study, we assessed the family-building priorities of couples prior to their first consultations with a reproductive specialist. Methods Participants were couples who had upcoming first consultations with a reproductive specialist (N = 59 couples (59 women; 59 men)). Prior to the consultation, couples separately completed the Family-Building Priorities Tool, which tasked them with ranking 10 factors associated with family building from least to most important. We describe the highest (top three) and lowest (bottom three) priorities, the alignment of priorities within couples, and test for differences in prioritization between men and women within couples (Wilcoxon signed rank test). Results Maintaining a close and satisfying relationship with one’s partner was ranked as a high priority by majorities of men and women, and in 25% of couples, both partners ranked this factor as their most important priority for family building. Majorities of men and women also ranked building a family in a way that does not make infertility obvious to others as a low priority, and in 27% of couples, both partners ranked this factor as the least important priority for family building. There were also differences within couples that involved either men or women ranking a particular goal more highly than their partners. More women ranked two factors higher than did their partners: 1) that I become a parent one way or another (p = 0.015) and 2) that I have a child in the next year or two (p < 0.001), whereas more men ranked four factors higher than their partners: 1) that our child has [woman’s] genes (p = 0.025), 2) that our child has [man’s] genes (p < 0.001), 3) that I maintain a close relationship with my partner (p = 0.034), and 4) that I avoid side effects from treatment (p < 0.001). Conclusions Clinicians who support patients in assessing available family-building paths should be aware that: (1) patients balance multiple priorities as a part of, or beside, becoming a parent; and (2) patients and their partners may not be aligned in their prioritization of achieving parenthood. For infertility patients who are in relationships, clinicians should encourage the active participation of both partners as well as frank discussions about each partner’s priorities for building their family.
Plain English summary
Many couples who are unable to conceive a baby seek medical advice from infertility specialists. Even with that guidance, couples face difficult choices as they try to build their families. While we know that they want to have a baby, researchers know very little about how couples balance other priorities that could influence their decisions about whether to pursue treatment and what treatment will best meet their goals.
To learn more about couples' priorities, we created a list of 10 factors related to family-building decisions. We recruited 118 people (59 couples) who planned to see a reproductive specialist and asked them to each separately rank the importance of the 10 factors. Then we looked for similarities and differences in the priorities of men and women within couples.
We found that there were differences between men and women within couples for six of the 10 factors: becoming a parent one way or another; passing on the woman's genes; passing on the man's genes; having a child within a year or two; maintaining a close relationship; and avoiding treatment side effects. For two factors, partners in >25% of couples ranked the factors exactly the same: maintaining a close relationship (highest priority) and building a family in a way that doesn't make infertility obvious to others (lowest priority).
We recommend that infertility specialists be aware that the couples they treat are balancing many priorities, that partners may not agree about how to balance those priorities, and that specialists should counsel them accordingly.
Background
While medical decision making is often difficult, several features of medical treatment for infertility make these decisions especially challenging. For example, because health insurers are not mandated to cover infertility treatment in 35 states of the United States (US), cost is thought to be a major consideration for most Americans considering infertility care [1,2]. Because of uncertainty about whether any particular treatment will ultimately lead to a live birth, the upfront cost raises the stakes of treatment decisions for couples: a decision to invest in one path may limit the resources available to pursue other options if a treatment is unsuccessful. Other factors must also be weighed, such as the importance of a genetic connection to a future child, experiencing pregnancy and childbirth, and the potential for treatment side effects for the parent or child. Because various family-building paths are associated with these factors in different ways, the relative value a hopeful parent places on any given priority may point toward some paths while excluding others.
Additionally, treatment-related decisions about infertility necessarily involve more than one actor. Even when a couple is in agreement about seeking care to start a family, partners may not agree about where to set limits in terms of financial outlay or time invested, how to prioritize genetic parentage, or what treatment-related risks are acceptable. The question of how couples reach joint decisions is one that has been studied extensively and from a variety of perspectives, including game-theoretic [3][4][5], social-psychological [6], and sociological [7][8][9][10][11]. Extensive applications exist focusing on topics from relocation decisions among two-earner couples [12], to consumer behavior [13], to contraceptive use [14], and sexual relations [15].
Yet despite the potential for patients' relative valuation of family-building priorities to affect infertility treatment decisions, little research literature addresses this topic. Previous research has examined a related concept of couples' motivations and goals for childbearing and parenting with attention to the impact of infertility. In a study of 214 couples, Miller et al. found that infertile couples considering the use of assisted reproductive technology (ART) were more highly motivated by perceived positive aspects of parenthood and less concerned with perceived negative aspects of parenthood compared to couples with no known fertility problems [16]. Langdridge et al. compared parenthood motivations among 10 pregnant couples with no known fertility issues, 10 couples with infertility who were pursuing in vitro fertilization, and 10 couples with infertility who were pursuing donor insemination [17]. The three groups were more similar than different in terms of their reasons for pursuing parenthood, with respondents overwhelmingly endorsing a core "triad" of reasons to pursue parenthood: giving love, receiving love, and added enjoyment/fun in life. A phenomenological analysis of three couples over six months after beginning treatment with in vitro fertilization found that couples balanced their main goal of achieving parenthood with four other goals: biological parenthood, retaining emotional wellbeing, remaining financially secure, and maintaining good relationships with partners [18]. Finally, Thompson et al. found that in 37 couples seeking infertility treatment, both partners reported placing similar levels of importance on reaching the goal of parenthood [19]. These previous studies focused on general motivations for becoming a parent; to our knowledge no existing studies have examined specific factors related to achieving parenthood for men and women who are currently experiencing infertility. This is important since couples who are experiencing infertility may have individual values and preferences but must make joint decisions in the context of finite time and resources for family building.
Our objectives in this study were to describe how men and women in the early stages of seeking medical treatment for infertility prioritize different factors related to infertility decision making and to test for differences in priorities between partners within couples. We also present a novel tool to help individuals consider their priorities; the tool may also be useful for facilitating discussions about priorities with partners and providers.
Methods
A convenience sample of new patients at a Reproductive Medicine Center affiliated with a large academic medical center in suburban Milwaukee, Wisconsin was recruited between May of 2013 and June of 2014. Letters detailing the research study were mailed to 613 patients who had first consultations scheduled at least one week in the future with a reproductive specialist (RS), specifically, either a reproductive endocrinologist and infertility specialist or a fellowship-trained reproductive urologist. Because of the short window of time to recruit to the study before the first appointment, people were only invited to participate once; no follow-up attempts were made. After receiving the letter, 155 patients contacted the study team to learn more about the study. We wanted to understand the experiences of couples who were naïve to specialty treatments for infertility, thus additional inclusion criteria included not having previously had a child using any ART, not having previously tried IVF, and the ability to provide data before the first appointment with the RS. One hundred eleven people met these criteria, and 92 patients and 68 of their partners enrolled in the study. For this analysis we included the 59 opposite-sex couples for whom we had data on the Family-Building Priorities Tool. All participants provided informed consent. The study was approved by the Medical College of Wisconsin/Froedtert Hospital Institutional Review Board.
The Family-Building Priorities Tool
We were unable to find an extant tool to assess family-building priorities for people experiencing infertility. Thus we created the Family-Building Priorities Tool (Table 1; the wording of one item varied depending on the respondent's role in the couple: women were presented with the item as worded in the table, whereas the wording for men was slightly adjusted to "That my partner gets to be the person who is pregnant and gives birth to my child"). The development process is shown in Fig. 1. Available family-building options for couples experiencing infertility require trade-offs, so we wanted to assess the relative weight that individuals experiencing infertility place on different factors rather than asking them to rate how important each one is. The Tool instructs individuals to rank factors in order of importance from 1 to 10. Conceptually these priorities are not meant to represent a single construct or latent variable; as such, psychometric evaluation looking at internal consistency, reliability, or factor structure was not appropriate. We developed and evaluated the validity of the Tool as follows. First, we developed a list of candidate priorities after a review of the scientific literature [16,17,20,21] and popular infertility resources [22] and in consultation with the physicians and patients experienced with ART who were part of the study team. This process resulted in a prototype Tool with evidence for content validity. We then evaluated the prototype Tool using cognitive interviews. Cognitive interviews apply techniques from cognitive theory to systematically evaluate and revise questionnaire items through intensive verbal probing [23]. We conducted a total of 17 interviews with ten women and seven men recruited from the same Reproductive Medicine Center described above, but we specifically targeted individuals at all stages of the process of infertility decision making. During the cognitive interviews we asked participants to complete the Tool. Then we examined the instructions and each priority in turn, asking participants to rephrase priorities in their own words to evaluate comprehension, to add any priorities to the list that they thought were missing, and to share any other thoughts or ideas that came to mind while examining the Tool. Cognitive interviews were conducted iteratively, that is, after revisions were made to the Tool we retested the revised version in additional interviews. These cognitive interviews provided evidence for face validity of the Tool.
Data collection and analysis
Each participant completed a self-administered questionnaire using REDCap [24] prior to the first scheduled consultation with the RS (median three days; interquartile range = one to six days).
We compared sociodemographic and self-reported health characteristics between women and men accounting for the non-independence of the samples using Wilcoxon's signed-rank test (ordinal variables), McNemar's and extended McNemar's tests (categorical variables), and paired t-tests (continuous variables). In order to broadly summarize which factors were most and least important to participants, we describe the percentages of men and women who identified each of the 10 factors as a high (top three) or as a low (bottom three) priority, and within the groups of men and women, we also identify factors for which a sizeable proportion (we chose 25%) designated it as a high priority and at least as many designated it as a low priority. Given the non-independence of this data, we do not test for differences by women and men as groups. For each factor we show the percentage of couples who ranked it identically, and we used the Wilcoxon signed-rank test to assess whether male and female partners within couples ranked each factor differently from one another. We considered a two-tailed α level of 0.05 significant. Extended McNemar's tests were performed using SAS version 9.4 (SAS Institute, Cary, NC). All other analyses were performed using Stata 14 (Stata Corp., College Station, TX).
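As a concrete illustration of the paired analyses described above, the short SciPy sketch below runs a Wilcoxon signed-rank test and computes the within-couple alignment share; the ranks are fabricated for demonstration and are not study data.

from scipy import stats

# Fabricated ranks (1 = most important), one couple per index; real data differ.
women_rank = [1, 2, 1, 3, 2, 1, 4, 2, 1, 3]
men_rank   = [3, 4, 2, 5, 3, 2, 6, 3, 2, 5]

# Wilcoxon signed-rank test for paired ordinal data (non-independent samples)
stat, p = stats.wilcoxon(women_rank, men_rank)
print(f"W = {stat:.1f}, p = {p:.3f}")

# Share of couples in which both partners assigned the identical rank
aligned = sum(w == m for w, m in zip(women_rank, men_rank)) / len(women_rank)
print(f"identical ranking in {aligned:.0%} of couples")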
Sample characteristics
Over half of women in the sample were less than 35 years of age at the time of their first scheduled consultation, and most identified as white, non-Hispanic, had at least a bachelor's degree, were employed full-time, and did not have biological children (Table 2). Men in the sample were somewhat older than their partners, but like the women, most were white, non-Hispanic, employed full-time, with no biological children. Men had somewhat less education and higher personal annual incomes than their female partners. Both the women and men in the sample had health-related quality of life scores that were at or better than the US average, as measured by the PROMIS system (4-item short forms for each domain), on which a score of 50 corresponds to the average for US adults with standard deviation (SD) of 10, and higher scores correspond to more of that domain [25,26].
High and low family-building priorities

A majority of women ranked having a child in the next year or two, becoming a parent one way or another, and maintaining a close relationship with one's partner as high priorities, and building a family in a way that doesn't make infertility obvious to others and avoiding side effects from treatment as low priorities (Fig. 2). A majority of men ranked maintaining a close relationship with one's partner as a high priority and building a family in a way that doesn't make infertility obvious to others as a low priority. The importance of other factors proved to be more polarizing within each role, that is, at least one quarter of the group ranked the factor as a high priority while at least as many ranked the factor as a low priority. For women, cost was the single polarizing factor, and for men, the polarizing factor was becoming a parent one way or another.

Table 3 shows the alignment of priorities within couples. The results are similar to those observed in Fig. 2. Partners in nearly one third of couples identically ranked the importance of building a family in a way that doesn't make infertility obvious to others; in 27% of couples, both partners ranked this as their least important factor. Twenty-nine percent of couples were aligned about the importance of maintaining a close relationship with one another; in 25% of couples, both partners ranked this as the most important of the 10 factors. Two additional factors showed alignment in nearly one fifth of couples, namely that the woman gets to be the person who is pregnant and gives birth to a child and that the child has [man's] genes.
Differences in family-building priorities within couples
Within couples, men and women differed significantly in their prioritization of six of the factors: (1) becoming a parent one way or another was a higher priority for women compared to their partners (p = 0.015); (2-3) compared to their partners, men more highly prioritized genetic parentage, both passing on their own genes (p < 0.001) and passing on their partner's genes (p = 0.025); (4) having a child in the next year or two was a higher priority for women than it was for their partners (p < 0.001); (5) while both men and women tended to highly prioritize maintaining a close relationship with their partner, within more couples men ranked this factor higher than women did (p = 0.034); and (6) compared to their partners, men more highly prioritized avoiding side effects from treatment (p < 0.001).
Discussion
In the United States, securing a consultation with an RS requires some commitment and perseverance. Referral to an RS generally occurs after 12 months of unsuccessfully trying to conceive (or six months when a woman is 35 years of age or older) [27]; then couples often find that they must wait weeks or months for an opening in a specialist's schedule. Those who persist often incur high out-of-pocket costs for the consultation [28], especially for the more than half of US adults who live in states (including Wisconsin, where this study was conducted) without an insurance mandate requiring any coverage for infertility diagnosis or treatment. Given all of this, it seems reasonable for an RS to presume that the patients and partners who make it to their clinic have made family building a high priority and therefore will prefer whatever course of action is most likely to lead to having a child. The findings reported in our study cast doubt on this presumption.
In this study, using a tool to assess family-building priorities in the context of infertility, we found that in the relatively early stages of exploring options to address infertility, that is, after scheduling but before attending an initial consultation with an RS, not all respondents ranked achieving parenthood one way or another among their highest priorities, and women tended to prioritize this factor more than men did. Furthermore, partners often held different ideas about the preferred timing of adding a child to their family, with women more often prioritizing having a child within the next year or two. We anticipated that cost might emerge as a key priority for patients and their partners because infertility treatments can be expensive and because, as noted above, Wisconsin does not mandate that health insurers cover medical care for infertility. Yet cost did not emerge as a top priority for most participants. However, a very clear message from the data is the emphasis placed on relationships: the majority of women and men prioritized the quality of their relationship, within couples more than a quarter of partners ranked it identically, and very few ranked their relationship among their lowest priorities, consistent with previous qualitative work [19].
If patient-centered care is a goal, RSs should be aware that a patient's presence in their clinic does not necessarily imply that that patient (and/or their partner) is singularly focused on achieving parenthood. The results of a discrete choice experiment in Dutch and Belgian fertility clinics suggested that patients were willing to trade off a higher pregnancy rate for more patient-centered care from physicians [29]. Scheduling and attending a consultation with an RS is just one of many family-building decisions patients and their partners will make if they proceed through fertility treatments, and in those subsequent decisions patients and their partners must balance their parenthood goals with other simultaneous and sometimes competing priorities, as also demonstrated in previous qualitative work [18]. The RS's treatment recommendations must take into account patient and partner values and priorities along with their health history and test results. Our results highlight the need for RSs to be aware of the potential disconnect between patients and their partners on the importance of achieving parenthood, and of the mutual importance placed on maintaining a close and satisfying relationship.
Recognizing that the family-building priorities of patients and their partners commonly differ, RSs should encourage involvement of both partners in any treatment-related decisions. A concrete way to do this may be to recommend that both partners together attend not just the consultation, but also any follow-up appointments to review results and create treatment plans. As couples seek advice on treatment plans, it may be appropriate for the RS to raise directly the possibility of discrepant priorities and to explore with the couple how each partner's priorities will or will not be served by the alternatives available. The RS might also recommend resources, such as counseling, when the two members of a couple struggle to reconcile disparate priorities.

This research was conducted at a single, suburban academic medical center in a convenience sample of new patients. It is possible that the priorities of participants and non-participants may differ, and the use of a convenience sample renders findings potentially subject to selection bias. While we instructed participants to complete the questionnaire separately from their partner, we cannot be certain that some did not discuss the questionnaire with their partner while completing it. In addition, the sample size is relatively small, limiting our ability to differentiate priorities by potentially relevant demographic and medical characteristics, such as infertility diagnosis or household income. Additional research is needed to place these findings in the context of other means of assessing the role played by cost in patients' infertility treatment decisions. A better understanding of the complex associations among financial resources, infertility treatment, and ultimately outcomes will illuminate the paths most likely to increase access to, and satisfaction with, care for all patients. Finally, future research should investigate the association between family-building priorities and various outcomes, including the likelihood of achieving parenthood and long-term decisional satisfaction and regret.
Conclusions
Understanding the extent to which both members of a couple typically do or do not share common priorities has important implications for providers who support patients in assessing the pros and cons of available family-building paths. RSs may consider utilizing the Family-Building Priorities Tool in the clinic to engage patients and their partners in a discussion about trade-offs and how different family-building paths align with patients' and couples' priorities. One fundamental consideration is that while medical procedures, including those for infertility, may involve just one patient, family building is typically a partnered activity, and the discussions and decisions that shape it should involve both prospective parents.
Abbreviations ART: Assisted reproductive technology; RS: Reproductive specialist; US: United States | 2017-08-08T19:48:08.217Z | 2017-04-05T00:00:00.000 | {
"year": 2017,
"sha1": "d62b4ba46943ca0797564fc141d40ecee6ee0e8b",
"oa_license": "CCBY",
"oa_url": "https://reproductive-health-journal.biomedcentral.com/track/pdf/10.1186/s12978-017-0311-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d62b4ba46943ca0797564fc141d40ecee6ee0e8b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255319223 | pes2o/s2orc | v3-fos-license | Potato Yield and Yield Components as Affected by Positive Selection During Several Generations of Seed Multiplication in Southwestern Uganda
Potato (Solanum tuberosum L.) is an important crop in Uganda but production is low. There is no well-functioning official seed system and farmers use potato tubers from a previous harvest as seed. This study investigated how effectively the seed technology positive selection enhanced yield and underlying crop characteristics across multiple seasons, compared to the farmers’ selection method. Positive selection is selecting healthy plants during crop growth for harvesting seed potato tubers to be planted in the next season. Farmers’ selection involves selection of seed tubers from the bulk of the ware potato harvest. Positive selection was compared to farmers’ seed selection for up to three seasons in three field trials in different locations in southwestern Uganda, using seed lots from different origins. Across all experiments, seasons and seed lots, yields were higher under positive selection than under farmers’ selection. The average yield increase resulting from positive selection was 12%, but yield increases were variable, ranging from −5.7% to +36.9%, and in the individual experiments often not significant. These yield increases were due to higher yields per plant, and mostly higher weights per tuber, whereas the numbers of tubers per plant were not significantly different. Experimentation and yield assessment were hampered by a varying number of plants that could not be harvested because plants had to be rogued from the experimental plots because of bacterial wilt (more frequent under farmers’ selection than under positive selection), plants disappeared from the experimental field, and sometimes plants did not emerge. Nevertheless, adoption of positive selection should be encouraged due to the higher production and lower virus infection of seed tubers from positively selected plants, resulting in a lower degeneration rate of potato seed tubers.
Introduction
Potato (Solanum tuberosum L.) is one of the main staple crops for food and nutrition security in Uganda (Whitney et al. 2017), where it also serves as a cash crop for smallholder farmers (Gildemacher et al. 2009; Olanya et al. 2012). While Uganda has a large potato production area, average yields, at 4.2 Mg ha⁻¹, are lower than in other East African countries (FAO 2019) and far below the attainable yield of 25 Mg ha⁻¹ (International Potato Center 2011). One of the most important yield-defining factors in potato production is the quality of the seed tubers planted (Struik and Wiersema 1999; Haverkort and Struik 2015). Smallholder farmers in Uganda generally plant tubers from an informal source, like their own harvest, the market or a neighbour (Gildemacher et al. 2009). Tubers for seed are mostly taken from the bulk of the ware potato harvest and selected based on size and visual inspection. This method is further referred to as "farmers' selection" or FS. These successively cycled seed tubers are often highly degenerated due to accumulation of tuber-borne pests and diseases (especially viruses and bacteria), resulting in poor yield and poor quality of the harvest (Turkensteen 1987; Salazar 1996; Struik and Wiersema 1999; Thomas-Sharma et al. 2016).
Due to the lack of a well-functioning formal seed system for purchasing high-quality and healthy seed tubers, Ugandan farmers have the following options to overcome poor seed quality. Farmers can buy quality-declared seed tubers from the Ugandan National Seed Potato Association (UNSPPA) (International Potato Center 2011). However, the availability of these tubers often does not meet the high demand (CTA 2014; Kakuhenzire et al. 2015). Moreover, many smallholder farmers cannot afford to buy these tubers. An option for improving seed tuber quality is the technique of positive selection whereby the most healthy-looking plants in a ware potato field are identified and pegged during flowering and checked for health thereafter. The tubers harvested from these most healthy-looking plants serve as seed tubers in the following growing season. With this technique the most healthy tubers are selected and a decrease in seedborne pests and diseases can be realized (Gildemacher et al. 2011;Okeyo et al. 2018). Another option is using the seed plot technique, 1 which seems appropriate for farmers who have a surplus of land to reserve it for improving their seed potatoes (Kakuhenzire et al. 2005;Kinyua et al. 2015). In a previous paper on the experiments described in the present paper, we confirmed that positive selection, when applied during multiple seasons (and thus for several generations of seed multiplication), could reduce the virus level in the seed potatoes compared to when applying farmers' selection (Priegnitz et al. 2019b).
Positive selection in ware crops during one cropping cycle results in an overall yield increase of 28% (Gildemacher et al. 2011), 30% (Schulte-Geldermann et al. 2012) or 37% (Siddique et al. 2017) compared to farmers' selection. The present study is related to the published paper by Priegnitz et al. (2019b), where in the same experiments the virus incidences were compared in seed lots from different origins and resulting from different seed selection treatments across multiple seasons. The present study focuses on the yield and yield components of those experiments and can be considered the second part of the results of those experiments.
The objective of this study was to assess whether positive selection during multiple seasons leads to an improvement in yield compared with farmers' selection and which yield components underlie this yield increase. Different sources of seed potatoes, potato cultivars and locations were included in the study, which was carried out in Kabale district in southwestern Uganda, the most important potato cropping region of the country (Bonabana-Wabbi et al. 2013).
Experimental Design and Starting Material
In Kabale district, the main potato production region of Uganda, three field experiments were conducted at three locations across four production seasons. The experiments had a split-plot design with three replicated blocks and with the seed potato lot as main factor and seed selection treatment as sub-factor. Relevant information on all experiments and locations is presented in Table 1; for more information please refer to Priegnitz et al. (2019b, Table 1). Key aspects in all experiments were that four seed selection treatments were applied: (1) positive selection (PS) in all seasons (referred to as PS-PS-PS), (2) alternating seed selection in the seasons starting with positive selection in the 1st season and followed by farmers' selection (referred to as PS-FS-PS), (3) alternating seed selection in the seasons starting with farmers' selection in the 1st season (referred to as FS-PS-FS) and (4) farmers' selection (FS) in all seasons (referred to as FS-FS-FS). In the 4th season, plants were selected according to treatment, but without replanting the produced tubers in the next season. Consequently, the selection treatment carried out in the 4th season is not reflected in the experimental code because this treatment did not influence yield and underlying components of the crop in which it was carried out.
Per experimental plot, 60 tubers were planted in 6 rows at a spacing of 70 cm between rows and 30 cm within rows; for the net plot the border plants in the outer rows were excluded, so 40 plants were used for assessment. In the PS treatments, 15 healthy-looking plants per plot were selected during crop growth and harvested separately; this accounts for plants from 37.5% of all seed tubers planted in the net plot. Seed tuber selection was done by farmers by selecting medium-sized seed tubers from these PS plants after storage at planting time. Under the local conditions, the harvest of 15 plants was needed to achieve enough medium-sized seed tubers for the next planting season. In the FS plots, plants were not selected during crop growth, but medium-sized seed tubers were selected by farmers at planting time from the stored tuber bulk of the former harvest. In a few cases in the 3rd and 4th season, there were not enough medium-sized seed potatoes and smaller-sized seed potatoes had to be planted in some plots (Table 1). For each of the four selection treatments, tubers from the replicated plots were combined before selecting the seed tubers. The haulm was manually removed between 96 and 111 days after planting (DAP) and tubers were harvested between 111 and 118 DAP (Table 1). In the net plots receiving FS in a given season, tubers were harvested from all plants to determine tuber yield; in the plots receiving PS, tubers from the non-selected and the selected plants were harvested separately but the yields of the two fractions were summed to derive the yield per plot. During storage, the individual replicates of one treatment were combined and stored separately from the tubers of the other treatments.
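The plot dimensions above imply some simple arithmetic; a minimal Python sketch (the variable names and the derived plot area are mine, assuming the stated 6 rows of 10 tubers at 0.70 m × 0.30 m spacing):

```python
# Plot layout arithmetic for one experimental plot, as described above.
rows, tubers_per_row = 6, 10              # 60 seed tubers per plot
row_spacing, plant_spacing = 0.70, 0.30   # spacing in metres

plot_area = rows * row_spacing * tubers_per_row * plant_spacing  # m^2
net_plot_plants = 40                      # border plants excluded
selected_plants = 15                      # PS plants harvested for seed

print(f"Planted plot area: {plot_area:.1f} m^2")                     # 12.6 m^2
print(f"Selected fraction: {selected_plants / net_plot_plants:.1%}") # 37.5%
```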
Tubers were stored on wooden shelves either in a dark wooden shed (Experiment 2) or in a diffused light store (Experiments 1 and 3); the layer of tubers was sprinkled with the insecticide Malathion 57% a.i. and covered with kikuyu grass (Pennisetum clandestinum) and couch grass (Digitaria abyssinica). Storage duration of the tubers until planting was between 67 and 75 days (Table 1).
Weather Data
Daily weather data can be found in Priegnitz et al. (2019b). For Experiment 2, weather data were derived from the internet platform awhere (awhere.com); for Experiment 1, rain data were manually recorded at the KAZARDI station in Karengyere. No reliable weather data were available for Experiment 3.
Plant Numbers
The number of emerged plants in the net plot was recorded 35-36 DAP in the 1st season (2013-LRS), and during leaf sampling (24-30 DAP) in the 2nd, 3rd and 4th seasons (2014-SRS, 2014-LRS, 2015-SRS; Priegnitz et al. 2019b). The purpose of leaf sampling was to check for virus infection of the seed tubers. Details on infection by individual viruses in those seed tubers can be found in Priegnitz et al. (2019b). Plant establishment (especially to assess if no unaccounted loss appeared) was checked during PS pegging time (63-73 DAP). Plots were also inspected for bacterial wilt (Ralstonia solanacearum) every 10 days, and when infected, plants (including their tubers) were removed and their number was counted (rogued plants). At harvest, the numbers of harvested plants were recorded. In some cases, the number of plants at harvest was lower than the number of emerged plants minus the rogued plants, which might be attributed to thefts from the field. We defined these missing plants as "unaccounted loss". In Tables 2, 3 and 4 the numbers of plants emerged, rogued, lost and harvested are presented as the actual plant number and as a percentage of the original number of seed tubers (planting positions) planted. In Table 5 and Fig. 1, numbers of plants are presented as percentages.
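The plant bookkeeping described above can be summarized in a short sketch (the example counts are hypothetical; only the accounting rule comes from the text):

```python
def unaccounted_loss(emerged, rogued, harvested):
    """Plants that went missing for unknown reasons (e.g. theft):
    the gap between emerged plants and those rogued or harvested."""
    return emerged - rogued - harvested

planted = 40                              # seed tubers in the net plot
emerged, rogued, harvested = 38, 3, 33    # hypothetical example counts
lost = unaccounted_loss(emerged, rogued, harvested)

for label, n in [("emerged", emerged), ("rogued", rogued),
                 ("unaccounted loss", lost), ("harvested", harvested)]:
    print(f"{label}: {n} plants = {100 * n / planted:.1f}% of planted")
```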
Number of Tubers, Tuber Yield, Number of Stems and Ground Cover
To establish the number and yield of tubers resulting from the selection treatments in the previous year(s), data of all plants per net plot (including selected and non-selected plants in PS plots) were considered. At harvest, each plant was harvested separately and for each plant the total number of tubers was recorded. The average number of tubers per harvested plant in the net plot was derived as the total number of tubers of the individually harvested plants divided by the number of harvested plants.
The harvested number of tubers per square metre was derived from the total number of tubers harvested per plot divided by the plot area. The weight per individual tuber was the total tuber fresh weight in the net plot divided by the total number of harvested tubers in the plot. The average yield per plant was calculated by dividing the total tuber fresh yield per plot by the number of harvested plants in the plot. The total tuber fresh yield was the total tuber fresh yield per plot recalculated into Mg ha−1 from the planted area of the plot. Tubers of each plot were graded into three size categories: large (> 60 mm), medium (30-60 mm) and small (5-30 mm), and the weight in each category was assessed. Canopy development in all plots was measured as ground cover (%) every 10 days, estimated by using a wooden frame of 0.70 m × 0.90 m divided into 100 equal units (which equals 100%); if one unit was filled more than half with green foliage it was counted as one percent. The values presented represent the maximum ground cover (Supplementary Material Tables 2A-4A). Main stems which emerged directly from the seed tuber were counted to assess the number of stems per plant.

[Table 4 caption: Effects of selection treatments, seasons and seed lots on agronomical characteristics in location Hawurma, Experiment 3 (selection treatment in italics refers to the seed planted in the specific season). Footnotes: "-" indicates that a P value could not be obtained because of 0% values (no infection took place, so no analysis was done); different letters indicate significant differences between means according to the LSD test at 5%; ns = not significant; italics were used for P values of significant F ratios from the ANOVA.]
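A compact sketch of how the yield components above are derived from per-plot records (the input values are hypothetical; the formulas follow the definitions in the text, with kg m−2 converted to Mg ha−1 by a factor of 10):

```python
# Hypothetical per-plot records; formulas follow the definitions above.
plot = {
    "area_m2": 12.6,           # planted area of the plot
    "plants_harvested": 33,
    "tubers_harvested": 182,   # summed over individually harvested plants
    "fresh_weight_kg": 14.8,   # total tuber fresh weight of the net plot
}

tubers_per_plant = plot["tubers_harvested"] / plot["plants_harvested"]
tubers_per_m2 = plot["tubers_harvested"] / plot["area_m2"]
weight_per_tuber = plot["fresh_weight_kg"] / plot["tubers_harvested"]  # kg
yield_per_plant = plot["fresh_weight_kg"] / plot["plants_harvested"]   # kg
yield_mg_ha = plot["fresh_weight_kg"] / plot["area_m2"] * 10  # kg/m2 -> Mg/ha

print(f"{tubers_per_plant:.1f} tubers/plant, {tubers_per_m2:.1f} tubers/m2")
print(f"{1000 * weight_per_tuber:.0f} g/tuber, {yield_per_plant:.2f} kg/plant")
print(f"{yield_mg_ha:.1f} Mg/ha")
```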
Differences Between Selected and Non-selected Plants
Additionally, in all plots receiving positive selection in a given season (PS plots), the selected and the non-selected plants were harvested and recorded separately, so that tuber numbers and yields per plant of the two groups of plants could be compared (see "Data Analysis").
Data Analysis
Data were analysed using GenStat for Windows 18th Edition (VSN International 2016). General Analysis of Variance was used to test the effects of the factors selection treatment, seed lot and season and their interactions on the variables. The 1st season was not included in this ANOVA, because the seed planted in that season had not yet been subjected to different experimental selection treatments. Results of this 1st season are merely shown for comparison purposes. Where the P value in the ANOVA indicated significant effects or interactions (P < 0.05), significances of differences between means were assessed by the Fisher's LSD test at α = 0.05. Data related to proportions (numbers of plants emerged, rogued, unaccounted loss and harvested and ground cover) were transformed before analysis. They were recalculated to proportions and angular transformations were applied (Fernandez 1992). Proportions equal to 0 or 1 were replaced by (1/4n) and [1 − (1/4n)] respectively, where n represents the total number of sampled plants or tubers per net plot (Fernandez 1992). To assess differences in tuber number per plant and yield per plant between positive selected plants and non-selected plants in the same plots, boxplots were generated using data from the PS plots in all four seasons. The number of tubers and yield per plant of the positive selected plants and non-selected plants were compared and tested for significance with a paired t test.
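For readers who want to reproduce the transformation of proportion data, the angular (arcsine square-root) transformation with the 1/(4n) adjustment described above can be sketched as follows (a minimal Python version; the function name is mine):

```python
import numpy as np

def angular_transform(count, n):
    """Angular transformation of a proportion (after Fernandez 1992).
    count: number of plants/tubers in the category per net plot
    n: total number of sampled plants or tubers per net plot"""
    p = count / n
    if p == 0.0:               # proportions of 0 are replaced by 1/(4n)
        p = 1.0 / (4 * n)
    elif p == 1.0:             # proportions of 1 by 1 - 1/(4n)
        p = 1.0 - 1.0 / (4 * n)
    return np.degrees(np.arcsin(np.sqrt(p)))   # angle in degrees

print(angular_transform(0, 40), angular_transform(33, 40))
```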
Results
For yield and its underlying components, the full outcome of the ANOVAs and the means for the individual treatments in the three experiments are shown in Tables 2, 3 and 4; the supplementary data on yields per tuber size class, maximum ground cover and stem number per plant are shown in the supplementary material (Supplementary Tables 2A-4A). Significant three-way interactions (selection treatment × seed lot × season) were only found in Experiment 2 and only for the variates number of rogued plants, number of harvested plants (Table 3) and yield of large tubers (Supplementary Material Table 3A). There were some two-way interactions between selection treatment and seed lot and between selection treatment and season, whereas two-way interactions between seed lot and season were most often found (Tables 2, 3 and 4, Supplementary Tables 2A-4A). Table 5 presents the main effects of selection treatment and the interacting effects of selection treatment and seed lot, and Fig. 1 presents the main effects of season and the interacting effects of season and seed lot.
Fresh Tuber Yield per Hectare
In Experiment 1, the selection treatment × seed lot interaction was significant. In cv. Victoria, the fresh tuber yield per hectare was not significantly affected by the selection treatment, whereas in cv. Katchpot 1 the yield in the PS-FS-PS treatment was lower than the yield in the other treatments, which did not differ significantly from each other (Table 5). In Experiment 2, a significant main effect of selection treatment indicated that a lower yield was obtained in the FS-FS-FS treatment than in the other treatments, which did not differ significantly from each other (Tables 3 and 5). In Experiment 3, the average yield across the seed lots was highest in the PS-PS-PS treatment, but not significantly different from the other treatments (Tables 4 and 5).
Yield per Plant
In Experiment 1, the selection treatment × seed lot interaction was significant. In cv. Victoria, yield per plant was higher in the PS-FS-PS than in the FS-FS-FS treatment, with the other treatments not differing significantly from these extremes, whereas in cv. Katchpot 1, the highest yield per plant was found in the PS-PS-PS treatment, but this yield only differed significantly from the yield per plant in the PS-FS-PS treatment (Table 5). In Experiments 2 and 3, yield per plant was not significantly affected by the selection treatment (Tables 3, 4 and 5); the average yield per plant across the seed lots was highest in the PS-PS-PS treatment (Experiment 3) and lowest in the FS-FS-FS treatment (Experiment 2), but the differences were not significant (Table 5).
Weight per Tuber
In Experiment 1, the selection treatment had no influence on weight per tuber (Table 5). In Experiment 2, there was a significant interaction between selection treatment and season (Table 3, Supplementary Table 3B); differences between selection treatments were not consistent across seasons. In the 2nd season, the FS-PS-FS treatment had a higher weight per tuber than the other treatments. In the 3rd season, differences between the selection treatments were not significant. In the 4th season, PS-PS-PS and PS-FS-PS had a higher weight per tuber than FS-PS-FS and FS-FS-FS. In Experiment 3, the selection treatment had a significant effect on weight per tuber, with the weight per tuber being lower in the PS-FS-PS treatment than in the PS-PS-PS and FS-PS-FS treatments and FS-FS-FS not differing significantly from the other treatments.

[Fig. 1 caption: Effects of season on tuber yield and yield components of different seed lots in the three experiments; average values across four selection treatments. Different letters indicate significant differences according to Fisher's protected LSD test (α = 0.05). Capital letters reflect a significant main effect of season; lower case letters reflect a significant season × seed lot interaction. Season 1 data are not part of the statistical analysis because the seeds planted had not yet been subjected to different selection treatments.]
Tuber Number per Square Metre
In Experiment 1, there was a significant selection treatment × seed lot interaction for number of tubers per square metre. This was mainly caused by the PS-FS-PS treatment producing a relatively high number of tubers per square metre in the seed lot from cv. Victoria and a relatively low number of tubers in the seed lot from cv. Katchpot 1, whereas the other selection treatments did not differ from each other (Tables 2 and 5). In Experiment 2, the average number of tubers per square metre across the seed lots was lower in the FS-FS-FS than in the PS-PS-PS treatment (Table 5). Significant interaction of selection treatment × season indicated a higher number of tubers per square metre in the PS-PS-PS and PS-FS-PS treatments than in the FS-PS-FS and FS-FS-FS treatments in the 2nd season (2014-SRS), whereas differences were not significant in the 3rd season and not found in the 4th season (Table 3, Supplementary Table 3B). In Experiment 3, the number of tubers per square metre was not affected by the selection treatment (Tables 4 and 5).
Tuber Number per Plant
In Experiments 1 and 3, the number of tubers per plant was not significantly affected by the selection treatment. In Experiment 2, a significant selection treatment × season interaction indicated more tubers per plant in the 2nd season in the PS-FS-PS treatment than in the other treatments, and no differences between selection treatments in the 3rd season and 4th season (Table 3, Supplementary Material Table 3B).
Plant Numbers
In Experiment 1, there was a significant main effect of the selection treatment on the number of emerged plants (Table 2). Poor emergence was observed in the PS-FS-PS treatment, mainly in cv. Katchpot 1. No bacterial wilt occurred in this experiment; therefore, there was no plant loss due to bacterial wilt (Tables 2 and 5). No significant effects of the selection treatment could be assessed on unaccounted loss and the number of harvested plants (Tables 2 and 5), which was partly influenced by the large variation among individual plots. In some blocks, missing plants tended to occur more frequently in the PS-PS-PS plots, leading also to relatively low numbers of plants harvested in some plots.
In Experiment 2, there were significant main effects of selection treatment on the numbers of emerged, rogued and harvested plants (Tables 3 and 5). Across the seed lots, plant emergence was higher in the PS-PS-PS and PS-FS-PS treatments than in the FS-PS-FS and FS-FS-FS treatments. Bacterial wilt occurred across the seed lots, with a lower incidence in the PS-PS-PS and PS-FS-PS treatments (Tables 3 and 5). The selection treatment had no influence on the unaccounted loss, which was smaller in this experiment than in Experiment 1. Consequently, more plants were harvested in the PS-PS-PS and PS-FS-PS treatments than in the FS-PS-FS and FS-FS-FS treatments (Tables 3 and 5).
In Experiment 3, emergence in general was high and the selection treatment had no clear effect on the emergence of plants (Tables 4 and 5): a significant interaction between selection treatment and season was found but showed no meaningful differences between selection treatments in the different seasons (Supplementary Material Table 4B). There was a significant interaction between selection treatment and seed lot on the number of rogued and harvested plants; bacterial wilt was higher in the PS-FS-PS treatment of cv. Victoria from the market than in all other selection treatments within the two seed lots of cv. Victoria. In cv. Rwangume, the lowest incidence was found in the PS-PS-PS treatment, but effects of the selection treatment on the number of rogued plants could not be assessed as significant (Tables 4 and 5). The selection treatment had no significant effect on the unaccounted loss, but the unaccounted loss tended to be most frequent in the PS-PS-PS plots for cv. Victoria from UNSPPA and cv. Rwangume (Tables 4 and 5). A low number of plants harvested appeared in cv. Victoria from the market in the PS-FS-PS treatment (Table 5), where in the 4th season only 54% of the planted seed potatoes could be harvested (Table 4).
Fresh Tuber Yield per Hectare
In Experiment 1, a significant selection treatment × seed lot interaction (Table 2) showed that for most selection treatments, there was no significant difference in tuber yield per hectare between the seed lots, but that in the PS-FS-PS treatment, the yield of cv. Katchpot 1 was lower than that of cv. Victoria (Table 5). In Experiment 2, a significant seed lot × season interaction showed that tuber yield per hectare was lower for cv. Katchpot 1 than for cv. Victoria in the 2nd and 4th seasons, whereas no significant differences in yield per hectare between seed lots were found in the 3rd season (Fig. 1). In Experiment 3, the yield per hectare was not significantly different between the three seed lots (Tables 4 and 5).
Yield per Plant
In Experiment 1, a significant selection treatment × seed lot interaction showed a lower yield in cv. Victoria than in cv. Katchpot 1 in the FS-FS-FS and PS-PS-PS treatments, whereas the yield per plant did not differ significantly between seed lots in the other selection treatments (Table 5). In Experiment 2, seed lots did not differ in yield per plant (Table 3). In Experiment 3, the significant seed lot × season interaction revealed that the seed lots did not differ in yield in the 2nd season and 3rd season, but that the yield of cv. Rwangume was higher than the yield of the two cv. Victoria seed lots in the 4th season (Fig. 1).
Weight per Tuber
In Experiment 1, the seed lot had no influence on the weight per tuber (Table 2). In Experiment 2, a significant interaction of seed lot and season showed a higher weight per tuber in cv. Victoria than in cv. Katchpot 1 in the 4th season, whereas there were no significant differences between seed lots in the 2nd and 3rd seasons (Fig. 1). In Experiment 3, the weight per tuber in cv. Rwangume was significantly smaller than in both seed lots of cv. Victoria in the 2nd and 3rd seasons, whereas in the 4th season, the differences between seed lots in weight per tuber were small and cv. Rwangume still had smaller tubers than cv. Victoria from UNSPPA, but cv. Victoria from the market did not differ significantly from any of the other seed lots (Fig. 1).
Number of Tubers per Square Metre
In Experiment 1, significant seed lot × selection treatment and seed lot × season interactions (Table 2) showed that the difference between seed lots in number of tubers per square metre depended on season and selection treatment. Cultivar Victoria produced more tubers per square metre than cv. Katchpot 1 in the PS-FS-PS treatment, whereas no differences between seed lots were found in the other selection treatments (Table 5); cv. Victoria also produced more tubers per square metre than cv. Katchpot 1 in the 2nd season, fewer tubers than cv. Katchpot 1 in the 4th season, and a comparable number of tubers per square metre in the 3rd season (Fig. 1). In Experiment 2, the significant seed lot × season interaction showed that tuber numbers per square metre in the two seed lots differed only in the 2nd season, with more tubers in cv. Victoria than in Katchpot 1 (Fig. 1). In Experiment 3, the number of tubers in crops from the seed lot of cv. Rwangume was significantly higher than in crops from cv. Victoria (Table 4).
Number of Tubers per Plant
In Experiment 1, the significant seed lot × season interactions showed that more tubers per plant were produced in cv. Victoria than in cv. Katchpot 1 in the 2nd season, while cv. Katchpot 1 produced more tubers than cv. Victoria in the 4th season; no significant differences between seed lots were found in the 3rd season (Fig. 1). Similar trends were visible in Experiment 2, but differences between seed lots were not significant in any of the seasons (Fig. 1). In Experiment 3, the number of tubers per plant was higher for cv. Rwangume than for the two seed lots of cv. Victoria in all seasons (Tables 4 and 5).
Plant Numbers
In Experiment 1, the significant seed lot × season interaction showed that the emergence of plants was higher for cv. Victoria than for cv. Katchpot 1 in all seasons, but that the difference was most prominent in the 2nd season (Fig. 1). A significant seed lot × season interaction showed that the unaccounted loss was higher in cv. Victoria than in cv. Katchpot 1 in the 3rd season, whereas the unaccounted loss was rather small in both cultivars in the 2nd and 4th season (Fig. 1). A significantly higher number of plants were harvested in cv. Victoria than in cv. Katchpot 1 (Fig. 1) in the 2nd and 4th season, but not in the 3rd season.
In Experiment 2, plant emergence was higher for cv. Victoria than for cv. Katchpot 1 (Table 2, Fig. 1). Bacterial wilt occurred more in cv. Victoria than in cv. Katchpot 1 (Tables 3 and 5, Fig. 1). A significant seed lot × season interaction showed that the unaccounted loss was higher in cv. Victoria than in cv. Katchpot 1 in the 3rd season, whereas in the other seasons there was no difference between seed lots (Table 3). The significant three-way interaction (Table 3) showed that the number of harvested plants was still higher in cv. Victoria than in cv. Katchpot 1 for all selection treatments in the 2nd season, and half of the selection treatments (PS-PS-PS and FS-PS-FS) in the 4th season, whereas there were no significant differences between seed lots in harvested plants in the 3rd season and the remaining selection treatments (FS-FS-FS and PS-FS-PS) in the 4th season (Supplementary Material Table 3B).
In Experiment 3, seed lot had no effects on the emergence of plants or on the unaccounted loss, but plant losses due to bacterial wilt were higher in the cv. Victoria seed lots than in cv. Rwangume (Tables 4 and 5). Number of plants harvested did not differ among seed lots except in the PS-FS-PS treatment, where the number of harvested plants was lower in cv. Victoria from the market than in the other seed lots (Table 5).
First Season Results
Results of the 1st season are included in Fig. 1 to show variation across seasons but are not included in the statistical analysis because the seed planted in the first season had not yet been subjected to the experimental selection treatments. In the 1st season, fresh tuber yield, yield per plant and weight per tuber were among the highest found in the four experimental seasons, in all experiments (Fig. 1). In Experiment 1, cv. Victoria yielded almost 40 Mg ha−1, two to four times what was found in later seasons. For cv. Katchpot 1 in this experiment, yield in the 1st season was similar to yields in the 3rd and 4th seasons. In Experiment 2, cv. Victoria yielded 30 Mg ha−1 in the 1st season, while the yield of cv. Katchpot 1 was only slightly higher than in the following seasons. In Experiment 3, yields of 25-30 Mg ha−1 in the 1st season were also higher than in later seasons, but only slightly above those in the 3rd season, especially for cv. Rwangume. The data for number of tubers per square metre and number of tubers per plant in the 1st season were of a similar magnitude as the data in the later seasons, except for cv. Rwangume in Experiment 3, which peaked in number of tubers in the 2nd season.
In the 1st season, plant emergence and number of harvested plants were similar to those in the later seasons for seed lots of cv. Victoria in all experiments and of cv. Rwangume in Experiment 3. Emergence rate for cv. Katchpot 1 in Experiments 1 and 2 was comparably high in the 1st season and 3rd season (both LRSs) and higher than in the 2nd and 4th seasons (both SRSs). However, the harvested plant number for this cultivar was lower in the 1st season than in the 3rd season in both Experiments 1 and 2 because the unaccounted loss was high (20-23%) in the 1st season for cv. Katchpot 1. In the 1st season, there were no rogued plants due to bacterial wilt in Experiments 1 and 2 and no to very few rogued plants in Experiment 3.
2nd to 4th Season Results
Fresh Tuber Yield per Hectare

In Experiment 1, a lower yield for both seed lots was produced in the 2nd season than in the 3rd and 4th seasons (Fig. 1). In Experiment 2, the significant season × seed lot interaction indicated that the yield of cv. Victoria did not differ significantly between the 2nd, 3rd and 4th seasons, while the yield of cv. Katchpot 1 was lowest in the 2nd season and highest in the 3rd season, with the 4th season not differing significantly from the 2nd and 3rd seasons (Fig. 1). In Experiment 3, the lowest yields were produced in the 2nd season, while an increase was achieved in the 3rd season and a decrease obtained in the 4th season, for all seed lots (Fig. 1).
Yield per Plant
In Experiment 1, yield per plant was lower in the 2nd season (Fig. 1) than in the 3rd and 4th seasons for both seed lots. In Experiment 2, season had no effect on yield per plant. In Experiment 3, the significant seed lot × season interaction indicated that in cv. Rwangume the yield per plant was lower in the 2nd season than in the other seasons (Fig. 1), whereas in the two seed lots of cv. Victoria yields per plant were higher in the 3rd season than in the 2nd and 4th seasons (Fig. 1).
Weight per Tuber
There was a seasonal effect on weight per tuber in Experiment 1 with a lower weight per tuber in the 2nd season than in the later seasons (Fig. 1). In Experiment 2, the significant seed lot × season interaction showed that the individual tuber weights were lower in the 2nd season than in the 3rd season for both seed lots, whereas in the 4th season the weight per tuber was higher than in the 3rd season in cv. Victoria, and comparable to the weight per tuber in the 2nd season in cv. Katchpot 1 (Fig. 1). A significant selection treatment × season interaction (Supplementary Material Table 3B) showed a significantly higher weight per tuber in the 3rd and 4th season than in the 2nd season in all selection treatments except FS-PS-FS, where the weight per tuber was relatively high in the 2nd season and did not differ significantly from that in later seasons; weights per tuber did not differ significantly between the 3rd and 4th seasons (Supplementary Material Table 3B). In Experiment 3, a significant season × seed lot interaction showed that the weights per tuber were lowest in the 2nd season, especially for cv. Rwangume, and highest in the 3rd season, particularly for both seed lots in cv. Victoria (Fig. 1), with intermediate values in the 4th season for the cv. Victoria seed lots. In cv. Rwangume, weights per tuber did not differ significantly between the 3rd and 4th seasons.
Number of Tubers per Square Metre
In Experiment 1, the significant seed lot × season interaction (Table 2, Fig. 1) showed that in cv. Victoria, the number of tubers per square metre did not differ significantly across seasons whereas in cv. Katchpot 1 the number of tubers per square metre was lower in the 2nd season than in later seasons (Table 2, Fig. 1). In Experiment 2, the significant seed lot × season interaction showed a higher number of tubers per square metre for cv. Victoria in the 2nd season than in later seasons, whereas in cv. Katchpot 1 the number of tubers per square metre did not differ significantly in the different seasons (Table 3, Fig. 1). In Experiment 3, the significant main effect of season showed more tubers per square metre in the 2nd season (2014-SRS) than in later seasons (Fig. 1).
Number of Tubers per Plant
In Experiment 1, the significant seed lot × season interaction showed no significant differences in number of tubers per plant between the seasons for cv. Victoria, while for cv. Katchpot 1 a higher number of tubers per plant was found in the 4th season than in the 2nd season, with the 3rd season not differing significantly from the other two (Fig. 1). In Experiment 2, significant interactions for season × seed lot and season × selection treatment showed higher number of tubers in the 2nd season for cv. Victoria than in later seasons, whereas there were no differences between seasons in number of tubers per plant in cv. Katchpot 1 (Fig. 1). The number of tubers per plant did not differ between seasons within the individual selection treatments except in the PS-FS-PS treatment that had more tubers per plant in the 2nd season than in later seasons (Supplementary Material Table 3B). In Experiment 3, more tubers per plant were found in the 2nd season than in later seasons for all seed lots (Fig. 1).
Plant Numbers
In Experiment 1, the significant seed lot × season interaction showed that plant emergence was comparably high over seasons in cv. Victoria whereas a lower plant emergence was found for cv. Katchpot 1 in the 2nd season than in the 3rd and 4th seasons (Table 2, Fig. 1). Season had no effect on bacterial wilt, because it was always absent in this experiment. A significant seed lot × season interaction showed that the unaccounted loss was higher in the 3rd season for cv. Victoria than in the 2nd and 4th seasons and that there was almost no unaccounted plant loss for cv. Katchpot 1 (Table 2, Fig. 1). Consequently, in the 2nd season, a significantly smaller number of plants was harvested in cv. Katchpot 1 than in the other seasons, while for cv. Victoria the highest plant number was harvested in the 2nd season, which was similar to the plant number harvested in the 4th season.
In Experiment 2, the significant main effect of season showed higher plant emergence in the 3rd season than in the 2nd and 4th seasons (Fig. 1). The three-way interaction for rogued plants (Table 3) was due to a high seasonal incidence of bacterial wilt in cv. Victoria in the 4th season in all selection treatments, except in the PS-PS-PS treatment (Fig. 1, Supplementary Material Table 3B). The unaccounted loss was only substantial in the 3rd season in cv. Victoria (Fig. 1; Tables 3 and 5). The percentage of plants harvested for cv. Victoria did not differ significantly across seasons. For cv. Katchpot 1, the 2nd and 4th seasons showed a significantly smaller number of plants harvested than the 3rd season (Fig. 1).
In Experiment 3, emergence of plants across seed lots was higher in the 4th season than in the 2nd and 3rd seasons (Fig. 1). Incidence of bacterial wilt was also higher in the 4th season than in the 2nd and 3rd seasons (Fig. 1). The unaccounted loss was low in all seasons. The 4th season (2015-SRS) had the smallest number of plants harvested (Fig. 1).
Difference in Tuber Number and Tuber Weight per Plant Between Positive Selected Plants and Non-Selected Plants
In those experimental plots in which plants for production of PS seed were selected and other plants remained non-selected, the number of tubers per plant was significantly higher in PS-selected plants than in the non-selected plants in the same plots in all seed lots and experiments (Fig. 2). Also, tuber yield per plant was significantly higher in PS-selected plants than in the non-selected plants, with one exception in Experiment 3 for cv. Victoria from the market, where the difference was not significant (Fig. 3). In three out of the seven seed lots, a significantly higher weight of large tubers was harvested in positive selected plants than in non-selected plants when tuber yield per plant was divided into classes of large, medium and small tubers (data not shown).
Discussion
Our goal was to understand what influence positive seed selection during multiple seasons has on potato yield when compared to farmers' seed selection and which yield components underlie the differences in yield. Experiments were done under farming conditions in southwestern Uganda and were partly handled by farmers.
Earlier research on the same experiments (Priegnitz et al. 2019b) showed that virus incidence in the seed lots fluctuated across seasons, but that continuous PS was able to maintain PLRV and PVX incidence at lower levels than continuous FS. PVA and PVY were only present in the seed lots at very low levels regardless of the selection treatment. PVS and PVM were present at very high levels in most seed lots, and PS was not more effective than FS in reducing their incidence. The high presence of PVS and PVM resulted in virtually no fully virus-clean plants in the seed lots of cv. Victoria and c. 50% clean plants on average in the seed lots of cv. Katchpot 1; only in the seed lot of cv. Rwangume (Experiment 3) was a maximum of more than 90% clean plants found, in the PS treatment in the last season. The high levels of virus present may have hindered the expression of large differences in yield.

[Caption of Figs. 2 and 3: Boxplots show the range (rectangles from the 25th to the 75th percentile), mean (cross), median (line in rectangle) and minimum and maximum values in lines below and above the box; dots are outliers.]
This discussion will first focus on these differences in yield between crops under PS and FS and the yield components that underlie these differences. Thereafter potato production and productivity in the experiments under the local conditions will be discussed as well as their implications for the success of positive selection.
Effects of Seed Selection Treatments on Tuber Yield and its Underlying Components
Preamble Under the local farming conditions, yield levels were very variable and plot-to-plot variation was high. The alternating seed selection treatments PS-FS-PS and FS-PS-FS added to this variation; therefore, the discussion will mainly focus on the two most contrasting seed selection treatments, continuous PS (PS-PS-PS) and continuous FS (FS-FS-FS). Figure 4 summarizes the differences between PS and FS in yield and related characteristics for all seasons, seed lots and experiments from the data in Tables 2, 3 and 4 and Supplementary Tables 2A-4A, by plotting the data of the PS treatments against those of the respective FS treatments.
Tuber Yield and Its Components
Yield differences due to seed selection treatments were indeed more difficult to achieve and smaller than expected beforehand. Tuber yield per hectare can be regarded as a function of the tuber yield per plant and the number of plants harvested. In all experiments (Table 5), the average tuber yield per hectare was higher in PS-PS-PS treatments than in FS-FS-FS treatments, but under the experimental conditions this positive effect was only significant in Experiment 2 and not that large. Also, tuber yield per plant seemed consistently, but not significantly, higher under PS than under FS in all experiments (Table 5).
When inspecting the size of the differences between continuous PS and FS in detail for all seed lots and individual seasons (Fig. 4), the tuber yield per plant was always higher under PS than under FS (Fig. 4b). Averaged over all cases, the yield per plant under PS was 9.8% higher than that under FS. The maximum difference was +32.7%, the minimum +0.6% (Table 3). For tuber yield per hectare, Tables 2, 3, and 4 clearly show an overall yield increase by PS; on average this yield increase was 12%. This is smaller than the yield increases of around 25-30% reported by Gildemacher et al. (2011) and Schulte-Geldermann et al. (2012). This smaller increase will be partly due to the degree to which PS was able to reduce the virus status. Due to the necessity of planting guard rows in an experimental set-up, the selection pressure in the present experiments was probably lower than in other conditions (15 plants out of 40 planted tubers were selected to produce seed tubers). The maximum positive difference between crops under PS and FS was +36.9% (Table 3). However, Fig. 4a also shows that in some cases, there was no effect of PS and there were even cases where the tuber yields per hectare were lower under PS than under FS (Experiment 1, cv. Katchpot 1 in the 4th season; Experiment 3, seed lot UNSPPA/cv. Victoria in the 3rd and 4th seasons and farm-saved/cv. Rwangume in the 4th season; Tables 2 and 4). In all these cases of lower yield per hectare, the plant number harvested (Tables 2, 3, 4 and 5; Fig. 4c) was lower in the PS plots than in the FS plots by an even larger percentage. This shows that the plant number harvested was a variable of considerable importance in determining the yield per hectare in this research. In Experiment 2, plant numbers harvested under PS were higher than under FS (Table 5), but across all experimental data, there was no systematic relation between the plant number harvested under FS and PS (R² = 0.047) (Fig. 4c; Table 5). We will elaborate on plant numbers below.
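For clarity, the relative differences quoted in this section are simple percentage changes of PS over FS; a small illustration (the yield pairs are invented, not the experimental means):

```python
# Percentage yield difference of PS over FS, per case and on average.
ps = [21.4, 15.2, 9.8]    # hypothetical yields under PS-PS-PS, Mg/ha
fs = [18.9, 14.8, 10.4]   # matching hypothetical yields under FS-FS-FS

diffs = [100 * (p - f) / f for p, f in zip(ps, fs)]
print([f"{d:+.1f}%" for d in diffs])
print(f"mean: {sum(diffs) / len(diffs):+.1f}%")
```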
The higher yield per plant in PS than in FS treatments seemed to be more related to an increase in weight per tuber (Fig. 4d, Table 5), by on average 7.4%, than to differences in number of tubers per plant (Fig. 4e, Table 5). Under the experimental conditions in Uganda, the number of tubers per plant in most seed lots of cvs. Victoria and Katchpot 1 was relatively small with 5.5 tubers per plant.
Reasons for Differences in the Number of Plants Harvested
As shown above, the number of plants harvested was of considerable importance in determining the yield per hectare and there was no clear direct association between the numbers of plants harvested under PS and FS (Fig. 4c). The plant number harvested may therefore vary also for reasons that may or may not be related to the selection treatment. A lower number of harvested plants was caused either by a lower number of emerged plants, a higher plant number rogued because of bacterial wilt and/or more plant losses due to unaccounted reasons, like animal feeding or thefts. Plant emergence was generally variable and, surprisingly, not systematically higher under PS than under FS, except in Experiment 2, where planting PS seed resulted in a higher percentage emergence than planting FS seed (Fig. 4f). It is not clear to what extent the storage conditions might have affected these differences between experiments; in Experiment 2, the seed tubers were stored in darkness, in the other experiments in a diffused light store. In Experiment 2, the higher number of emerged plants under PS than under FS (Table 5), together with a lower number of plants that had to be rogued because of bacterial wilt in plots under PS, clearly contributed to the higher number of plants harvested under PS than under FS (Table 5). Also in Experiment 3, the number of plants rogued because of bacterial wilt seemed lower under PS than under FS (Table 5, Fig. 4g). The lower number of plants with bacterial wilt in plots under PS is in line with observations by Gildemacher et al. (2011). Plant losses due to bacterial wilt did not occur in Experiment 1, in Karengyere, the site at the highest altitude of the three locations.
A very important factor determining the large variation in number of plants harvested was the unaccounted loss of plants (Table 5). Due to the high variation in plant numbers, the differences between PS and FS in the number of plants lost for unaccounted reasons could not be assessed as significant, but in most cases (though definitely not in all) a higher unaccounted loss appeared in PS plots than in FS plots (Fig. 4h), which again led to a smaller number of plants harvested; the maximum unaccounted plant loss was 22.5% in the 1st season in cv. Katchpot 1 in Experiment 2. We suspect the plots under PS showed higher losses because they may have contained the most attractive plants.
Effects of Positive Selection During Multiple Seasons on Yield Levels
Most research work thus far was done on effects of PS after one season of selection (e.g. Gildemacher et al. 2011). In the present experiments, we particularly wanted to verify if seed tuber health and the yield levels from these seed tubers could be maintained or increased when continuing the selection methods during multiple seasons. The season itself obviously had a large effect on yield in our research (Fig. 1), but during seasons 2-4, when differently selected seed was compared, there were no indications that the absolute differences in yield per hectare or yield per plant between continuous PS and continuous FS increased or decreased when more rounds of selection were applied: there were no significant two-way interactions for seed selection method × season for yield per hectare or yield per plant nor significant three-way interactions (Tables 2, 3 and 4). This is consistent with the effects of PS on the virus status (Priegnitz et al. 2019b). PS seems to be able to keep the virus incidence at a slightly lower level than continuous FS and the yield at a slightly higher level. One case of regeneration was observed, but this was not (only) due to positive selection: cv. Rwangume produced the lowest yield in the 2nd season, when the seed planted had the highest incidence of viruses of all seasons. Cultivar Rwangume regenerated at the end of the experiments in becoming cleaner (Priegnitz et al. 2019b) and more productive in comparison to the other cultivars in this experiment, yet the PS treatment did not differ significantly from the FS treatment.
It seemed difficult to maintain the yield levels of quality-declared seed using PS only. In the 1st season, when the seed used had not yet been subjected to different selection treatments, a considerably higher yield (up to 39 Mg ha −1 ) was achieved by planting quality-declared seed of cv. Victoria (3G seed in Experiments 1 and 2 and 4G-UNSPPA seed in Experiment 3) than in the later seasons 2-4 ( Fig. 1) when the selection treatments that started in season 1 were continued. Although it cannot be excluded that this higher yield was due to favourable weather or more favourable physiological age of the seed tubers, this cultivar seemed to show clearly the importance of good seed quality in early generations for high productivity, like Schulte-Geldermann et al. (2013) and Demo et al. (2015) described. During the 1st season, plants became more infected by PLRV, PVX and PVA, resulting in a higher virus incidence of the seed tubers produced (Priegnitz et al. 2019b). The yield level of the 3G seed of cv. Katchpot 1 (Experiments 1 and 2) in the 1st season seemed to be sustained when compared to the 3rd and 4th seasons in Experiment 1 (Fig. 1), but the yield level assessed for this seed lot in the 1st season was reduced by a high percentage (c. 20%) of unaccounted loss of plants.
This suggests that under the present conditions in Uganda with high disease pressure and limited disease control, the seed quality and attainable yield of quality declared seed decrease already during the first multiplication, but that positive selection can keep production thereafter at a higher level than farmers' selection.
Potato Production and Productivity
Our long-term experimental yields ranged from 8.1 to 39 Mg ha−1 with an average of 18.5 Mg ha−1 and were much higher than the average yields reported for the country (4.2 Mg ha−1) and the average yields obtained by farmers in the region (9.5 Mg ha−1; Priegnitz et al. 2019a). This might have been due to relatively good crop management practices (van der Zaag 1987), including fertilization with 45 kg N ha−1, spraying against Phytophthora infestans and roguing against bacterial wilt in order to avoid a complete loss of potato plots. It is not known if these relatively good practices may also have reduced the differences between selection treatments. Earlier work under Kenyan conditions showed that increasing the fertilizer level from 45 to 90 kg N ha−1 increased the yield level, but not the absolute difference in yield from PS and FS selected seed.
Despite the relatively good management practices, yield levels obtained in our experiments were still far from maximum, as shown by the low maximum canopy cover during the seasons in which crops from PS and FS selected seeds were compared (Supplementary Tables 2A-4A). Due to a shortage of precipitation (Table 1 and farmers' observations) in the short rainy seasons (2nd and 4th seasons) the crop suffered a reduction in yield (Experiment 1 (2014-SRS) and Experiment 3 (all SRSs)). Different seasonal weather conditions seemed to exert their effect on yield especially through changing the size of the tubers. Whereas tuber yield per hectare, tuber yield per plant and average weight per tuber varied strongly and similarly across seasons, the number of tubers per plant hardly did (Fig. 1). This is elaborated below.
A very uncertain factor is the physiological age of the seed tubers and how that affected crop production, yield and crop and tuber health. We expect the age to be relatively young, given the short storage duration of 67-75 days between harvest and planting. This may have resulted in uneven sprouting, relatively few main stems and probably, but not necessarily, late tuberization. Poor emergence in cv. Katchpot 1 might have been related to a longer dormancy of this cultivar. Poor emergence due to dormancy can hinder the effect of positive selection because of the need to be less selective in order to harvest enough seed tubers. Therefore, the average quality of tubers from a poorly emerged crop is most likely lower than with a good stand when positive selection is applied. During storage, tubers were covered by grasses, according to the local practices. This was said to enhance sprouting, which supports the idea of a young physiological age of the tubers being a point of attention.
Plant Losses
The varying number of plants that was harvested not only greatly increased the plot-to-plot variation and thereby the experimental variation, it also reduced the yield levels compared to what would have been possible. Losses were often larger in some plots than in others; in the most extreme case, only 54.2% of planted tubers produced harvested plants. A lower number of harvested plants was caused either by a lower number of emerged plants, a higher plant number rogued because of bacterial wilt or more plant losses due to unaccounted reasons.
Low plant emergence may have had different reasons. Low emergence was especially found in cv. Katchpot 1 in the short rainy seasons (the 2nd and 4th seasons). The low emergence might be attributed to unfavourable soil conditions (Struik and Wiersema 1999), like lack of rain and adverse soil structure. Drought during short rainy seasons and more uneven sprouting of some seed tubers might have hindered emergence. Additionally, in some cases, the planting depth used by the farmers to plant the experiments might have been deeper than optimal, which may also have affected emergence. At times, smaller-sized seed tubers also had to be used for planting when there were not enough medium-sized seed tubers from the previous harvest (Table 1), and a lower number of plants emerged in the plots when small-sized seed tubers were planted.
During crop growth after emergence, plant losses due to bacterial wilt and/or unaccounted reasons occurred in almost all experiments, seed lots, seasons and selection treatments. Bacterial wilt losses did not occur in Experiment 1, in Karengyere, the site at the highest altitude of the three locations. In Experiments 2 and 3, losses due to bacterial wilt seemed to increase slightly across the seasons (Fig. 1), mostly in farmers' selected seed (Fig. 1, Tables 3 and 4) and mostly in cv. Victoria (Fig. 1). The maximum loss due to bacterial wilt was 28.7% in one season in the seed lot of cv. Victoria from the market (Experiment 3). Unaccounted loss of plants, which again led to fewer plants harvested, appeared more frequently in PS plots than in FS plots; the maximum unaccounted loss of plants was 22.5% in the 1st season in cv. Katchpot 1 in Experiment 2.
All these causes of reduction in the plant number not only decrease fresh tuber yield per hectare but also necessitate selecting a larger percentage of the remaining plants as source for seed tuber production. Priegnitz et al. (2019b) mentioned a low selection pressure as an important factor for the high virus levels found, next to a high basic virus level and a high transmission risk. The necessity to select a relatively large part of the plants adds to reducing this selection pressure.
Tuber Number per Plant
Whereas tuber yield per hectare, tuber yield per plant and average weight per tuber varied strongly across seasons, the number of tubers per plant was relatively stable (Fig. 1). Under the experimental conditions, only c. 5.5 tubers per plant were produced in the seed lots of cv. Victoria and cv. Katchpot 1, with slightly higher numbers in Experiment 3. Inside a plot in which selection was carried out, the number of tubers per plant was only slightly higher in the selected plants than in the non-selected plants (Fig. 2); this difference in number was much smaller than the differences in yield per plant (Fig. 3). A low tuber number might be related to a low stem number due to the relatively physiologically young tubers that had to be planted, with the total period between harvest and planting being only 67-75 days. However, stem numbers only seemed to be related to tuber numbers to some extent in Experiment 3, suggesting a maximum number of tubers set per plant in Experiments 1 and 2 regardless of the stem number per plant (Fig. 5, summarizing data from Tables 2, 3 and 4 and Supplementary Tables 2A-4A).
The low numbers of tubers per plant have huge consequences for positive selection. The low number of tubers means that even in the ideal case that all tubers of a plant would be of the desired (medium) size for planting (which is not the case) and all planted tubers would result in harvested plants (which is also not the case), it may be difficult under the present farming conditions in southwestern Uganda to increase the selection pressure to less than 1 plant out of every 5.5 plants. In our experiments, in total 15 out of a maximum of 40 plants were selected (1 out of 2.7). This can be increased under farmer conditions to some extent (because there is no need for extra experimental tubers to plant the guard rows of the experimental plots) and this may also increase the quality of the seed tubers produced. However, in selecting plants for positive selection, it may not be sufficient to select 10-15% of the plants (1 out of 6.7-10 plants). This will never lead to enough seed tubers for planting the next crop in cultivars that produce only 5.5 tubers per plant and means that under the conditions leading to this multiplication rate, selection pressure may not be as high as would be desired. A worked example of this arithmetic is given below.
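The multiplication arithmetic can be made explicit in a few lines (the fraction of tubers of plantable, medium size is an assumption of mine; the other numbers come from the text):

```python
import math

tubers_per_plant = 5.5    # cvs. Victoria and Katchpot 1 in these trials
tubers_needed = 60        # seed tubers required to replant one plot
medium_fraction = 0.75    # assumed share of tubers of plantable size

def plants_required(needed, per_plant, usable_fraction):
    """Minimum number of selected plants needed to replant a plot."""
    return math.ceil(needed / (per_plant * usable_fraction))

print(plants_required(tubers_needed, tubers_per_plant, medium_fraction))
# ~15 plants, matching the number selected per plot in the trials.
# Selecting only 10-15% of 40 plants (4-6 plants) would give at most
# 6 * 5.5 * 0.75 ~ 25 plantable tubers, far short of the 60 required.
```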
Genotypes with a higher number of tubers per plant (like cv. Rwangume in Experiment 3) can improve the situation but may lead to very small tubers in seasons when yields are low. At this moment insight into the factors determining the stem and tuber number under the local conditions is not complete. Methods to increase the stem and tuber number per plant might be investigated, but they might interfere with the idea of positive selection to be carried out inside a ware potato crop. Nevertheless, the present method of positive selection at the present multiplication rate may already be sufficiently attractive for smallholder farmers as a possibility to increase tuber yield in potato.
Concluding Remarks
Vital points to combat seed degeneration due to high virus pressure in the environment are good seed quality and good crop management, because they determine potato tuber yields (Struik and Wiersema 1999;Haverkort and Struik 2015).
Continuous positive selection in multiple seasons was able to maintain yield levels at a higher level than continuous FS. The yield difference in the experiments varied but was on average 12%. The yield increase by using PS usually resulted from higher yields per plant and in Experiment 2 also from more plants harvested compared to using FS. The higher yields per plant under PS were associated with higher weights per tuber whereas the difference between PS and FS in number of tubers was not significant.
The field experimentation had to deal with a variety of circumstances (bacterial wilt, unaccounted plant loss, little rainfall in the short rainy seasons) due to the "real life" conditions in southwestern Uganda that limited the exploitation of the full potential of PS. These circumstances affected plant numbers and yield per plant. The high unaccounted losses in the experiments hindered the success of positive selection. Crops under PS seemed to suffer more from unaccounted plant losses than crops under FS, but in crops under FS more plants were rogued because of bacterial wilt. We conclude that continuously applied positive selection is a reliable option to keep yield at higher levels than farmers' selection.
In all experiments, the healthy-looking plants chosen for positive selection had more tubers and almost always a higher tuber weight per plant than the non-selected plants in the same plot (Figs. 2 and 3) and these tubers also were healthier or less infected (Priegnitz et al. 2019b). This shows that the visual selection based on aboveground performance was also effective in selecting plants with better belowground characteristics. The higher numbers of tubers in PS-selected plants also make the seed selection process and the multiplication slightly more efficient than expected based on average numbers of tubers per plant. This is especially important in cultivars producing only a low number of tubers per plant, like cv. Victoria and cv. Katchpot 1 under the investigated conditions.
The trials with good crop management practices showed that yields up to 25 Mg ha−1 can be achieved, which is much higher than the national mean yield of 4.2 Mg ha−1. The experiments also showed that when seed tubers from positive selection are planted, an increase in yield can be achieved compared to when tubers from farmers' selection are planted. Positive selection is a tool that fits into the current seed system of southwestern Uganda to lower the degeneration rate in seed potatoes and to gain a higher yield in smallholder potato production.
"year": 2020,
"sha1": "909af9f11c328ecc4ab6bbd3f3a097b81971684a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11540-020-09455-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "909af9f11c328ecc4ab6bbd3f3a097b81971684a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
54628029 | pes2o/s2orc | v3-fos-license | Potential Energy of the Electron in a Hydrogen Atom and a Model of a Virtual Particle Pair Constituting the Vacuum
In a previously published paper, the author made some mistakes in calculating the potential energy of the electron in a hydrogen atom. Those mistakes occurred due to applying a potential energy formula with a certain range of application in a region where it is not applicable. Therefore, this paper corrects that error by deriving a formula for potential energy with no range of application. The paper also proposes a model in which a virtual particle pair present in the vacuum region inside a hydrogen atom simultaneously has a photon with positive energy and a photon with negative energy (in this paper, these photons are called dark photons). In the state where the relativistic energy $E_{re}$ is zero, the sum of the positive energy and negative energy of the virtual particle pair becomes zero. According to this model, this makes it possible for the particles to release photons and capture negative energy.
Introduction
One of the most important relationships in the Special Theory of Relativity (STR) is as follows:

$$\left(mc^2\right)^2 = \left(m_0c^2\right)^2 + c^2p^2. \qquad (1)$$

Here, $mc^2$ is the relativistic energy of an object or a particle, and $m_0c^2$ is the rest mass energy.
Currently, Einstein's relationship (1) is used to describe the energy and momentum of particles in free space, but for explaining the behavior of bound electrons inside atoms, opinion has shifted to quantum mechanics as represented by equations such as Dirac's relativistic wave equation.
For reasons such as these, there was no search for a relationship between energy and momentum applicable to an electron in the hydrogen atom. However, the author has ventured to take up this problem, and derived the following relationship (Suto, 2011):

$$E_{re}^2 + c^2p^2 = \left(m_ec^2\right)^2. \qquad (2)$$

Here, $E_{re}$ is the relativistic energy of the electron, described with an absolute scale. From Equations (1) and (2) it is evident that, if a stationary electron begins to move in free space, or is incorporated into an atom, then the energy which serves as the departure point is the rest mass energy. Consider the case where an electron currently stationary in free space is drawn to a proton to form a hydrogen atom. At this time, the rest mass energy of the electron decreases.
The decrease in the rest mass energy of the electron is expressed as $-\Delta m_ec^2$. If the energy of the photon released when an electron is drawn into a hydrogen atom is taken to be $h\nu$, and the kinetic energy acquired by the electron is taken to be $K$, then the relationship given in Equation (3) holds. The author also presented Equation (4), indicating the relationship between the rest mass energy and potential energy of the electron in a hydrogen atom (Suto, 2009). From Equations (3) and (4), it is evident that the relationship given in Equation (5) holds between potential energy and kinetic energy.

Also, the potential energy $V(r)$ of the electron is assumed to be 0 when the electron is at rest at a position infinitely far from the proton; thus it becomes smaller than that inside the atom, and can be described as follows:

$$V(r) = -\frac{1}{4\pi\varepsilon_0}\frac{e^2}{r}. \qquad (6)$$

There is a lower limit to the potential energy, which bounds the range that this energy can assume (Equation (7)). If this lower-limit value is substituted for $V$ in Equation (6), the resulting radius is

$$r = r_e, \qquad (8)$$

where $r_e$ is the classical electron radius. From this, it is evident that Equation (6) has the following range of application:

$$r_e \le r. \qquad (9)$$
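As a quick check of Equations (8) and (9), here is a minimal worked step. It assumes the Coulomb form of Equation (6) above and a lower-limit value of $-m_ec^2$, which is the choice consistent with the stated range $r_e \le r$:

$$-m_ec^2 = -\frac{1}{4\pi\varepsilon_0}\frac{e^2}{r} \;\Longrightarrow\; r = \frac{e^2}{4\pi\varepsilon_0\,m_ec^2} = r_e \approx 2.818\times10^{-15}\,\mathrm{m}.$$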
However, the author also applied Equation (6) in the range where $r < r_e$. Thus, in the following section, a formula for potential energy with no range of application is derived, and that error is corrected.
Formula for Potential Energy of the Electron with No Range of Application
The relativistic energy $E_{re}$ of the electron forming a hydrogen atom can be approximately defined as follows:

$$E_{re} \approx m_ec^2 + E. \qquad (10a)$$

Equation (10a) is an approximation because the total mechanical energy $E$ of a hydrogen atom derived by Bohr is an approximate value. (A rigorous definition of $E_{re,n}$ is given below.) Here, the $E$ in Equation (10a) corresponds to the decrease in the rest mass energy of the electron. Due to Equation (10d), the constraint given in Equation (11) holds regarding the relativistic energy $E_{re}$ of the electron (here, the discussion is limited to the ordinary energy levels of the electron).
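For orientation, the scale of these quantities can be made concrete. This is a hedged illustration: it uses Equation (10a) as reconstructed above, takes Bohr's ground-state value $E_{B,1} = -13.6$ eV for $E$, and assumes the constraint of Equation (11) has the form implied by the surrounding text:

$$E_{re,1} \approx m_ec^2 + E_{B,1} = 511\,\mathrm{keV} - 13.6\,\mathrm{eV} \approx 510.99\,\mathrm{keV}, \qquad 0 \le E_{re} \le m_ec^2.$$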
The following formula, Equation (12), can also be derived from Equation (10b). Equation (12) is a formula for potential energy with no range of application. To determine the potential energy in all regions within a hydrogen atom, Equation (6) alone is not sufficient, and the support of Equation (12) is needed.
Here, $\alpha$ is the fine-structure constant:

$$\alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c}.$$

Equation (13) is divided into the following two equations, by taking the positive energy levels among the relativistic energy levels of the electron forming a hydrogen atom to be $E^+_{re,n}$, and the negative energy levels to be $E^-_{re,n}$.
Incidentally, the virtual particle pairs constituting the vacuum are formed from a virtual electron and a virtual positron. As will be discussed below, the energy when $n = 0$ is thought to be not the energy of the electron, but the energy of a virtual electron, and thus it is excluded here. ($E_{re} = 0$ is the energy of the virtual particle pair. However, the problem being addressed here is the energy of the electron, so here $E_{re,0} = 0$ is regarded as the energy level of the virtual electron.)
When Equation (16) is used, the normal energy levels of a hydrogen atom are as given in Equation (18).
Now, if a Taylor expansion is performed on the right side of Equation (18), the equation for the energies becomes as follows.
Incidentally, in the classical quantum theory of Bohr, the energy levels $E_{B,n}$ of a hydrogen atom are given by the following formula (here the B in $E_{B,n}$ stands for "Bohr"):

$$E_{B,n} = -\frac{1}{2}\left(\frac{e^2}{4\pi\varepsilon_0}\right)^2\frac{m_e}{\hbar^2n^2} = -\frac{\alpha^2m_ec^2}{2n^2}. \qquad (21)$$
From this, it is evident that Bohr's energy equation, Equation (21), is an approximation of Equation (18).
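The connection can be written out explicitly. The closed form of Equation (18) used here is an assumption inferred from Equations (16) and (21) and the surrounding text, not a quotation of the original:

$$E_n = m_ec^2\left(\frac{n}{\sqrt{n^2+\alpha^2}}-1\right) = m_ec^2\left[\left(1+\frac{\alpha^2}{n^2}\right)^{-1/2}-1\right] \approx m_ec^2\left(-\frac{\alpha^2}{2n^2}+\frac{3\alpha^4}{8n^4}-\cdots\right),$$

whose leading term is exactly Bohr's Equation (21), $E_{B,n} = -\alpha^2m_ec^2/2n^2 \approx -13.6\,\mathrm{eV}/n^2$.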
Incidentally, the relativistic energy of the electron can also be written as in Equation (23b).
Next, Equations (16) and (23b) are joined with an equals sign. From this, the following quadratic equation is obtained.
When the Taylor expansion of Equation (26) is taken, the result is as given in Equation (27).
Here, consider the negative solution $r_n^-$ of Equation (27). Since $r^-$ converges to $r_e/4$, $r_e/4$ can be regarded as the radius of the atomic nucleus of a hydrogen atom (i.e., the proton). Hence, the theoretical value of the proton radius is:

$$r_p = \frac{r_e}{4}. \qquad (29)$$

However, if an attempt is actually made to measure the size of the proton (atomic nucleus), the energy of the proton changes. The size of the proton depends on the proton's energy, and thus the measured value does not match Equation (29) (Randolf, 2010; Suto, 2014c). In addition, it is possible to predict that a different measurement value will be obtained from an experiment using a different measurement method.
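Numerically, using the standard value of the classical electron radius (the experimental comparison value is the muonic-hydrogen proton charge radius, quoted here as an outside check rather than from the original paper):

$$\frac{r_e}{4} = \frac{2.8179\times10^{-15}\,\mathrm{m}}{4} \approx 7.045\times10^{-16}\,\mathrm{m} = 0.7045\,\mathrm{fm}, \qquad r_p^{\mathrm{exp}} \approx 0.84\,\mathrm{fm},$$

so the model value lies roughly 16% below the measured charge radius, the kind of mismatch the text attributes to the energy dependence of the measurement.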
Correction of Potential Energy of the Electron in a Hydrogen Atom
The points where the author made a mistake in the value of potential energy of the electron are points 1* to 3* in the following diagram (see Figure 1) (Suto, 2017).
Originally, the potential energy at points 1* to 3* was found from Equation (6), but potential energy in this region must be found from Equation (12); that is, the corrected values are given by Equations (32a) to (32c). Next, the regions in a hydrogen atom are classified as follows at the level of classical theory, while taking into account Equations (7) to (12) (see Table 1). Region A is the region where the electron forming a hydrogen atom exists. However, in Region B, there is no change in the potential energy of the particle. Therefore, what exists in this region is not charged particles. Thus, this paper predicts that Region B is a region of a virtual particle pair formed from a virtual electron and virtual positron. Virtual particle pairs are the particles constituting the vacuum. In this region, the kinetic energy of a virtual particle pair decreases as the particle pair approaches the atomic nucleus. However, in the regions of the electron, kinetic energy increases as the electron approaches the atomic nucleus.
A virtual particle pair with $E_{re} = 0$ exists in State 0. When this virtual particle pair absorbs $m_ec^2$ of energy, the virtual particle pair transitions into State a. (At this time, the energy of the virtual electron is 1/2 the energy of the virtual particle pair, and therefore is $m_ec^2/2$.)
Also, this paper predicts that this virtual particle pair will separate into a virtual electron and virtual positron in State a.
Region C is a region symmetrical with Region B in terms of energy. The virtual particle pairs existing in this region have a negative energy (mass).
Region D is symmetric with Region A in terms of energy. Electrons in this region have the negative energy (mass) given in Equation (17). The author has already pointed out that the system formed from a proton and an electron with negative energy is a candidate for dark matter, a type of matter whose true nature is unknown. (The author calls electrons with this negative energy dark electrons, and photons with negative energy dark photons.) When Figure 1 is corrected based on the above, the result is as follows (see Figure 2). In Figure 2, part of the range is the region of the virtual particle pair constituting the vacuum, but the energy in this diagram indicates the energy of the virtual electron (the energy of the virtual particle pair is twice the energy of the virtual electron). Also, $r_e/4 < r < r_e/3$ is the region of the electron with negative energy (mass). (This diagram is cited from another paper, but the values for potential energy at 1* to 3* were mistaken, and they are corrected in this paper.)
Discussion
In the previous section, the potential energy value of the electron was corrected, and thus the original purpose of this paper was achieved. However, there are still a number of points that can be discussed.
1) How does a virtual particle pair with $E_{re} = 0$ acquire negative energy? This paper examines two interpretations.
Interpretation 1: A virtual particle pair absorbs a dark photon with negative energy, and lowers its energy level.
However, a dark photon has never been observed in the natural world. Therefore, this interpretation cannot be supported. Thus, the previous view of virtual particle pairs with $E_{re} = 0$ is reexamined.
That is, Previous view: A virtual particle pair with $E_{re} = 0$ is in a state where rest mass energy has been completely consumed, i.e., (to use a colloquial expression) a naked state unclothed by photon energy.
The interpretation of this paper (Interpretation 2): A virtual particle with $E_{re} = 0$ simultaneously has a photon with positive energy and a dark photon with negative energy. If here the positive photon energy is taken to be $E_P$ ($E_P > 0$) and the energy of the dark photon is taken to be $E_{DP}$ ($E_{DP} < 0$), then $E_{re} = 0$ can be defined as the state where the sum of $E_P$ and $E_{DP}$ is zero. That is,

$$E_{re,0} = E_P + E_{DP} = 0.$$
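A minimal bookkeeping example of this definition, using $m_ec^2 = 0.511$ MeV; the equal split between the two constituents follows the State-a statement made earlier:

$$E_{re,0} = E_P + E_{DP} = 0; \qquad E_{re,0} + m_ec^2 = 0.511\,\mathrm{MeV} \;\Longrightarrow\; E_{\mathrm{virtual\ electron}} = \tfrac{1}{2}(0.511\,\mathrm{MeV}) \approx 0.256\,\mathrm{MeV}.$$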
Incidentally, the definition of the rest mass energy $E_0$ of the electron is $E_0 = m_ec^2$. However, if the model here is used, this energy can be defined accordingly. Also, according to this model of the virtual particle pair, the $E^-_{re,n}$ in Equation (17) is not the energy of the dark photon belonging to the dark electron. $E^-_{re,n}$ corresponds to the sum of the energy of the photon belonging to the dark electron and the energy of the dark photon.
2) To estimate the number of virtual particle pairs present in the vacuum region inside a hydrogen atom, let us look at triplet production. Now, consider the case where an incident γ-ray has the energy corresponding to the mass of 4 electrons (2.044 MeV). If this is discussed classically, the γ-ray can create an electron and positron near $r = r_e/2$ (see Figure 3), and an electron-positron pair will be created (①). When this γ-ray approaches closer to the atomic nucleus, and the electron in the orbital around the proton absorbs this energy, the electron will be excited and appear in free space (②). If multiple virtual particle pairs exist in the $E_{re} = 0$ state, then there is potentially a probability that two electrons and two positrons are produced in the process in ①. However, a phenomenon of this sort has not been observed.
Even if 1.022 MeV of energy is consumed in this pair creation, the γ-ray still has the energy corresponding to the mass of 2 electrons (1.022 MeV). If the γ-ray gives energy to an electron in the orbital near the proton, the electron will be excited and appear in free space. As a result, 2 electrons and 1 positron will appear in free space.
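The energy bookkeeping in this argument is simply multiples of the electron rest energy (a worked restatement using $m_ec^2 = 0.511$ MeV):

$$2m_ec^2 = 1.022\,\mathrm{MeV}, \qquad 4m_ec^2 = 2.044\,\mathrm{MeV}, \qquad 2.044\,\mathrm{MeV} - 1.022\,\mathrm{MeV} = 1.022\,\mathrm{MeV},$$

i.e., after one pair creation the γ-ray retains exactly one more pair-creation threshold of energy.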
However, if multiple virtual particle pairs exist in the $E_{re,0}$ state, then in process ①, there should also be a probability of producing two pairs (2 electrons and 2 positrons) from an energy of 2.044 MeV ($4m_ec^2$). However, quadruplet production has never actually been observed. Thus, this paper predicts 1 virtual particle pair in State 0. There is also a probability that, aside from an electron, a single virtual electron and virtual positron are present in Region A. Taking these points into consideration, there is a possibility that the previous definition of the hydrogen atom is too simple, and reconsideration may be necessary. That is,

Previous view: Hydrogen atom = 1 proton + 1 electron

Model to be examined: Hydrogen atom = 1 proton + 1 electron + 1 virtual particle pair (or 1 virtual electron + 1 virtual positron)

Here, if one virtual particle pair is added, then the model is applied when the particle pair is present in Region B, and if 2 virtual particles are added, the model is applied when the virtual particles are present in Region A.
3) If the energy absorbed by the virtual particle pair with $E_{re} = 0$ is in the relevant range, then the virtual electron and virtual positron separated in State a are present temporarily in Region A. Now, what is the difference between the electron forming the hydrogen atom and the virtual electron? It is difficult to discriminate these 2 particles from the perspective of energy. However, the virtual electron and virtual positron in Region A are likely not completely separated, and are in a state of quantum entanglement. Therefore, the electron and virtual electron in Region A are not in the same state.
There is also thought to be a probability that a virtual positron separated from a virtual electron at $r = r_e$ will approach the electron of the hydrogen atom and form a virtual particle pair. If a new virtual particle pair is formed here, the remaining virtual electron will then behave as the electron of the hydrogen atom. If this model is assumed to be correct, the electron in the hydrogen atom does not describe a continuous trajectory, and its motion is discontinuous. Also, it is predicted that the electron will behave as though it had moved to another location instantaneously (at superluminal speed).

4) As is also evident from Figure 2, the position occupied by the electron and dark electron in the hydrogen atom, and the region of energy, are only a small part of the whole. The remaining majority is the region of the virtual particle pair and virtual particles (virtual electron and virtual positron). If even the virtual particle pair is included in the constituents of the hydrogen atom, then there will be a need to derive the energy levels of the virtual particle pair. The energy levels of the electron and dark electron are discrete, and thus based on common sense, the energy levels of the virtual particle pair are also predicted to be discrete.
Conclusion
1) In this paper, Equation (12) was used to correct the value for potential energy in a hydrogen atom, previously found incorrectly by the author. As a result, Figure 1 has been corrected as shown in Figure 2.
2) According to the model proposed in this paper, a virtual particle pair simultaneously has a photon with positive energy $E_P$ and a dark photon with negative energy $E_{DP}$.
In this case, the previous energy is redefined as follows.
i) If the relativistic energy of a virtual particle pair is zero, the previous definition is replaced by the redefinition given above. Incidentally, the existence of dark photons cannot be directly demonstrated by experiment, just like virtual particle pairs. However, if in the future it is possible to demonstrate the existence of negative energy levels $E^-_{re,n}$ and dark electrons in the hydrogen atom, then the existence of dark photons will also be simultaneously demonstrated.
In Equation (10a), $E_{re}$ was defined from $m_ec^2$ and $E$, but it is actually correct to define $E$ from $m_ec^2$ and $E_{re}$.
Figure 1. Relationship between energies of the electron and virtual electron present in a hydrogen atom, and their positions $r$. The region where the electron forming the hydrogen atom exists is $r_e < r$.
Figure 2. In this figure, potential energy (vertical line) has been erased in the region where potential energy does not exist ($r_e/3 < r < r_e$). Also, as the electron in Region A, and the dark electron in Region D, approach the atomic nucleus, the kinetic energy of the electron increases. Thus $K$ in this region is shown with a dashed line.
Figure 3. Interpretation of this paper regarding triplet production. This γ-ray will give 1.022 MeV of energy to the virtual particles at $r = r_e/2$, and an electron-positron pair will be created.
Table 1. Regions and states. This is Figure 1 made into a table. Here, the value of $K$ was found from Equation (12). | 2018-12-13T01:11:19.266Z | 2018-07-26T00:00:00.000 | {
"year": 2018,
"sha1": "fe40e361207b7b9282887a943f4c11010faf404b",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/apr/article/download/76686/42459",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fe40e361207b7b9282887a943f4c11010faf404b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255207833 | pes2o/s2orc | v3-fos-license | The Use of Polymers to Enhance Post-Orthodontic Tooth Stability
Relapse after orthodontic treatment occurs at a rate of about 70 to 90%, and this phenomenon is an orthodontic issue that has not yet been resolved. Retention devices are one attempt at prevention, but they require a considerable amount of time. Most orthodontists continue to find it challenging to manage orthodontic relapse; therefore, additional research is required. In line with existing knowledge regarding the biological basis of relapse, biomedical engineering approaches to relapse regulation show promise. With so many possible uses in biomedical engineering, polymeric materials have long been at the forefront of the materials world. Orthodontics is an emerging field, and scientists are paying a great deal of attention to polymers because of their potential applications in this area. In recent years, the controlled release of bisphosphonate risedronate using a topically applied gelatin hydrogel has been demonstrated to be effective in reducing relapse. Simvastatin encapsulation in exosomes generated from periodontal ligament stem cells can promote simvastatin solubility and increase the inhibitory action of orthodontic relapse. Moreover, the local injection of epigallocatechin gallate-modified gelatin suppresses osteoclastogenesis and could be developed as a novel treatment method to modify tooth movement and inhibit orthodontic relapse. Furthermore, the intrasulcular administration of hydrogel carbonated hydroxyapatite-incorporated advanced platelet-rich fibrin has been shown to minimize orthodontic relapse. The objective of this review was to provide an overview of the use of polymer materials to reduce post-orthodontic relapse. We assume that bone remodeling is a crucial factor even though the exact process by which orthodontic correction is lost after retention is not fully known. Delivery of a polymer containing elements that altered osteoclast activity inhibited osteoclastogenesis and blocking orthodontic relapse. The most promising polymeric materials and their potential orthodontic uses for the prevention of orthodontic relapse are also discussed.
Introduction
Society today is experiencing an increasing interest in cosmetic dentistry, making orthodontics an essential treatment field. Orthodontic treatment has become one of the most popular treatments in cosmetic dentistry; it is used to correct malocclusion, enhance occlusion, and attain dentofacial harmony [1]. Even after several years of post-treatment stabilization, corrected teeth frequently relapse. Relapse can be explained as a phenomenon that occurs after treatment, in which the corrected tooth arrangement returns to its original pre-treatment position [2]. Relapse following orthodontic treatment occurs at a rate that ranges from around 70 to 90%, and this occurrence is an orthodontic problem that has not yet been resolved [3]. Retention is regarded as the last and most important stage in (Figure 1), and patient acceptance [5,6]. Nevertheless, relapse is still a possibility 10 years after the retainer has been removed [7].
Polymer science has been the most popular field of study due to its vast applicabilit in engineering modern materials for the enhancement of structural and functional qual ties in clinical and biomedical applications. A variety of polymers have optimal qualities and the chemical modification of these polymers can improve their cytocompatibility, b oactivity, and antibacterial capabilities [12,13]. Several studies have revealed that poly meric materials may be used in tissue engineering to reconstruct cartilage, bone, and hear valves, and as skin, hip, and dental implants [14][15][16][17][18]. Polymers have gained a great dea of attention from academics in recent years due to their potential use in the rapidly devel oping field of orthodontics [19]. The local injection of epigallocatechin gallate-modifie gelatin inhibits osteoclastogenesis and has the potential to be evolved into an unique ther apeutic strategy that modifies tooth movement and prevents orthodontic relapse [20 Topically applied bisphosphonate risedronate with gelatin hydrogel reduces relapse days after tooth stability in a dose-dependent manner. The proposed gelatin hydroge method may administer risedronate to a specific area and give local effects, which is ad The process of alveolar bone remodeling was found to play a significant role in orthodontic relapse occurrence, as discovered by Franzen et al. in their animal study [8]. The process of bone remodeling can be regarded as a kind of turnover, in which newly created bone replaces older bone [9]. The dynamic process of bone remodeling is controlled by osteoclasts, which are cells that dissolve bone; osteoblasts, which are cells that produce new bone; and bone mesenchymal stem cells. All of the aforementioned cells communicate and collaborate to achieve bone remodeling [10]. Relapse can be effectively decreased by biological agents that inhibit bone resorption and stimulate bone formation [11]. These results indicate that controlling alveolar bone remodeling after active orthodontic tooth movement is a crucial method for preventing relapse.
Polymer science has been the most popular field of study due to its vast applicability in engineering modern materials for the enhancement of structural and functional qualities in clinical and biomedical applications. A variety of polymers have optimal qualities, and the chemical modification of these polymers can improve their cytocompatibility, bioactivity, and antibacterial capabilities [12,13]. Several studies have revealed that polymeric materials may be used in tissue engineering to reconstruct cartilage, bone, and heart valves, and as skin, hip, and dental implants [14][15][16][17][18]. Polymers have gained a great deal of attention from academics in recent years due to their potential use in the rapidly developing field of orthodontics [19]. The local injection of epigallocatechin gallate-modified gelatin inhibits osteoclastogenesis and has the potential to be developed into a unique therapeutic strategy that modifies tooth movement and prevents orthodontic relapse [20]. Topically applied bisphosphonate risedronate with gelatin hydrogel reduces relapse 7 days after tooth stability in a dose-dependent manner. The proposed gelatin hydrogel method may administer risedronate to a specific area and give local effects, which is advantageous in orthodontic therapy [21]. Hydrogel carbonated hydroxyapatite-incorporated advanced platelet-rich fibrin is effective as a biological retainer for reducing orthodontic relapse. In this study, the method of applying an osteoinductive and osteoconductive substance was minimally invasive, cost-effective, and suitable for minimizing relapse after active orthodontic tooth movement [22]. In a rat model of orthodontic tooth movement, encapsulating simvastatin into exosomes generated from periodontal ligament stem cells improved simvastatin solubility and increased the inhibitory effect on relapse. Interestingly, exosomes of periodontal ligament stem cells administered locally can help prevent relapse as well [23]. The purpose of this review was to present an overview of the use of polymers as materials to lower the risk of post-orthodontic relapse.
Application of Hydrogel Carbonated Hydroxyapatite-Incorporated Advanced Platelet-Rich Fibrin Improves Post-Orthodontic Tooth Stability
Tissue engineering technologies have previously been promoted for manipulating alveolar bone remodeling, preventing orthodontic relapse, and improving tooth position stability. Because of its well-controlled calcium release and bone forming capacity, carbonate apatite (CHA) has great potential for bone tissue engineering [24]. Since it exhibits structural similarity to the interconnecting porous structure of bone, CHA is regarded as a good biomaterial for promoting alveolar bone rebuilding [25]. By elevating calcium and phosphate levels in the local area, which are essential for bone development, CHA promotes bone remodeling. The activity of the osteoblasts is controlled by the release of calcium and phosphate ions into the surrounding tissue. High levels of extracellular calcium also inhibit osteoclastic development and promote DNA synthesis and chemotaxis in osteoblastic cells [22]. CHA has also gained prominence for its capacity to function as a medication delivery system for protein transport into living cells [26].
A further advantage of CHA is its capacity to function as a drug delivery system in controlled release innovation [27]. One of the most recent issues in tissue engineering is the advancement of controlled release methods for bone tissue augmentation. The controlled release system is seen as promising because it can convert materials with a low molecular weight into a system with a higher molecular weight, preventing degradation before the drug begins to act. In order to manage the water content of the hydrogel system, gelatin hydrogel was selected in this work to provide controlled release and degradability using a cross-linking technique. The hydrogel can be degraded enzymatically to produce water-soluble gelatin fragments, allowing bioactive-loaded components to be released [28].
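To make the controlled-release idea concrete, the short sketch below simulates a two-stage release profile of the kind often used to describe degradable hydrogels: a diffusion-driven Korsmeyer-Peppas phase combined with degradation-limited release, with the overall release limited by the slower mechanism. This is an illustrative model only; the rate constants, exponent, and 14-day window are hypothetical and are not taken from the cited studies.

```python
import numpy as np

def korsmeyer_peppas(t, k=0.12, n=0.45):
    """Diffusion-driven fractional release, M_t/M_inf = k * t**n, capped at 1."""
    return np.minimum(k * np.power(t, n), 1.0)

def degradation_limited(t, k_deg=0.15):
    """First-order release governed by enzymatic breakdown of the gelatin matrix."""
    return 1.0 - np.exp(-k_deg * t)

# Hypothetical 14-day window, matching the ~2-week release horizon discussed above.
t_days = np.linspace(0.0, 14.0, 15)

# The slower of the two mechanisms limits the cumulative fraction released.
released = np.minimum(korsmeyer_peppas(t_days), degradation_limited(t_days))

for t, f in zip(t_days, released):
    print(f"day {t:4.1f}: cumulative release {100.0 * f:5.1f}%")
```

Taking the pointwise minimum is a crude way of encoding "release requires both diffusion out of the mesh and matrix erosion"; a mechanistic model would couple the two processes, but the resulting shape, an early burst flattening into a sustained phase, is the behavior the cross-linking technique described above is meant to produce.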
Growth factors (GFs) are natural polypeptides that stimulate extracellular matrix synthesis and enhance osteoblast development [29]. Taking into account the presence of GFs, it is hypothesized that a suitable incorporation of hydrogel CHA exhibiting controlled release and GF could achieve more favorable bone regeneration outcomes. Platelet-rich fibrin (PRF) is a new generation of GF-rich platelet concentrate that is more beneficial than other platelet concentrates, such as platelet-rich plasma (PRP), due to its simple preparation, low cost, and absence of anticoagulants, such as bovine thrombin and calcium chloride, for platelet activation [30]. Endogenous thrombin, released during centrifugation, can quickly activate PRF. The high risk of coagulopathy from antibody development has restricted the routine use of bovine thrombin [31]. The use of calcium chloride and thrombin to coagulate platelets into a gel and engage the contained GF initiates a burst release and activation of all the GFs of PRP simultaneously; thus, the period of action of PRP is brief. Meanwhile, PRF can maintain GF activity for a substantially longer duration and efficiently induce bone repair. Kobayashi et al. [32] revealed that over a period of 10 days, PRF released GFs continuously and consistently. Another technique to overcome PRP's half-life limitations is to use a biodegradable gelatin hydrogel drug delivery technology. The release of growth factors to ischemic areas is controlled by biodegradable gelatin hydrogel. Platelets release growth factors when impregnated into biodegradable gelatin hydrogel [33]. Gelatin molecules electrically and physiochemically immobilize growth factors in the hydrogel [34]. After injecting PRP and biodegradable gelatin hydrogel into ischemic tissue, the growth factor-impregnated hydrogel slowly releases growth factors into the tissue over 2 weeks, resulting in more successful angiogenesis [35]. Specifically, angiogenesis may enhance oxygen and nutrient supply and offer a pathway for bone precursor cells to reach the intended area [29].
Advanced PRF, also known as aPRF, is a novel form of PRF modification that is made by slowing down the centrifugation speed of the conventional methods of fibrin preparation and increasing the centrifugation time. Platelet concentrations are increased as a result of this modification because during the centrifugation process, fewer cells settle to the bottom of the tubes, and a greater quantity of proteins, including platelets, is left in the higher part of the tubes, where the clot is isolated [32,36]. The platelet count in PRF releasate was found to be 2.69 times greater than the count of platelets in whole blood, according to Burnouf et al. [37], whereas Alhasyimi et al. [38] demonstrated that the platelet count in aPRF releasate was 4.78 times higher than the count of platelets in whole blood.
The considerable increases in alkaline phosphatase activity at days 7 and 14 after orthodontic debonding suggest that intrasulcular injection of controlled release hydrogel CHA, including aPRF, has the potential to promote alveolar bone remodeling and prevent orthodontic relapse [25]. ALP has been identified as a measure of osteoblastic activity during bone formation [39]. The study found a link between alveolar bone remodeling and variations in ALP activity present in GCF. It suggested that osteoblastic cell activity was boosted when ALP levels were significantly elevated [40]. The increased osteoblast activity indicates that new bone is forming. Osteoblasts must continually drive bone regeneration to prevent relapse [22]. Osteoblasts subsequently commence bone apposition by generating fresh bone matrices within the osteoclast-formed trenches and tunnels on the bone's surface [41].
Hydrogel CHA with aPRF was shown to be effective in preventing orthodontic relapse in a rabbit model following tooth movement. CHA−aPRF intrasulcular injection has the potential to minimize orthodontic relapse by stimulating osteoprotegerin (OPG) expression and inhibiting the receptor activator of nuclear factor−κB ligand (RANKL) level. Inhibition of osteoclastogenesis and osteoclast activity by local injection of CHA−aPRF could improve orthodontic retention, according to the findings of this study [38]. OPG is a natural receptor expressed by osteoblasts which inhibits osteoclast differentiation and activity by binding to RANKL and blocking RANKL from interacting with RANK. The binding of RANKL and the RANK receptor leads to rapid differentiation of hematopoietic osteoclast precursors to mature osteoclasts [42]. During orthodontic relapse in rabbits, a similar study found that injections of CHA and CHA hydrogel−aPRF favorably upregulated transforming growth factor−β1 (TGF−β1) and bone morphogenic protein−2 (BMP−2) expression, but did not increase Runt−related transcription factor−2 (Runx−2) levels [43]. The expression of TGF−β1 and BMP−2 is crucial to the process of osteoblastogenesis. It is thought that osteoblastogenesis can balance out the activity of osteoclasts [44]. Runx−2 expression is reinforced and mesenchymal stem cell development is encouraged by a signaling cascade initiated by TGF−β1 and BMP−2 [45]. TGF−β1 promotes osteoblast proliferation by recruiting osteoblast precursors or matrix-producing osteoblasts via chemotactic attraction and by preventing osteoblast apoptosis [46]. TGF−β1 is the most potent bone formation stimulator, increasing fibroblast proliferation and stimulating collagen synthesis [47]. Furthermore, increased BMP−2 expression can induce osteoblast maturation and initiate alveolar bone formation to effectively prevent relapse [48]. BMP−2 can increase bone mass by decreasing osteoclastogenesis activity via the RANKL−OPG pathway [49]. Osteoclastogenesis inhibition has been shown to be effective in lowering relapse rates following orthodontic tooth movement [50]. Meanwhile, Runx−2 regulates the expression of RANKL and OPG by stimulating osteoclast differentiation [51]. However, the molecular mechanisms by which Runx−2 impedes improvements in osteoclastogenesis require additional study [43]. Figure 2 summarizes the process by which hydrogel CHA−aPRF prevents orthodontic relapse.
Figure 2. The mechanism of how hydrogel CHA−aPRF works to prevent orthodontic relapse.
Statins' Inhibitory Effect on Relapse after Orthodontic Treatment
The statin family of drugs is an effective treatment for arteriosclerotic cardiovascular disease. Statins have the capability to inhibit 3-hydroxy-3-methyl glutaryl reductase, a rate-limiting enzyme in the cholesterol biosynthesis mevalonate pathway [52]. Statins have been shown to have numerous favorable effects on human health, including anabolic effects on bone metabolism in various ways, in addition to their cholesterol-lowering properties. They promote the osteoblastic differentiation of bone marrow stem cells by upregulating BMP-2 gene expression and angiogenesis. Statins may also promote bone formation by preventing osteoblast apoptosis [53][54][55]. Statins suppress osteoclastic bone activity during periods of high bone turnover, resulting in the reduction of bone resorption. This effect involves the modulation of the receptor activator of nuclear kappa B (RANK), RANKL, and OPG, ultimately suppressing osteoclastogenesis [56,57]. Thus, their ability to stimulate bone formation while also exhibiting pleiotropic effects, such as anti-inflammatory and immunomodulatory properties, could justify their use in orthodontic relapse prevention [58]. Because osteoclastic resorption and osteoblastic formation of surrounding alveolar bone are important factors in relapse, stimulating alveolar bone formation or inhibiting bone resorption after orthodontic tooth movement should prevent relapse. Figure 3 illustrates the statin's role in reducing orthodontic relapse. Given the bone modulation properties of statins, their possible effect of blocking orthodontic relapse could be a consideration in orthodontic treatments. Table 1 summarizes the data from animal studies on the efficacy of prescribed statins in preventing postorthodontic relapse. On the basis of their solubility, statins are classified as lipophilic. Due to its low therapeutic potential and poor permeability, the distribution of a lipophilic substance presents a significant challenge for conventional delivery systems. According to studies, nanoscale-sized preparations can increase drug permeability by disturbing the lipid layer and lengthening drug retention duration at the site of action [58,59]. Because it has stronger thermodynamic stability and drug solubilization capability than emulsion and other dispersion systems, nanoemulsion may be a potential carrier delivery strategy for hydrophobic drugs. It also has a longer shelf life and requires little external energy to manufacture. A nanoemulsion is a dispersed system formed of nanoscale-sized (20-200 nm diameter) droplets of a solvent composed of an oil phase and a water phase and stabilized by the appropriate surfactant [60]. Statins in the form of nanoemulsions appear to be promising for orthodontic applications.
The statins used in the 7 studies listed in Table 1 are atorvastatin and simvastatin. Differences in the chemical structures of the two statins, the efficacy of one drug over the other, and dose dependence are all factors that may contribute to inconsistencies in the conclusions and their extrapolation to human subjects. This complicates the already difficult task of predicting the future outcomes of statin administration in humans. Simvastatin is soluble in lipids, which may be one reason why its anti-relapse effect was not particularly remarkable. In a novel study, exosomes derived from periodontal ligament stem cells (PDLSCs-Exo) were used as drug carriers to load simvastatin into exosomes. This was accomplished using ultrasound and co-incubation. Simvastatin can be loaded and rendered more soluble by PDLSCs-Exo. Interestingly, during orthodontic relapse, PDLSCs-Exo may control local alveolar bone remodeling by carrying various osteogenesis signaling molecules. As a result, even when PDLSCs-Exo is injected alone, it can still prevent relapse after orthodontic tooth movement (OTM) [23].

Table 1. The effect of statins in animal models of orthodontic relapse.
| Author(s) | Administration's Route, Type, and Dose of Statin | Result |
| Chen et al. [48] | Systemic administration of SIM, 2.5 mg/kg/day, 5.0 mg/kg/day, and 10.0 mg/kg/day | Systemic administration of SIM could reduce the incidence of orthodontic relapse in rats, and a lower dose of simvastatin appeared to be more effective. |
| Han et al. [56] | Intraperitoneal injections of SIM, 2.5 mg/kg/day | The SIM group showed shorter relapse distances than the control group (p < 0.01), and the percentage of relapses in the test group was significantly smaller than that in the control group (p < 0.001). |
| AlSwafeeri et al. [61] | Local injection (intraligamentous and submucosal) of SIM, 0.5 mg/480 µL | Local SIM administration helps postorthodontic relapse-related bone remodeling by reducing active bone resorption and increasing bone formation, but does not significantly reduce postorthodontic relapse. |
| Dolci et al. [62] | Systemic administration of ATO, 15 mg/kg | Statins reduce orthodontic relapse in rats by modulating bone remodeling. Decreased osteoclastogenesis and increased OPG protein expression explain this effect. |
| Vieira et al. [63] | Oral gavage of SIM, 5 mg/kg/day | SIM did not prevent relapse movement in rats, and there was no link between bone density and orthodontic relapse. |
| Feizbakhsh et al. [64] | Local injection of 0.5 mg/kg SIM in 1 mL solution | SIM local injection can reduce the rate of tooth movement and root resorption in dogs, but the differences were not statistically significant. |
| MirHashemi et al. [65] | Daily gavage of ATO, 5 mg/kg | In rats, ATO appeared to reduce tooth movement; however, its effect on osteoclasts, particularly regarding osteoclastic activity, requires additional research. |
Epigallocatechin Gallate-Modified Gelatin (EGCG-GL) Inhibits Bone Resorption and Tooth Movement in Rats
Downstream of RANKL is an intracellular signaling molecule called reactive oxygen species (ROS) [66]. As a result, scavenging ROS is an attractive strategy for inhibiting osteoclasts. Inhibition of osteoclastogenesis can be achieved by activating nuclear factor E2-related factor 2 (Nrf2) [67][68][69]. Epigallocatechin gallate's (EGCG) ability to stimulate Nrf2-mediated anti-oxidation and ROS scavenging slows down orthodontic tooth movement. Nevertheless, EGCG injections must be repeated if they are to successfully slow OTM by blocking osteoclastogenesis [70]. A previous study observed that repetitive local injections of EGCG solution reduced osteoclastogenesis and, as a result, slowed orthodontic tooth movement [67]. However, frequent local injections are not a viable therapeutic option for orthodontics. This issue was addressed by developing EGCG-GL, since it would benefit orthodontic patients by increasing anchorage strength and decreasing the rate of OTM [20]. In 2018, the first vacuum-heated EGCG-modified gelatin sponges for bone regeneration therapy were established. The observed increase in bone formation after vacuum heating can be attributed in part to the reduced degradability of the sponge caused by dehydrothermal (DHT) cross-linking, which offers a scaffold for cells. The results indicate that the pharmacological impact of EGCG survives vacuum heating and is associated with an increase in bone formation [71]. EGCG-GL was produced by chemically crosslinking EGCG and gelatin using a simple and eco-friendly synthetic approach [70], while preserving EGCG's activity [71]. Mixing EGCG-GL with bromelain, a combination of proteolytic enzymes isolated from pineapples, maintains the steady release of EGCG by gradually breaking down the gelatin [20].
EGCG inhibits LPS-induced RANKL expression in osteoblasts [72]. Furthermore, EGCG enhances the prostaglandin-stimulated production of OPG in osteoblasts in a synergistic manner [73,74]. Consequently, EGCG reduces the RANKL/OPG ratio at the location, which indirectly suppresses osteoclastic differentiation. Improvements in orthodontic retention may be possible through the suppression of osteoclastogenesis and osteoclast activity [38]. Flow cytometry showed that EGCG-GL inhibited RANKL-mediated intracellular ROS generation in RAW 264.7 cells. These findings indicate that EGCG-GL inhibits RANKL signaling through intracellular ROS formation [20].
The Potential Benefits of Using Bisphosphonate Risedronate Hydrogel to Prevent Orthodontic Relapse Movement
Bisphosphonates are medications used to treat diseases of the bone metabolism, such as osteoporosis. Bisphosphonates bind tightly to hydroxyapatite and inhibit bone resorption. They specifically target calcified tissues, where they are absorbed selectively by boneresorbing osteoclasts [75]. Once internalized, bisphosphonates downregulate the ability of osteoclasts to resorb bone by interfering with cytoskeletal organization and the formation of the ruffled border, resulting in apoptotic cell death [76,77]. Bisphosphonates have been proposed in orthodontics as a possible means of controlling relapse and even generating "pharmacological anchorage". The clinical utility of bisphosphonates stems from their capacity to prevent bone resorption. Anchorage loss and post-treatment relapse are two major concerns in orthodontic treatment [78]. Bisphosphonates were found to inhibit tooth movement in rats by decreasing osteoclast formation. Bisphosphonates also helped to prevent root resorption caused by orthodontic tooth movement. These findings imply that bisphosphonate may be beneficial for regulating orthodontic tooth movement and as a potential inhibitor of root resorption during orthodontic tooth movement and relapse after orthodontic tooth movement [79].
Bisphosphonates may cause bisphosphonate-related jaw osteonecrosis, an oral necrotic bone condition [80]. The duration, dosage, and intravenous and oral administration of bisphosphonate can cause a systemic effect [81]. All previously cited studies utilized pure bisphosphonates, without a carrier, to successfully treat periodontal disease. Adachi et al. revealed that the local injection of risedronate effectively lessens relapse, but with systemic side effects, including an increase in tibial bone mineral density [82]. Given that bisphosphonates affect the entire skeleton, the most effective method for treating periodontal bone loss would be a topical application [83]. On days 14 and 21 after active orthodontic tooth movement, the intrasulcular administration of bisphosphonate risedronate hydrogel altered the osteoclast-to-osteoblast ratio and raised alkaline phosphatase levels, and 7 days after the tooth stabilization period, the application of bisphosphonate risedronate with gelatin hydrogel efficiently reduces relapse in a dose-dependent manner. Hydrogel risedronate bisphosphonate improved the proliferation and maturation of osteoblasts, which play a crucial role in bone production, hence enhancing tooth stability during orthodontic movement. In orthodontics, the developed gelatin hydrogel technology can be used to administer risedronate precisely where it is needed for localized effects. These findings demonstrate the significance of bisphosphonate risedronate hydrogel in the bone remodeling process; hence, it has the ability to prevent relapse [21,84]. Figure 4 depicts the role that bisphosphonates play in reducing orthodontic relapse.

The standard of treatment in the medical field is shifting toward more minimally invasive techniques. The practice of dentistry known as "minimally intrusive dentistry" adheres to a philosophy that emphasizes the integration of prevention, remineralization, and minimal intervention in the placement and repair of restorations. The goal of treatment can be accomplished with minimally invasive dentistry by employing the least invasive surgical technique, removing the smallest possible amount of healthy tissue, lowering the risk of soft tissue reorganization, and focusing solely on the factors that pose the greatest potential for complications [84,85]. The use of a topical treatment is one example of a procedure that is regarded as minimally invasive. Utari et al. employed a topical hydrogel risedronate formulation to minimize relapse in guinea pigs; however, its placement into the gingival sulcus was still difficult [86]. When compared to pure bisphosphonate solution, the risedronate emulgel using virgin coconut oil exhibited a regulated medication release, and it may be administered topically to prevent relapse [87]. Emulgel is an oil-in-water or water-in-oil emulsion that has been combined with a gelling agent. Emulgel preparations exhibit a number of advantages, including hydrophobic drug properties and a high loading capacity, which allows for ease of production, inexpensive production costs, and controlled drug delivery [88]. The hydrophobic stratum corneum, which acts as a barrier to stop medication permeability, is the most significant possible issue associated with topical distribution. Delivering hydrophilic or larger molecular weight medicinal medicines across crystalline barriers thus becomes difficult. When compared to alternative carriers like microemulsions, liposomes, or solid lipid nanoparticles, nanoemulsion may offer a number of major advantages, including minimal irritancy, strong penetration ability, and high drug-loading capacity for topical delivery [89,90].
The standard of treatment in the medical field is shifting toward more minimally invasive techniques. The practice of dentistry known as "minimally intrusive dentistry" adheres to a philosophy that emphasizes the integration of prevention, remineralization, and minimal intervention in the placement and repair of restorations. The goal of treatment can be accomplished with minimally invasive dentistry by employing the least invasive surgical technique, removing the smallest possible amount of healthy tissue, lowering the risk of soft tissue reorganization, and focusing solely on the factors that pose the greatest potential for complications [84,85]. The use of a topical treatment is one example of a procedure that is regarded to be minimally invasive.Utari et al. employed a topical hydrogel risedronate formulation to minimize relapse in guinea pigs; however, its placement into the gingival sulcus was still difficult [86]. When compared to pure bisphosphonate solution, the risedronate emulgel using virgin coconut oil exhibited a regulated medication release, and it may be administered topically to prevent relapse [87]. Emulgel is an oil-in-water or water-in-oil emulsion that has been combined with a gelling agent. Emulgel preparations exhibit a number of advantages, including hydrophobic drug properties and a high loading capacity, which allows for ease of production, inexpensive production costs, and controlled drug delivery [88].The hydrophobic stratum corneum, which acts as a barrier to stop medication permeability, is the most significant possible issue associated with topical distribution. Delivering hydrophilic or larger molecular weight medicinal medicines across crystalline barriers thus becomes difficult. When compared to alternative carriers like microemulsions, liposomes, or solid lipid nanoparticles, nano emulsion may offer a number of major advantages including minimal irritancy, strong penetration ability, and high drug-loading capacity for topical delivery [89,90]. systemic side effects, including an increase in tibial bone mineral density [80]. Given that bisphosphonates affect the entire skeleton, the most effective method for treating periodontal bone loss would be a topical application [81]. On days 14 and 21 after active orthodontic tooth movement, the intrasulcular administration of bisphosphonate risedronate hydrogel altered the osteoclast-to-osteoblast ratio and raised alkaline phosphatase levels, and 7 days after the tooth stabilization period, the application of bisphosphonate risedronate with gelatin hydrogel efficiently reduces relapse in a dose-dependent manner. Hydrogel risedronate bisphosphonate improved the proliferation and maturation of osteoblasts, which play a crucial role in bone production, hence enhancing tooth stability during orthodontic movement. In orthodontics, the developed gelatin hydrogel technology can be used to administer risedronate precisely where it is needed for localized effects. These findings demonstrate the significance of bisphosphonate risedronate hydrogel in the bone remodeling process; hence, it has the ability to prevent relapse [21,82]. Figure 4 depicts the role that bisphosphonates play in reducing orthodontic relapse. BP can enter the osteoclasts at sites of bone resorption via endocytosis. BP inhibits the capability and activity of osteoclasts, which causes apoptotic cell death. BP also restricts mature osteoclasts from attaching to bone. 
By altering the osteoclast-to-osteoblast ratio and raising alkaline phosphatase levels, BP promotes osteoblast proliferation and maturation. BP: bisphosphonates; OC: osteoclast; OB: osteoblast.
Conclusions
After receiving orthodontic treatment, retention is one of the most important methods that can be used to prevent orthodontic relapse. In spite of this, the mechanism of orthodontic relapse is still unknown, despite the fact that relapse is frequently observed in some patients regardless of the effective use of a retainer. Although the precise mechanism by which orthodontic correction is lost after retention is not fully understood, we presume that bone remodeling is a major contributor. Inhibition of osteoclastogenesis and a delay in orthodontic tooth movement were observed after the delivery of a polymer containing materials that influenced osteoclast activity. Osteoclastogenesis is strongly linked to relapse. The periodontal ligament (PDL) space widens a few days after relapse movement, which coincides with the appearance of the first osteoclast progenitor cells at the compression sites in the alveolar crest vasculature and marrow spaces. When compared to the sites of tension, compression tends to have a greater number of osteoclasts present. During tooth movement, proinflammatory cytokines are also produced, which points to the significance of inflammation in the process of initiating osteoclastogenesis. Compressive forces trigger a response from the tissue biomarker RANKL. On the other hand, an increase in the osteoprotegerin biomarker leads to a decrease in RANKL, which in turn prevents tooth movement. This new finding can serve as a key strategy for developing materials that effectively and efficiently prevent orthodontic relapse.
Numerous novel techniques exist for inhibiting osteoclastogenesis and preventing orthodontic relapse, but local application utilizing a drug delivery system is likely the most novel and well-controllable technique, as it provides the most effective control. Controlled release will be the result of the interaction between the drug and the polymer. Polymers are used to improve drug stability and facilitate release. For instance, CHA hydrogel was developed as a drug delivery system because of its capacity to perform the role of an intracellular protein transporter. The hydrogel has the ability to preserve the threedimensional structure of proteins, such as the growth factors in aPRF, while they are being transported, thereby preventing the proteins from becoming denatured or degraded before they reach the intended site. Moreover, emulgel possesses a mucoadhesive drug delivery system, which interacts with the mucus layer on the surface of the mucosal epithelium and mucin molecules. This interaction occurs by forming intensive contact between the drug and the target area. Consequently, the retention time of drug preparations in the intended application is lengthened.
The review presented here analyzes the prospects for the use of polymeric materials in the field of dentistry, particularly in orthodontics. The improvements detailed in this study chart a new course for relapse prevention materials, with the goal of increasing patients' quality of life. Despite the fact that the overall clarity of the evidence limits prospective recommendations for human trials, the outcomes of this analysis suggest a direction for further research. Furthermore, it is critical that any future animal research follows defined protocols. These protocols should consider the reproduction of human clinical circumstances in terms of the timing, dose equivalence, and route of medication administration, as well as the peculiarities of the mechanisms that cause tooth movement and the methods used to evaluate relapse. It is critical to calculate the correct sample size in order to increase the dependability of the findings and the overall impact of the research. | 2022-12-29T16:04:04.860Z | 2022-12-27T00:00:00.000 | {
"year": 2022,
"sha1": "4292c0778706d2528d9294497706a58d70dd87c6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/15/1/103/pdf?version=1672111288",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "de9576acd766b0eb0c44b8fce756f6ec51ed3b4d",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16120313 | pes2o/s2orc | v3-fos-license | Unveiling the Impact of Human Influence on Species Distributions in Vietnam: A Case Study Using Babblers (Aves: Timaliidae)
As developing countries give priority to economic growth, the effects of development threaten natural habitats and species distributions. Over the course of two decades, Vietnam has rapidly developed, especially in the expansion of agricultural production. However, no study has quantitatively measured the effects of recent human impact on the effects of past species distributions in Vietnam. We use locality data collected from multiple natural history collections, including several in Vietnam, to infer past species distributions. We assess habitat availability of five common babbler species (Aves: Timaliidae) using distribution models with data prior to rapid development that followed political reform. Overlaying the Global Human Influence Index with predicted distributions highlights the human impact on these distributions. Three important patterns emerge: (1) human impact influences common Timaliidae distributions similarly, (2) widespread species distributions show higher fragmentation due to human influence compared to narrowly distributed species in Vietnam, and (3) less than 20% of distributions overlap with nationally declared protected areas. We emphasize that conservation efforts should not only prioritize individual species, but also focus efforts on a regional scale, and that the use of museum data can be highly informative in conservation analyses. There are current obstacles to enforcing conservation of Vietnam's already fragmented habitats, but our results suggest there is still time to reevaluate conservation approaches.
Introduction
Vietnam is home to 90 million people while also harboring megadiverse natural habitats for thousands of species. The country's diversity of habitats, from the remnant chain of the Himalayas in the North, to the jagged, narrow Central Annamite mountain range that encompasses the Kon Tum Plateau, to the lowlands of the Mekong Delta, explains in part its high species richness and endemism. However, researchers are only recently refocusing attention on the nation's biodiversity after many years of political turmoil. Within the last decade alone, hundreds of species from all taxonomic groups have been discovered, including new and remarkable large mammals such as the saola (Pseudoryx nghetinhensis) [1,2]. As scientists continue to identify new organisms, they are also continuously rediscovering species thought to be extinct for decades. Very recent sightings of species such as the Grey-crowned crocias (Crocias langbianis), the Thorel pitcher plant (Nepenthes thorelii Lecomte), and the Angel's kukri snake (Oligodon macrurus) demonstrate the uncertain status of biodiversity within this country [3][4][5][6][7]. Although these species are threatened by the nation's recent economic growth [8,9], there is still hope that enough habitat will remain to prevent the extinction of many rare and undiscovered species.
Between 1971 and 1984, there was a drastic decrease in Vietnam's agricultural area, likely caused by the U.S.-Vietnam war that ended in 1975, followed by rapid urbanization during the recovery from this conflict [10]. However, after the 1986 implementation of đổi mới (a series of government economic reforms and strategies), Vietnam became a competitive player in the agricultural world market. In addition to being the third largest global rice exporter, Vietnam also swiftly moved to the top as a major coffee exporter by the year 2000, second only to Brazil [2,11,12]. Between 1994 and 2005, agricultural area doubled, and conversion to agriculture is currently the primary cause of deforestation [10,13]. For example, coffee plantations are largely responsible for a 4.6% decrease in forest cover between 2000 and 2010 in the Central Highlands, which harbor the highest number of endemic species in the country [13]. Vietnam's goal of becoming a competitive player in world markets has thus taken a toll on the nation's valuable natural habitats.
Here, we combine species distribution models with an index of human influence to quantify the impact of rapid environmental change on relatively common, widely distributed species. Our goal is to demonstrate how locality data can be leveraged to quantify threats to common species that are usually not targeted in conservation assessments. If distributions of commonly observed species are fragmented, the impact may be even more severe for those species with narrow and endemic distributions and insufficient data. Using locality data collected prior to rapid economic development, we estimate distributions as they were before the Vietnamese economic boom and demonstrate the value of readily available information in conservation [14,15]. We also show how human impact measures can help highlight regions in which natural ecosystems have been greatly disrupted for a particular taxonomic group.
We selected species of the family Timaliidae (commonly known as babblers) for this study because they form a large, diverse family and are a significant component of Southeast Asian avifauna [2,16]. Although babblers are widespread throughout Vietnam and the Indochinese Peninsula, they are restricted to forested habitat and are not normally found in croplands. We also analyzed the extent to which the distributions of these babblers lie within protected areas, discussing issues of conservation implementation of particular importance within Vietnam. The methods we apply can readily be deployed to assess threats in other megadiverse developing countries.
Habitat Data
Bioclimatic and altitude data for Southeast Asia were downloaded from the WorldClim database [18] at a 2.5' spatial resolution. We performed a principal components analysis with a random sampling of 10,000 points of the multi-layer raster object to identify the most informative, independent bioclimatic predictors for this region. The first four principal components (explaining 99% of the variance; Appendix 1-1) were chosen for use in the species distribution models and included temperature seasonality (BIO4), altitude, annual precipitation (BIO12), and precipitation of the wettest quarter (BIO16). The inclusion of altitude as a variable derives from the extreme topographic variation, from two distinct mountain ranges to the Mekong Delta. Vietnam's distinctive topography and expanse of 16 degrees of latitude allow for the extreme variations in climate and precipitation throughout the country.
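For illustration, the predictor-screening step can be sketched in Python; the stacked-raster layout, function name, and layer labels below are hypothetical stand-ins, not the authors' actual pipeline.

```python
# A sketch (not the authors' code) of the predictor-screening step: sample
# 10,000 random cells from a stacked bioclimatic raster and run a PCA to
# identify a small set of informative, largely independent predictors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

def screen_predictors(layers, names, n_points=10_000):
    """layers: (n_layers, n_rows, n_cols) stack of bioclim/altitude grids,
    with NaN marking no-data cells; names: one label per layer."""
    flat = layers.reshape(layers.shape[0], -1)       # one row per layer
    valid = ~np.isnan(flat).any(axis=0)              # keep complete cells only
    idx = rng.choice(np.flatnonzero(valid), size=n_points, replace=False)
    sample = flat[:, idx].T                          # (n_points, n_layers)

    pca = PCA().fit(StandardScaler().fit_transform(sample))
    for i, (var, load) in enumerate(zip(pca.explained_variance_ratio_,
                                        pca.components_), start=1):
        top = names[int(np.argmax(np.abs(load)))]
        print(f"PC{i}: {var:6.1%} of variance, dominant layer: {top}")
    return pca
```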
Distribution Modeling
Species distribution models were inferred using a maximum entropy approach implemented in Maxent 3.3.3 [19,20]. This approach is robust to variable sample sizes and performs well compared to other presence-only techniques [21][22][23]. The default optimization settings were used to construct the SDMs with accuracy evaluated by assessing the area under the curve (AUC) of the receiver-operating characteristic (ROC) plot [19]. To test each model, 20% of the data from each run were selected at random by Maxent and compared to the remaining 80% of the data. Five replicates were carried out to ensure consistency across runs. Finally, cut-off thresholds for areas predicted as 'suitable' and 'unsuitable' were determined according to equated entropy of threshold and non-threshold distributions as provided by Maxent. The distributions were trimmed to show ranges within Vietnam.
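Maxent itself is a standalone Java application, but the evaluation protocol described above (random 80/20 presence splits scored by the AUC of the ROC against background points, repeated five times) can be approximated as in the sketch below; `fit_sdm` is a hypothetical stand-in for the actual model run.

```python
# A sketch of the evaluation protocol: five random 80/20 presence splits,
# each scored by ROC AUC against background points. `fit_sdm` is a
# hypothetical stand-in for the actual Maxent run.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_sdm(presence_env, background_env, fit_sdm,
                 n_replicates=5, test_frac=0.2, seed=0):
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_replicates):
        idx = rng.permutation(len(presence_env))
        n_test = int(test_frac * len(presence_env))
        test, train = presence_env[idx[:n_test]], presence_env[idx[n_test:]]

        model = fit_sdm(train)   # must expose .predict(env) -> suitability
        scores = np.concatenate([model.predict(test),
                                 model.predict(background_env)])
        labels = np.concatenate([np.ones(len(test)),
                                 np.zeros(len(background_env))])
        aucs.append(roc_auc_score(labels, scores))
    return float(np.mean(aucs)), float(np.std(aucs))
```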
Human Influence & Protected Areas
To infer how human impact on the environment affects species distributions, we used the Global Human Influence Index (HII) version 2 (1995-2004) [24]. HII incorporates variables such as human population pressure (population density), human land use and infrastructure (e.g. land use/land cover), and human access (e.g. roads & railroads) and is measured on a scale of 0-64, in which 64 indicates the highest possible human influence. The layer was cropped to the predicted distributions for each species and overlaid onto a map of Vietnam. To determine what area of the predicted species distribution falls within conservation areas, we used the most recent GIS dataset incorporating the national parks and reserves in Vietnam [25] and extracted the percent overlap of distribution with protected areas. We assessed differences in HII score among species distributions using a Wilcoxon Rank Sum test comparing the percent overlap of distribution of each HII score for each species pair.
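A minimal sketch of the pairwise test follows; the `overlap` layout (species name mapped to an array of percent overlap per HII score, 0-64) is an assumption, not the paper's actual data structure.

```python
# A sketch of the pairwise comparison: for every species pair, compare the
# per-HII-score percent-overlap distributions with a Wilcoxon rank-sum test.
from itertools import combinations
from scipy.stats import ranksums

def compare_hii_overlap(overlap):
    results = {}
    for a, b in combinations(sorted(overlap), 2):
        stat, p = ranksums(overlap[a], overlap[b])
        results[(a, b)] = (stat, p)
        print(f"{a} vs {b}: W = {stat:6.2f}, p = {p:.4f}")
    return results
```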
Niche Modeling Analyses
Similar outputs of each Maxent model for the five species showed that the Temperature Seasonality BIOCLIM variable (BIO4) was the most informative in predicting each species' distribution in the absence of human influence. All predicted distributions were similar to other published ranges of these birds, although the latter are at a coarser scale [26], and all models performed well (AUC > 0.95). See Appendix 1-2 for Maxent results.
Human Influence Analyses
For each of the 5 species distributions, the highest values of predicted species presence overlapped with HII values of 16-20 on a scale of 0-64 (Fig. 1). The percent overlap of distribution with each score of human influence shows a positively skewed pattern in which the maximum percentage of distribution overlap for all species falls within human influence levels that are not severe. We then analyzed the percent of species distribution overlap with Vietnam's protected areas and found that no distribution was more than 21% protected (A. peracensis 19.0%, G. chinensis 14.5%, G. leucolophus 14.7%, P. albiventre 20.1%, and P. ruficollis 14.0%).
Fig. 2. HII of Vietnam clipped to the distributions of five Timaliidae species.
Pairwise comparisons of HII overlap between species' distributions (Appendix 1-3) show that the proportion of the distribution of P. albiventre overlapping with high HII is significantly less than that of the other four species. G. chinensis and G. leucolophus (two widely distributed species) have distributions with large amounts of overlap with high human influence, fragmenting the projected distributions particularly in northern and coastal regions (Fig. 2). P. ruficollis, a species widely distributed in Northern regions of Vietnam, has a distribution that is highly fragmented by human influence, particularly in the northeast corner of the country where the capital city Hanoi is located. A. peracensis, a widespread species in the Southern portion of Vietnam, also exhibits a higher overlap with high HII levels than P. albiventre. P. albiventre, a common species restricted to the western region of the country, has yet to experience high overlap with high levels of human influence.
Discussion
Vietnam is presently faced with the challenge of balancing agricultural and economic development with the preservation of its environment. The present study uses species presence data documented prior to Vietnam's economic reform to identify the degree to which common species of low conservation priority are affected by human influence in Vietnam. Using distributions of common species allowed us to visualize regions of high human impact throughout the country that may highlight areas of high priority for rarer and more vulnerable species, which lack thorough sampling data. We also quantified the proportion of these distributions that are recognized as protected areas. Three important patterns emerge from our analyses: (1) the equivalent effects of human influence on differently distributed species, (2) higher fragmentation of widely distributed species than those that are more narrowly distributed, and (3) only about one-fifth of these estimated distributions are protected.
First, our results show that five species with very different predicted habitat distributions are all currently facing similar human influence indices. Because human influence is affecting biodiversity to similar degrees in different regions of this country, conservation and protected area focus should be on regions of high impact rather than on particular species in Vietnam. Although there are numerous documented successful re-establishments of species that were once endangered, local population diversity (i.e. number of populations) is decreasing a thousand times more rapidly than numbers of species [14,27]. Distributions of each babbler species in this study are influenced by human activity at levels of 16-20. While these levels may not appear alarming at first, we argue that these results show that now is the time for urgent action. As Vietnam continues to urbanize and expand its agricultural areas, levels of human influence overlapping suitable species' habitats will very probably increase. What makes the situation in Vietnam a particular conservation priority is that development is happening now, while action can still be taken.
Additionally, we show that species with widespread distributions are most likely to be affected by human growth and development. For example, G. chinensis and G. leucolophus have predicted habitat distribution throughout most of the country, and there is a high probability that these species were present within this distribution prior to Vietnam's rapid urbanization. However, the highest level of current human influence overlaps with these widespread species (particularly in the North and along the coast), and it is unlikely that these forested species remain in the high-influence areas of these predicted distributions. P. ruficollis, a widely distributed bird in montane forests, has a predicted distribution throughout Northern Vietnam, and its range is known to extend northwards into China and the Himalayas. Yet disruption of its distribution by high human influence virtually slices the P. ruficollis range into two fragments in Northern Vietnam. Similarly, A. peracensis, a common, widespread species with distribution extending throughout the southern peninsula, shows a higher proportion of its predicted presence overlapping with higher levels of HII.
P. albiventre, however, is predicted to have a much narrower distribution throughout western Vietnam, yet this bird's distribution has less human influence fragmenting its predicted habitat, partly because its range is restricted to areas of high elevation that remain difficult to farm. The low levels of HII within the West central region of Vietnam, particularly Quang Tri province, may also be explained by the excess of unexploded landmines that litter this region, which received the heaviest bombings during the U.S.-Vietnam war [2,28]. The dangers posed to humans and reduced human access may have allowed flora and fauna to remain unaffected by high human impact [29]. However, these regions can continue to be unaltered only if the landmines do not detonate, which can hardly be seen as a positive influence for biodiversity [28,29].
Another important finding of our study is that only 14-20% of predicted distributions overlap with protected areas, assuming that the land within these distributions inside protected areas is protected. Although these areas have been deemed "protected," Vietnam's management of national parks and nature reserves is centralized, creating a complex, bureaucratic system of administration in which the managing of the forests often falls into the hands of local and provincial authorities [30]. The effectiveness of these measures within parks and protected areas continues to be compromised by the large number of individuals living legally within the borders of these forests as well as the high levels of agriculture and human activity in close proximity to these regions. Poaching, logging, and resource acquisition from these protected areas are a major issue. Many populations and species abundances throughout Vietnam have declined dramatically since the end of the U.S.-Vietnam war, some even to extinction, due to these pressures [30][31][32].
Implications for Conservation
A primary benefit from our analysis is the use of museum data to infer human impact on biodiversity. This source of inexpensive and noninvasive data can provide highly informative points of reference in niche-modeling analyses, which are useful not only in conservation studies, but also in studies of invasive species, spread of diseases, and community structure and assemblage [33]. These data and methods are applicable to conservation biologists of any developing country, especially as the use of open source museum informatics tools continues to expand. Due to the urgency of conservation analyses for proper policy implementation, the use of existing knowledge found in museums and natural history collections can provide a basic source of information in conservation biology [14,33].
As Vietnam continues to grow economically, prompt efforts to protect species' habitats and reduce deforestation must be made, or we may face yet another unfortunate case of failing to act in time to protect a region of extraordinary biological diversity (Fig. 3). We urge researchers to continue surveying and documenting species, but to do so in ways that will clearly highlight the conservation issues at hand so that policymakers can understand the actions needed [15]. Our results also suggest that it is not too late to begin to prioritize and protect Vietnam's exceptional environment with further research using readily available data to develop effective measures of conservation. We argue that patterns in our results are relevant to other developing countries undergoing similar changes, highlighting that a rapid increase in human impact on species distributions is measurable and affects biodiversity similarly. | 2016-02-02T08:36:57.578Z | 2014-09-01T00:00:00.000 | {
"year": 2014,
"sha1": "2b5ee3544b5082394a1eeff0d13b782337ea4316",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/194008291400700315",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e63c73c4da52225035c9f5a41fe55e1ca699fa9d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
10821788 | pes2o/s2orc | v3-fos-license | Leveraging Advances in Tuberculosis Diagnosis and Treatment to Address Nontuberculous Mycobacterial Disease
Recent advances in TB diagnosis and treatment must be considered in the basic scientific research of other mycobacterial diseases.
In recent years, major investments in basic research related to Mycobacterium tuberculosis have culminated in the large-scale rollout of the GeneXpert (Cepheid, Sunnyvale, CA, USA) diagnostic platform, the approval of bedaquiline for treatment of patients with drug-resistant tuberculosis (TB), and a deeper fundamental understanding of how the bacterium causes disease. These advancements stand in stark contrast to the poor understanding of the nontuberculous mycobacteria (NTMs). The NTMs are a group of organisms within the genus Mycobacterium (excluding M. tuberculosis and M. leprae) that cause a spectrum of diseases that include TB-like lung disease; localized infections of the lymphatic system, skin, soft tissue, or bone; and systemic disease (1). Previous studies have helped uncover NTM prevalence in industrialized countries, in which differentiating between TB and NTM infections is much less challenging because of the availability of molecular techniques for detecting and identifying microorganisms. However, recent studies examining the NTM burden of illness in industrialized settings have consistently uncovered an unexpectedly large prevalence (Figure).
Major obstacles to adequately addressing NTM disease include the challenges of diagnosis and treatment as well as the lack of active research to understand the pathogenesis of these organisms. In each of these arenas, it is critical that we address gaps in the knowledge and capacity to deal with NTM-associated illness. By increasing funding to programs that seek to expand basic knowledge of NTMs and leveraging advancements in TB diagnostics and therapeutics, we can begin to form a deeper understanding of these pathogens and develop appropriate measures to address them. Here, we outline some of the challenges surrounding the diagnosis and treatment of NTMs and research of these organisms and propose avenues for how the road paved by the fight against TB can serve as a scaffold for advancing our understanding of these related, neglected pathogens.
The Challenges of Diagnosis
NTMs share many characteristics with M. tuberculosis that make the bacteria difficult to differentiate in resource-poor settings. The standard method for diagnosing TB is through microscopic examination of sputum smears, but when this approach is used, NTMs appear identical to M. tuberculosis. Without molecular methods, which are unavailable in much of the developing world, these organisms are difficult to distinguish. Furthermore, in resource-limited settings, patients are often assumed to have M. tuberculosis infections because the clinical manifestations of many NTMs can mimic those of TB. In a study in Nigeria, Pokam et al. found that 16.5% of culture and sputum isolates thought to be M. tuberculosis were bacteria other than M. tuberculosis upon molecular typing; 25% of these misdiagnosed cases (4% overall) were found to be caused by NTMs (2). In another study in Nigeria, Aliyu et al. found that of 1,603 suspected TB cases, 15% were found to be NTM infections (3). Recent evidence has suggested that the rate of confusion between M. tuberculosis and NTMs may be even larger. Turnbull et al. discovered that inmates in a prison in Zambia who had symptoms of cough and an abnormal chest radiograph image showed an NTM prevalence of 5.4% compared with a 3.8% rate for M. tuberculosis (4).
These findings have substantial implications for global health approaches to TB. Given that traditional treatments for M. tuberculosis infection are ineffective against most NTMs, the unexpectedly high rate of NTMs is likely a contributing factor to perceived TB treatment failure. In Brazil, a national mycobacterial referral center found that of 174 patients with pulmonary NTM, 79% had undergone TB treatment for up to 6 months before NTM infection was diagnosed (5). Studies conducted in Burkina Faso and Mali found that 18%-20% of patients suspected of having chronic TB were found to have NTM in their sputum (6). Similarly, a study in Iran showed that as many as 30% of suspected cases of multidrug-resistant TB were in fact NTMs, further suggesting the generalization of this phenomenon (7). Understanding the true prevalence of NTMs in the developing world is especially valuable considering that evidence suggests that NTM infection may interfere with the Bacille Calmette-Guérin vaccine, a widely used tool in preventing TB infections in the developing world (8).
These studies must be taken with some caution, as it is often difficult to distinguish whether the NTMs are a true source of infection or a contaminant in biological specimens or laboratory equipment. To account for this, the American Thoracic Society and the Infectious Disease Society of America guidelines for NTM diagnosis require isolation and growth of the pathogen on >2 separate occasions from the same patient to diagnose a pulmonary NTM infection (9). Because clinicians in most countries in the developing world often make diagnosis of TB on the basis of clinical symptoms, these guidelines may place a tremendous burden on laboratories in resource-poor settings. Any new diagnostic platform for the NTMs must account for the issues behind species differentiation and contamination and do so in a way that is feasible for application globally.
The Challenges of Treatment
The difficulty of diagnosing NTMs and the frequent confusion of these pathogens with TB are compounded by the fact that standard TB treatments are often ineffective against NTM infections. Anti-TB medications produce a disappointing ≈50% response rate (10) in NTM-associated disease. As a result, misdiagnosis and mistreatment have huge implications for patient outcomes. Even within the NTM class, there is a substantial difference between the various species, which defies a one-size-fits-all treatment approach. This group encompasses pathogens with huge varieties of growth rates, host preferences, and inherent resistance to antibacterial drugs. The introduction of macrolides, such as clarithromycin, for treatment of NTMs did improve cure rates for certain species, but in a retrospective study by Huang et al., many patients treated with these drugs for at least 12 months continued to have symptoms, and chronic illness was documented among patients who were successfully treated (11). Moreover, macrolide resistance is now well documented (12). The recommendation of multidrug regimens to counter such resistance is a logical next step, but often these regimens are minimally studied, and few if any have been investigated in a rigorous clinical trial. Therefore, while many clinicians rely on multidrug regimens for the treatment of NTM disease, the ideal combination of agents, duration of therapy, and true efficacy remain unvalidated and unknown.
The Challenges of Current Research Paradigms
Fundamentally, poor understanding of the NTMs arises from a lack of investment. There have been few clinical studies of treatment for NTM-associated disease; most date from a time when advanced HIV infection was common in industrialized countries and opportunistic NTM infections were seen at an alarming frequency among HIV/AIDS patients. We conducted a search using the RePORTER tool (http://projectreporter.nih.gov/reporter.cfm) to find currently active grants from the US National Institutes of Health for this topic and found 228 grants related specifically to research on mycobacterial pathogens. Of these, only 5 (2.2%) were awarded to study specific aspects of the NTMs. These 5 grants cover a wide range of unmet needs, from understanding NTM susceptibility to novel drug discovery. However, this level of attention is clearly insufficient to address the many gaps that exist.
As prevalent as they are, many basic facts about the NTMs remain unknown. For example, it was largely thought that environmental exposure was the sole method of infection. However, a recent report that described whole genome sequencing as a molecular epidemiologic tool suggested that, in the context of cystic fibrosis patients, which is a population exceptionally susceptible to these pathogens, there may be a possibility of person-to-person transmission of M. abscessus (13). If true, control measures similar to those used for TB transmission might be effective, at least for highly susceptible persons. Although this study suggests that alternate modes of transmission may exist, it is still widely believed that environmental transmission is the major source of NTM infection, and numerous reservoirs such as household water sources have been identified (14). However, it can often be difficult to trace infections to a specific environmental source, which is a problem that is compounded by delayed and often incorrect diagnoses.
Additionally, it remains unclear why anti-TB medications are not effective treatment options. NTMs have complex cell walls, systems that modify both antibacterial drugs and their targets, and an extensive array of drug efflux pumps, but all these mechanisms also exist in M. tuberculosis. It may be that in NTMs, differing synergy of these and other mechanisms might conspire to produce a poor response to therapy. However, these differences must be studied to determine their significance. Such questions highlight the numerous areas in which our ability to address these pathogens would benefit from a better fundamental understanding.
Addressing the Challenges: Leveraging Advancements in the TB Field
Although there are no simple solutions to the challenges of effectively addressing the NTMs, recent advancements in the TB field have potential for synergistic effects. One of the major breakthroughs in TB diagnostics over the past decade was the move toward using molecular methods such as the GeneXpert system. Although GeneXpert testing distinguishes fairly well between NTMs and M. tuberculosis, it does not distinguish within the broad category of NTMs, which is a necessary prerequisite to effective treatment. However, this advancement provides an opportunity to reconfigure molecular diagnostic platforms to include at least common NTM pathogens, providing a rapid and specific detection method. Even so, this method would not be a panacea. Because NTMs can colonize humans without causing disease and can contaminate biologic samples and laboratory equipment, simply finding the organism does not provide a definitive diagnosis. However, even raising the possibility can alert a clinician to consider the diagnosis of NTM-associated illness, limiting misdiagnosis and, as a result, incorrect treatment. These systems would also provide researchers with a tool to uncover the true burden of illness from NTMs.
Substantial efforts have been invested in anti-TB drug development. These have yielded 2 new approved antibiotics, bedaquiline and delamanid; several others are in clinical development. These drug discovery platforms could easily be transferred to screening for NTM-active compounds, and some of the new agents have already been shown to have activity against NTMs. For example, bedaquiline is more effective than currently existing antimycobacterial agents in treating M. ulcerans in a mouse model of infection and has shown promise as a salvage therapy for M. avium and M. abscessus (15,16). Moreover, new oxazolidinones, a class of drug effective against many NTMs, are currently being developed for TB and might prove clinically useful (17). Although many antitubercular compounds have poor activity against NTMs, new agents could serve as starting points that could be optimized; for example, bedaquiline derivatives have been found to have broad-spectrum activity (18).
While building on discoveries in the TB field, developing new interventions to diagnose, treat and prevent NTM-associated illness will likely require a better basic understanding of these organisms. Mycobacteria are extremely diverse; >150 species have been identified to date and vary in pathogenicity, virulence, mode of transmission, and antibiotic susceptibility. Here, too, the path paved by TB biologists could afford some insight into NTM physiology. The sequencing of the M. tuberculosis genome and the development of genetic tools to probe the bacterium's physiology afforded novel insights into how this pathogen causes disease. Similarly, the sequencing of many NTM species, now being completed, could lead to novel discoveries that highlight why these organisms have been so difficult to treat in the past and why exactly they diverge from M. tuberculosis (19).
None of these measures will be taken without increased funding. The prevalence of NTMs not only in developing countries but also in the United States and many parts of Asia suggests that many public agencies should be willing to support NTM research. However, public funding is not the only avenue for increasing investment. Considering the prevalence and chronicity of these organisms in the industrialized world, a potentially substantial market of affected entities may exist. Resources from private interests and pharmaceutical companies could be leveraged to develop novel diagnostics and therapeutics if encouraged by advancements in the academic setting in our basic understanding of these pathogens.
The evidence uncovered to date points to the NTMs as a source of a substantial and growing burden of illness. NTM infections not only affect thousands of people, requiring lengthy and taxing treatments that are often not available in resource-poor settings; they also muddy the waters in the global fight against TB by draining resources through misdiagnosis and mistreatment. Only through a concerted effort by researchers, clinicians, industry, and global health policy stakeholders to understand and address these issues can we respond to this large and neglected threat.
Dr. Raju is a physician and scientist and is currently a pediatric resident at Children's Hospital Boston, Boston, Massachusetts, in the Urban Health and Advocacy Track. His professional and research interests are in the biology of essential physiologic processes in M. tuberculosis and nontuberculous mycobacteria, with the overall goal of developing novel therapeutics for these neglected diseases. | 2016-05-10T05:27:00.856Z | 2016-03-01T00:00:00.000 | {
"year": 2016,
"sha1": "5d6c54e8d0f9f7744699dd1791c85e9c83813d1f",
"oa_license": "CCBY",
"oa_url": "https://wwwnc.cdc.gov/eid/article/22/3/pdfs/15-1643.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a0df465e2f47678c9ff1f40532c8b2823e4dc231",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15452761 | pes2o/s2orc | v3-fos-license | Attributable Fractions of Risk Factors for Cardiovascular Diseases
Background Cardiovascular disease (CVD) is a leading cause of death in Japan. To reduce the threat of CVD, it is important to identify its major risk factors. The population attributable fraction (PAF) is calculated from the prevalence and relative risk of risk factors and can be used to estimate the burden of these factors with respect to CVD. We analyzed the findings from several prospective studies to determine the PAFs of CVD. Methods PAF was calculated as pd × (multiadjusted relative risk − 1)/multiadjusted relative risk, where pd is the proportion of patients exposed to that risk factor category, according to data from the Ohsaki Cohort Study, EPOCH-JAPAN, NIPPON DATA80, Miyagi Cohort Study, CARDIA Study, and ARIC Study. Results Nonoptimal blood pressure explained 47% and 26% of CVD mortality in middle-aged and elderly Japanese, respectively. Cigarette smoking explained 34% of all-cause mortality in middle-aged men. The combination of hypertension and cigarette smoking explained 57% and 44% of CVD mortality in younger men and women, respectively. Furthermore, the presence of at least 1 nonoptimal risk factor explained most CVD deaths and all-cause deaths. Conclusions Established CVD risk factors, especially high blood pressure and cigarette smoking, explained a large proportion of CVD mortality and all-cause mortality. Prevention, early detection, and treatment of these conventional risk factors are required to reduce mortality risk.
INTRODUCTION
Cardiovascular diseases (CVDs), namely, heart disease and stroke, are leading causes of death in Japan. 1 Furthermore, because stroke is a major cause of certification for long-term care insurance in Japan, 1 risk factors for stroke also contribute to a decline in activities of daily living (ADL). Therefore, the prominent risk factors for CVD must be identified if we are to lower the risks for mortality and ADL decline. The population attributable fraction (PAF) is an estimate of the burden of a disease. 2 My colleagues and I estimated the PAFs of all-cause death, CVD death, CVD incidence, ADL decline, and smoking-related diseases due to established CVD risk factors, [3][4][5][6][7][8][9][10] and the results are described herein.
Cohort studies
Ohsaki Cohort Study
The setting and design of the Ohsaki Cohort Study have been reported in detail elsewhere. 11 In brief, this prospective cohort study started in 1994. A self-administered questionnaire requesting information on various health-related lifestyles was delivered to all National Health Insurance (NHI) beneficiaries aged 40 to 79 years living in the catchment area of the Ohsaki Public Health Center, Miyagi Prefecture, Japan. In Japan, the NHI is used by farmers, the self-employed, pensioners, and their dependents. The Ohsaki Public Health Center, which is a local government agency, provides preventive health services for the residents of 14 municipalities. The questionnaires were delivered to and collected from the subjects' residences by public health officials in each municipality. This procedure yielded a high response rate of 94.6% (n = 52 029). A total of 776 subjects were excluded from the study because they had withdrawn from the NHI before 1 January 1995, when the prospective collection of NHI claim files began. Thus, 51 253 subjects formed the study cohort. Among the participants in the Ohsaki NHI Cohort Study, 16 515 (32.2%) underwent an annual health check-up between April and December 1995, and they provided their consent for the use of the results in the present study.
EPOCH-JAPAN
The EPOCH-JAPAN Study is a pooled analysis of 13 cohort studies that are examining the relation between health measures (laboratory measures plus lifestyle and behavioral factors) and disease (mortality and incidence) in the Japanese population. To be included in the meta-analysis, a study had to collect data on health examination measures, have a follow-up of at least 10 years, and enroll more than 1000 participants. Both nationwide and single-site cohort studies were included. Inclusion criteria for participants were age at entry (age 40-90 years) and availability of data on sex, age at entry, systolic blood pressure, and diastolic blood pressure. Because the end of follow-up varied between cohorts, age range during followup was limited to between 40 and 90 years, and the end of the observation period was set at age 90 years.
NIPPON DATA80
The subjects of this cohort study participated in the National Cardiovascular Survey of 1980. The standardized procedures used in that survey have been described elsewhere. 12 All household members aged 30 years or older were surveyed in 300 randomly selected census tracts throughout Japan.
The number of individuals selected was 13 771. Among these, 10 546 had complete baseline information on age, sex, and blood pressure (BP). The sample comprised the National Integrated Project for Prospective Observation of Noncommunicable Disease and Its Trends in the Aged (NIPPON DATA80).
Miyagi Cohort Study
From June through August 1990, self-administered questionnaires on health habits were delivered to 51 291 subjects who were aged 40 to 64 years and lived in 14 municipalities of Miyagi Prefecture, in northern Japan. Usable questionnaires were returned by 47 605 subjects, yielding a response rate of 91.7%.
CARDIA Study
The CARDIA study, a biethnic, prospective, multicenter epidemiologic study of the evolution of risk factors in young adults, has been described in detail elsewhere. 13 Briefly, from 1985 to 1986, 5115 African-American and white adults aged 18 to 30 years were examined in Birmingham, AL, Chicago, IL, Minneapolis, MN, and Oakland, CA. At the Birmingham, Minneapolis, and Chicago sites, participants were randomly selected from total communities or from specific census tracts. In Oakland, participants were randomly selected from members of the Kaiser Permanente Medical Care Program. At each site, recruitment achieved nearly equal numbers with respect to race (African American, white), sex, education (high school or less, more than high school), and age (18-24 years, 25-30 years). Fifty percent of invited individuals contacted were examined (47% of African Americans and 60% of whites) and formed the CARDIA cohort.
ARIC Study
The ARIC Study is a multicenter prospective cohort study investigating the natural history of atherosclerotic disease in the US communities of Forsyth County, NC, Jackson, MS, Washington County, MD, and the northwest suburbs of Minneapolis, MN. 14 At baseline, in 1987-89, the cohort comprised 15 792 men and women aged 45 to 64 years who were selected by using a list or area probability sampling. Race/ethnicity was self-reported; only African Americans were recruited in the Jackson study center. The baseline home interview assessed participant sociodemographic characteristics, smoking and alcohol-drinking habits, medication use, and personal history of diseases.
Calculation of population attributable fraction
PAF was calculated using the formula 2: PAF = pd × (relative risk − 1)/relative risk, where pd is the proportion of cases exposed to the risk factor.
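The formula translates directly into code. In the sketch below, the relative risk is the Ohsaki middle-aged hypertension estimate quoted in the Results (HR = 2.98), while the exposure proportion among cases (0.55) is an illustrative placeholder, not a figure reported in the paper.

```python
# Direct translation of the PAF formula above.
def population_attributable_fraction(pd_exposed, relative_risk):
    """Levin-type PAF: pd x (RR - 1) / RR, with pd the proportion of
    cases exposed to the risk-factor category."""
    return pd_exposed * (relative_risk - 1.0) / relative_risk

paf = population_attributable_fraction(pd_exposed=0.55, relative_risk=2.98)
print(f"PAF = {paf:.1%}")  # ~36.5% of cases attributable under these inputs
```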
RESULTS
Relationship of blood pressure with all-cause mortality and CVD mortality 3
To determine the impact of high BP on CVD mortality and all-cause mortality, we investigated the relationships of BP category with CVD mortality and estimated PAFs using data from the Ohsaki Cohort Study. In accordance with the Joint National Committee's Seventh Report (JNC7), 15 hypertension (HT) was defined as a systolic BP of 140 mm Hg or higher, a diastolic BP of 90 mm Hg or higher, or current use of antihypertensive medication. Participants who did not satisfy the HT criteria but had a systolic BP of 120 mm Hg or higher or a diastolic BP of 80 mm Hg or higher were regarded as having prehypertension (pre-HT). Those who satisfied neither set of criteria were regarded as having normal BP. A multivariate-adjusted Cox proportional hazards model was used to estimate the hazard ratio (HR) of CVD mortality associated with BP status. During 12 years of follow-up, 321 participants died of CVD. Because the positive relationship between BP and CVD mortality was steeper in middle-aged (age 40-64 years) adults than in elderly (age 65-79 years) adults, the PAF of CVD mortality was calculated separately for these groups. The HRs (95% confidence interval [CI]) for CVD mortality for pre-HT and HT were 1.31 (0.59-2.94) and 2.98 (1.39-6.41), respectively, for middle-aged adults and 1.03 (0.62-1.70) and 1.65 (1.02-2.64) for elderly adults. Adults with either pre-HT or HT accounted for 47% and 26% of CVD deaths among middle-aged and elderly participants, respectively. Similarly, nonoptimal BP explained 18.9% and 4.6% of all-cause deaths among middle-aged and elderly participants, respectively.
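A minimal sketch of such a Cox model, using the lifelines package; the column names are assumed stand-ins for the Ohsaki variables, not the study's actual field names.

```python
# A sketch of the multivariate-adjusted Cox model behind the hazard ratios
# above. Column names are illustrative assumptions.
from lifelines import CoxPHFitter

def fit_bp_cox(df):
    # df columns (assumed): time (years of follow-up), cvd_death (0/1),
    # pre_ht and ht (dummy-coded BP category vs. normal BP), plus covariates.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="cvd_death")
    return cph.hazard_ratios_   # exp(coef) per covariate, e.g. the HR for ht
```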
Relationship between BP and all-cause mortality 4
To determine sex- and age-specific HRs and the effect of BP on all-cause mortality, and to estimate the contribution of high BP to all-cause death, a meta-analysis of data from 13 population-based cohort studies in Japan was conducted (EPOCH-JAPAN). Poisson regression was used to estimate all-cause mortality rates and ratios. In the model, BP data were treated as continuous (increments of 10 mm Hg) and categorical (every 10 mm Hg), in accordance with the JNC7 recommendations. 15 Potential confounders included body mass index (BMI), smoking, alcohol consumption, and cohort. The impact of HT was measured using PAFs. The adjusted mortality rate rose as BP increased, and the trend was more distinct in younger men and women. The trend in HRs was similar and more apparent in younger men (HR for an increase in BP of 10 mm Hg in men aged 40-49 years: systolic BP 1.37, 95% CI 1.15-1.62; diastolic BP 1.46, 95% CI 1.05-2.03) than in older men (age 80-89 years: systolic BP 1.09, 95% CI 1.05-1.13; diastolic BP 1.12, 95% CI 1.03-1.22). The PAF of HT was 22.7% in men and 17.9% in women when normal BP was defined as the reference level and 11.9% in men and 10.9% in women when the pre-HT group was defined as the reference level.
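The per-10-mm-Hg Poisson model can be sketched with statsmodels; the column names and covariate coding below are assumptions, with person-years of follow-up entering as an exposure offset so that exp(beta) is the mortality-rate ratio per 10 mm Hg increment.

```python
# A sketch of the Poisson model with BP treated per 10 mm Hg.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_bp_poisson(df):
    # df columns (assumed): deaths, person_years, sbp (mm Hg), age, bmi,
    # smoker, drinker, cohort.
    df = df.assign(sbp10=df["sbp"] / 10.0)   # systolic BP per 10 mm Hg
    fit = smf.glm("deaths ~ sbp10 + age + bmi + smoker + drinker + C(cohort)",
                  data=df, family=sm.families.Poisson(),
                  offset=np.log(df["person_years"])).fit()
    return fit, float(np.exp(fit.params["sbp10"]))  # rate ratio per 10 mm Hg
```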
Relationship between BP and subsequent decline in ADL 5
To determine the relationship between baseline BP in 1980 and ADL in 1999 among a general population of Japanese aged 47 to 59 years, we analyzed the NIPPON DATA80 dataset. Using 1999 ADL data, we compared data from NIPPON DATA80 survivors without (n = 1816) and with (n = 75) impaired ADL, using baseline BP information collected in 1980. Multiple-adjusted logistic regression analyses were used to estimate the risk of impaired ADL according to baseline BP category, as described in the JNC7 guidelines. 15 Stage 2 HT was defined as a systolic BP of 160 mm Hg or higher, a diastolic BP of 100 mm Hg or higher, or use of antihypertensive medication. Participants who did not satisfy the stage 2 HT criteria but had a systolic BP of 140 mm Hg or higher or a diastolic BP of 90 mm Hg or higher were regarded as having stage 1 HT. Those who did not satisfy HT criteria but had a systolic BP of 120 mm Hg or higher or a diastolic BP of 80 mm Hg or higher were regarded as having pre-HT. Those who satisfied none of the above sets of criteria were regarded as having normal BP. Excess impaired ADL due to nonoptimal BP was calculated. As compared with the normal BP category, the adjusted odds ratio (OR) of having impaired ADL was higher among those with pre-HT (OR, 1.50; 95% CI, 0.55-4.09), stage 1 HT (OR, 1.56; 95% CI, 0.56-4.32), and stage 2 HT (OR, 2.96; 95% CI, 1.09-8.05). Nonoptimal BP explained 45% (33.7/75) of impaired ADL. BP categories were also positively associated with a composite of impaired ADL and mortality.
Combined effect of hypertension and cigarette smoking on all-cause mortality and CVD mortality 6
To describe the fraction of CVD mortality and all-cause mortality that could be explained by current tobacco consumption and HT in Japan, we calculated the age-specific combined effect of smoking and HT on CVD mortality and all-cause mortality in NIPPON DATA80, which followed a representative cohort of 8912 Japanese men and women without a history of stroke or heart disease. Participants were categorized as a nonsmoker without HT, current smoker only, HT only, or current smoker with HT. Hypertension was defined as a systolic BP of 140 mm Hg or higher, a diastolic BP of 90 mm Hg or higher, or current use of antihypertensive medication. 15 The PAFs of CVD mortality and all-cause mortality were calculated based on relative hazards assessed using proportional hazards regression models. During 19 years of follow-up, there were 313 and 291 CVD deaths and 948 and 766 all-cause deaths among men and women, respectively. The PAFs of CVD mortality due to smoking or HT were 35.1% for men and 22.1% for women. The PAF of CVD mortality was higher in participants younger than 60 years (57.4% for men and 40.7% for women) than in those who were older (26.3% for men and 18.1% for women).
Relationship between cigarette smoking and all-cause mortality 7
To examine the relationship between smoking and all-cause mortality and estimate the PAF for all-cause death due to cigarette smoking, 18 945 men and 17 107 women (age 40-64 years) in the Miyagi Cohort study were followed. The relative risk (RR) of mortality was estimated using Cox regression according to smoking category, with adjustment for age, education, marital status, history of diseases, alcohol consumption, BMI, walking, and dietary variables. A total of 1209 men and 499 women died during the 11-year follow-up period. Multivariate RRs of all-cause mortality for current smokers as compared with never smokers were 1.71 (95% CI, 1.44-2.03) for men and 1.44 (95% CI, 1.06-1.94) for women. Of all deaths, 34% among men and 4% among women were attributable to current or past smoking.
Relationship between tobacco consumption and self-reported disease before middle age 8
Evidence of harm from cigarette smoking during young adulthood is limited. We assessed associations between cigarette smoking and several self-reported illnesses in a prospective cohort study of healthy young adults. The data were derived from 4472 adults who participated in the CARDIA study. They were aged 18 to 30 years at baseline and were reexamined at least once after 7, 10, or 15 years. Tobacco consumption in 1985-86 was related to self-reported smoking-related cancers, circulatory disease, and peptic ulcer. The incidence of these diseases was 9.3 per 1000 person-years among current smokers and 4.5 per 1000 person-years among those who had never smoked and had no exposure to passive smoke; the relative risk (adjusted for race, sex, education, and center) was 1.96 (95% CI, 1.42-2.70). Assuming a causal relationship, 32% of these premature incidents were attributable to smoking. The relative risks of liver disease, migraine headache, depression, being ill the day before the examination, chronic cough, and phlegm production were also higher among smokers.
Low-risk profiles for cardiovascular disease incidence and mortality in a US population 9
A large proportion of CVD events among white Americans can be explained by borderline or elevated levels of CVD risk factors. The degree to which this is true among African Americans is unclear. Thus, to determine the proportion of such events, we analyzed data from the ARIC Study, which included 14 162 middle-aged adults who were free of recognized stroke or coronary heart disease and had baseline information on risk factors. Based on national guidelines, risk factors (BP, cholesterol levels, diabetes, and smoking) were categorized as optimal, borderline, or elevated. [15][16][17] The incidences of CVD (a composite of stroke and coronary heart disease; n = 1492), CVD mortality (n = 612), and all-cause mortality (n = 1824) were determined for a 13-year period. Overall, 6.2% and 70.2% of CVD incidence was explained by borderline and elevated risk, respectively. Similarly, 5.3% and 81.5% of CVD deaths were explained by borderline and elevated risk, and 7.2% and 64.5% of all-cause deaths were explained by borderline and elevated risk.
Low-risk profile for all-cause mortality and cardiovascular disease in a Japanese population 10
Studies have focused on low-risk profiles for CVD in Europe and North America, 9 but few have examined the long-term low-risk profile for CVD among the Japanese general population. The present study examined whether having a favorable risk factor profile yields lower all-cause mortality and whether the proportion of adults with a low-risk profile is larger among the Japanese population. The data were derived from NIPPON DATA80. A total of 8339 men and women who were aged 30 to 69 years and had no history of cardiovascular diseases were followed for 19 years. Low risk was defined as having all of the following baseline characteristics: a systolic BP lower than 120 mm Hg, a diastolic BP lower than 80 mm Hg, no antihypertensive medication, serum cholesterol 160 to 240 mg/dL (4.14-6.22 mmol/L), no history of diabetes, and no tobacco consumption. The long-term mortality of the low-risk group was compared with that of other groups using a Cox proportional hazards model. Overall, 9.4% of participants were classified as low risk. The multivariate-adjusted HR among low-risk individuals as compared with others was 0.33 (95% CI, 0.15-0.74) for CVD and 0.63 (95% CI, 0.46-0.88) for all-cause mortality. The PAF associated with the elevated risk profile was 66% for CVD mortality and 36% for all-cause mortality. The greatest attributable risk factor for all-cause mortality was high BP. In conclusion, rates of all-cause and CVD mortality were lower among Japanese individuals with a favorable cardiovascular disease risk profile.
DISCUSSION
The PAFs of established risk factors were estimated for several endpoints (Table). These risk factors significantly contributed to the outcomes, especially high BP and smoking.
In the Ohsaki study, 47% and 26% of CVD deaths among middle-aged and elderly participants, respectively, were explained by nonoptimal BP. Similarly, approximately 20% of all-cause mortality was explained by nonoptimal BP in the EPOCH-JAPAN Study. These findings are similar to those of other Japanese studies. Sairenchi et al reported that in middle-aged adults, 60% (men) and 15% (women) of CVD deaths were explained by nonoptimal BP; the corresponding values in elderly adults were 28% and 7%. 18 Ikeda et al reported that 38% of total CVD mortality in men and 36% in women would be prevented by elimination of high-normal to severe hypertension. 19 Thus, better management of high BP is necessary because the PAFs for all-cause and CVD mortality due to hypertension were high, especially in younger adults. However, there are several problems regarding the management of high BP. Although antihypertensive medications can potentially prevent CVD, 15 the rate of BP control has been insufficient. 15,20 One report showed that only 24% of adults with untreated hypertension at routine health check-ups had started treatment within the subsequent year. 21 Thus, more effort should be directed toward primary prevention, early detection, early treatment, and better control of high BP.
The PAF for all-cause mortality due to tobacco consumption is high in Japan, 5 especially among men. Hara et al reported that 22.2% of all-cause mortality was explained by smoking, 22 and Uno et al reported that 24.9% of all-cause deaths were explained by smoking. 23 Although the smoking rate is generally decreasing in Japan, 1 it is still higher than in other developed countries. Efforts to decrease smoking rates should continue.
We found that having any one nonoptimal risk factor explained large proportions of all-cause and CVD deaths. 9,10 In Japan, the combined effect of smoking and hypertension was extremely large, 6 which highlights the importance of combating smoking and hypertension. A recent report showed that the PAFs for all-cause and CVD deaths due to high BP and smoking were higher than those due to diabetes and suboptimal serum cholesterol. 10 However, the baseline survey was performed 30 years previously (1980); surveys conducted more recently show that the prevalence of both obesity and diabetes has increased among Japanese men. 1 Further studies that estimate PAFs using more recent baseline data are needed in order to assess the burden of these diseases.
In previous studies, my colleagues and I mainly used Cox proportional hazards models to estimate hazard ratios and inserted them into the traditional formula to calculate PAFs, as was done in several cohort studies. 24,25 However, recent publications have refined the calculation of PAFs derived from the Cox model 26,27 when such a model is applied to a cohort study. Further discussion and refinement will be necessary to establish PAF estimation in the cohort study setting.
In conclusion, CVD risk factors, especially BP and cigarette smoking, explain most all-cause and CVD mortality. Prevention, early detection, and treatment of these conventional risk factors are required to reduce the risk for mortality.
ACKNOWLEDGMENTS
The author is grateful to Drs. Shigeru Hisamichi, Ichiro Tsuji, Akira Fukao, Yutaka Imai, Hirotsugu Ueshima, Aaron R. Folsom, and David R. Jacobs Jr. and to all the co-investigators and staff members. The author also thanks the Japan Epidemiological Association and the Editorial Board of the Journal of Epidemiology for providing the opportunity to write this article. Finally, the author thanks Dr. Yoshitaka Murakami for his assistance with the statistical analysis.
Conflicts of interest: None declared. | 2018-04-03T04:28:02.458Z | 2011-01-29T00:00:00.000 | {
"year": 2011,
"sha1": "c0a87a052363eaf2367924009a6d9ec2cfc19946",
"oa_license": "CCBY",
"oa_url": "https://www.jstage.jst.go.jp/article/jea/21/2/21_JE20100081/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c0a87a052363eaf2367924009a6d9ec2cfc19946",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252918004 | pes2o/s2orc | v3-fos-license | He-star donor AM CVn stars and their progenitors as LISA sources
Ultracompact cataclysmic variables (CVs) of the AM CVn type are deemed to be important verification sources for future space gravitational wave detectors such as the Laser Interferometer Space Antenna (LISA). We model the present-day Galactic population of AM CVn stars with He-star donors. Such a population has long been expected to exist, though only a couple of candidates are known. We applied the hybrid method of binary population synthesis (BPS), which combines a simulation of the population of immediate precursors of AM CVn stars by a fast BPS code with subsequent tracking of their evolution by a full evolutionary code. The model predicts that the present birthrate of He-donor AM CVn stars in the Galaxy is $4.6\times 10^{-4}$ per yr and the Galaxy may harbour about 112000 objects of this class which have orbital periods less than 42-43 min. The foreground confusion limit and instrumental noise of LISA prevent the discovery of longer period systems in gravitational waves. We find that about 500 He-star AM CVns may be detected by LISA with signal-to-noise ratio (S/N)>5 during a 4 yr mission. Within 1 Kpc from the Sun, there may exist up to 130 He-star AM CVns with periods in the same range, which may serve as verification binaries, if detected in the electromagnetic spectrum. In the Milky Way, there are also about 14800 immediate precursors of AM CVn stars. They are detached systems with a stripped low-mass He-star and a white dwarf companion, out of which about 75 may potentially be observed by LISA during its mission.
Introduction
AM CVn stars are a small group of ultracompact cataclysmic binaries (CVs) with He-rich (He white dwarf or stripped He star) donors and carbon-oxygen (CO) white dwarf (WD) accretors. Currently, about 70 definite and candidate AM CVn stars are known, see Ramsay et al. (2018, Table 1), Wevers et al. (2016), Burdge et al. (2020), Kato & Kojiguchi (2021), and van Roestel et al. (2021). Estimated orbital periods of AM CVns range from 5.4 min. to 67.8 min. Their evolution is driven by gravitational waves radiation (GWR, Paczyński 1967). Detailed reviews of them were published by Solheim (2010) and Ramsay et al. (2018). Lipunov & Postnov (1987) estimated that binary WDs may be the strongest GWR sources detected using lasers in space. Hellings (1996) recognised that AM CVn stars are also expected to be detectable by space GWR antennas. Along with massive black hole binaries and extreme or intermediate mass ratio inspirals, compact binaries are among the main objects expected to be observed by the planned space GWR observatories such as the Laser Interferometer Space Antenna (LISA, see Amaro-Seoane et al. 2022, and references therein), Taiji, and TianQin (Gong et al. 2021).
The special importance of AM CVn stars for GWR astrophysics stems from the fact that some of them belong to the so-called verification binaries, or guaranteed sources, for space GWR detectors (S. Phinney 2001, unpublished; Stroeer & Vecchio 2006; Nelemans 2009; Kupfer et al. 2018; Huang et al. 2020). It is expected that they will be discovered rather soon after the beginning of the missions because of a strong signal due to their proximity to the Sun; additionally, thanks to the knowledge of mass ratios of components, orbital periods, and distances from observations in the electromagnetic spectrum, they will serve for testing and calibrating the detectors.
Theoretical modelling and observations suggest three formation channels for AM CVn stars. A detailed analysis of them may be found, for instance, in Nelemans (2005) and Postnov & Yungelson (2014). Here, we recall only basic details of these channels.
The double-degenerate (DD, Paczyński 1967) channel envisions the formation of a binary harbouring a CO WD and a less massive He WD companion. If the system is sufficiently tight, the He WD may overflow its Roche lobe in a time shorter than the Hubble time due to angular momentum loss (AML) via GWR, and under certain conditions stable mass exchange is expected to commence (see also Tutukov & Yungelson 1979; Nelemans et al. 2001a; Marsh et al. 2004). In the DD channel, the binary may evolve from P orb ≈ (2-3) min. at RLOF to P orb ∼ 1 hr.
In the single-degenerate (SD) channel (Faulkner et al. 1972; Savonije et al. 1986), a stripped 'semidegenerate' He star may accompany a CO WD. In this case, RLOF by the progenitor of the donor should occur before He exhaustion in its core (Iben & Tutukov 1987). At contact, P orb is several dozen minutes, depending on the masses of components and the extent of exhaustion of He in the core of the future donor. The binary first evolves to P min ∼10 min., which is attributable to the change in sign of the M − R relation when the thermal timescale of the donor becomes substantially longer than the timescale of angular momentum loss by GWR (see Yungelson 2008, for more details). The scenario of formation of He-donor AM CVn stars was extensively discussed, for example, by Tutukov & Yungelson (1996); Nelemans et al. (2001a); Postnov & Yungelson (2006); Solheim (2010); Postnov & Yungelson (2014); Götberg et al. (2020), and Bauer & Kupfer (2021).
Following Nelemans et al. (2001a), the objects that formed via the SD channel are classified as AM CVns after passing P min . However, as noted by Bauer & Kupfer (2021), already before reaching P min , relatively high-mass subdwarf donors may transfer matter from (0.2 - 0.3) M outer He shells that contain the ashes of CNO burning. In this case, accretion disk spectra resemble the spectra typical for AM CVns.
In the SD channel, when the donor mass decreases to (0.02 - 0.03) M , it begins to cool, its thermal timescale becomes shorter than the mass-loss timescale, it becomes more degenerate, and the M − R relation gradually merges with that for cool WDs (see Wong & Bildsten 2021, for a detailed discussion). The formation of a particular AM CVn system via the DD or SD channel may be inferred from the abundances of elements and their ratios, especially N/He, N/C, N/O, O/He, and O/C; however, the derivation of abundances is a formidable task.
The evolved donor channel (Tutukov et al. 1985) is, in fact, the standard scenario of formation of CVs, in which the donor overflows the Roche lobe when the hydrogen abundance in the core becomes ≲ 0.1. The donor becomes a star with an almost H-depleted core and a thin H envelope, which later may be lost. Hypothetical AM CVn stars formed via this channel typically have P orb ≳ 30 min. and only very rarely may evolve to P orb ∼10 min. (Podsiadlowski et al. 2003; Goliasch & Nelson 2015; Kalomeni et al. 2016; Yungelson 2018; Liu et al. 2021). Strictly speaking, this channel does not produce 'classic' AM CVn stars, since the abundance of H at the surface of the donors hardly decreases below 10 −4 − 10 −3 , while spectral lines of H should be observed for an abundance exceeding 10 −5 (Nagel et al. 2009). The parameters of the well-studied system Gaia14aae are apparently matched best by the evolved-donor channel, but no H is observed in its spectrum.
For completeness, we note that about a dozen so-called HeCVs with an enhanced He/H abundance ratio have P orb below the conventional P orb,min (Breedt et al. 2012; Breedt 2015; Kennedy et al. 2015; Lee et al. 2022). Some of them may finish their evolution in the sub-minimum periods' range or bounce (change the sign of Ṗ) and return to P orb ≳ 70-80 min, retaining H in the envelopes. Envelopes of other HeCVs may contract after their mass drops below a certain minimum. Then they experience a detached stage of evolution and, due to continuing angular momentum losses, join the family of DD AM CVns. The rest of the H envelopes may be expected to be lost very soon after RLOF, when Ṁ is high.
Observational data suggest that the population of AM CVn stars is totally dominated by the objects that formed via the DD channel. Currently, only a few candidate AM CVn stars with a He-star donor are known: SDSS J0926+3624 (Copperwheat et al. 2011), ZTFJ1637+49, and ZTFJ0220+21 (van Roestel et al. 2021).
The first binary population synthesis (BPS) studies of AM CVn stars, including both the DD and SD channels, were performed by Tutukov & Yungelson (1996), Nelemans et al. (2001a), and Nelemans et al. (2004). Later studies, as a rule, were focused on the DD channel. This is mainly related to the apparent scarcity of observed He-donor AM CVns and the badly understood consequences of accretion of He onto WDs at the expected accretion rates, especially for rotating WDs. Even so, studies of He-star AM CVns may be important: if they exist but are simply not recognised due to selection effects, they may increase the sample of LISA verification binaries.
For the present paper, we modelled the Galactic population of stripped He stars with WD companions that later form AM CVn stars with He-star donors. We modelled their formation and followed their evolution to RLOF and through the mass-transfer stage to the instant when the mass of the accretor reached M Ch or the mass of the donor decreased to (0.02 - 0.03) M , the limit imposed by the evolutionary code used. We estimate the number of AM CVn stars and their direct precursors and evaluate the number of AM CVn stars that may be detected in GW with a signal-to-noise ratio (S/N) > 5 by the space detector LISA during a 4 yr long mission. We present our model in Section 2. In Section 3, the results of the modelling are presented. We discuss the results and summarise our conclusions in Section 4.
Population synthesis
For the modelling, we applied the hybrid BPS method (Nelson 2012; Chen et al. 2014; Goliasch & Nelson 2015): the evolution of binaries up to the formation of precursors of AM CVn systems, WDs accompanied by He stars, was computed by means of an updated fast analytic BPS code BSE (Hurley et al. 2002), while their further evolution was simulated using a grid of precomputed full evolutionary tracks. The advantage of hybrid population synthesis over other BPS algorithms, implemented in particular in BSE, is that the latter are usually based on analytic approximations to the evolutionary tracks of relatively well-explored single stars. The description of the evolution of close binaries, for which systematic studies are much scarcer since they are more complicated, is often based on 'educated guesses'. This is particularly true for the second RLOF in systems with compact accretors. Furthermore, as test runs show, BSE does not reproduce the evolution of semi-detached systems with He donors, which gradually become more degenerate.
The crucial assumptions were as follows. The initial mass function (IMF) of primary components followed the Salpeter law (dN/dM ∼ M −2.35 ) in the mass range 1 ≤ M 1 /M ≤ 100. A flat distribution of mass ratios of components q = M 2 /M 1 ≤ 1 in the [0.1,1] range (Kraicheva et al. 1989) was assumed. The stellar binarity rate was set to 50%, that is to say, two-thirds of all stars are binary components. The initial distribution of close binaries over orbital periods was adopted after Sana et al. (2012): f (log P orb ) ∝ (log P orb ) −0.55 . For the treatment of common envelopes (CE), we applied the energy-balance formalism of Webbink (1984) and de Kool (1990) with the 'CE efficiency' parameter α ce =1 and binding energy parameter λ values from Loveridge et al. (2011).
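For illustration, the Monte Carlo draw of these initial distributions can be coded in a few lines. The sketch below is our own and not code from the paper; the log P orb sampling range and the simple rejection scheme are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_primary_mass(n, m_lo=1.0, m_hi=100.0, alpha=2.35):
    """Salpeter IMF, dN/dM ~ M^-alpha, drawn by inverting the CDF."""
    a = 1.0 - alpha
    u = rng.random(n)
    return (m_lo**a + u * (m_hi**a - m_lo**a))**(1.0 / a)

def sample_mass_ratio(n, q_lo=0.1):
    """Flat distribution of q = M2/M1 on [q_lo, 1] (Kraicheva et al. 1989)."""
    return rng.uniform(q_lo, 1.0, n)

def sample_log_period(n, lo=0.15, hi=5.5, pi_exp=0.55):
    """f(log P) ~ (log P)^-0.55 (Sana et al. 2012), via rejection sampling
    over an assumed range of log10(P_orb/day)."""
    out = []
    f_max = lo**-pi_exp          # the density is largest at the short end
    while len(out) < n:
        x = rng.uniform(lo, hi)
        if rng.random() < x**-pi_exp / f_max:
            out.append(x)
    return np.array(out)

m1 = sample_primary_mass(100_000)
m2 = m1 * sample_mass_ratio(100_000)
log_p = sample_log_period(100_000)
```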
Evolutionary tracks for tracing the further evolution were computed with the updated P.P. Eggleton evolutionary code STARS (Eggleton 1971, 2006); for more details, readers can refer to Yungelson (2008). All computations were carried out for metallicity Z=0.02. The STARS code lacks opacity tables that would allow one to compute models with masses ≲ 0.02 M , corresponding to P orb ≳ (42-43) min.
By coincidence, at P orb slightly exceeding 40 min., it is impossible to detect the GWR signal of AM CVn stars by LISA due to the foreground and antenna noise (see Fig. 3 below).
Computation of the gravitational-wave strain
Via BPS, we found the masses of WDs (M WD ), stripped He stars (M He ), and the orbital periods P i just after the end of RLOF or common-envelope phases for binaries in which a He star may later initiate stable mass transfer. For the mission lifetime of LISA, T = 4 yr, the characteristic strain of an inspiraling binary can be calculated as (Thorne 1987)

$$h_c \approx 3.75\times10^{-19}\left(\frac{f_{\rm gw}}{1\,{\rm mHz}}\right)^{7/6}\left(\frac{\mathcal{M}}{M_\odot}\right)^{5/3}\left(\frac{d}{1\,{\rm Kpc}}\right)^{-1}\left(\frac{T}{4\,{\rm yr}}\right)^{1/2}, \qquad (1)$$

where f gw [Hz] = 2/P orb [s] is the GW frequency, d is the distance to the object, and

$$\mathcal{M} = \frac{(M_{\rm WD}\,M_{\rm He})^{3/5}}{(M_{\rm WD}+M_{\rm He})^{1/5}} \qquad (2)$$

is the so-called chirp mass. Helium-donor AM CVn stars are young objects (t ≲ 2 Gyr) and belong predominantly to the thin disk population (Ramsay et al. 2018). Therefore, to determine d, we assumed that the space distribution of progenitors of AM CVn stars in the Galaxy may be described as

$$\rho(R,z) \propto \exp(-R/R_d)\,\exp(-|z|/z_d), \qquad (3)$$

where 1 ≤ R ≤ 16 Kpc is the galactocentric radial distance, R d = 2.5 Kpc is the characteristic radial scale, z is the distance to the Galactic plane, and z d =0.3 Kpc is the characteristic scale height of the disk (Jurić et al. 2008). We did not consider the inner region of the Galaxy with R ≤1 Kpc, hosting a 'bulge/bar', where young stars are absent (Joyce et al. 2022). The volume of the 'excluded' region is quite conservative, taking the complicated structure of the inner region of the Milky Way into account (e.g. Valenti et al. 2016). The Galactic thin disk age was set to 10 Gyr. The current number of proto- and AM CVn systems was obtained by convolving the birthrates of model systems with their lifetimes and the star formation rate (SFR), considered to be constant over the past 2-3 Gyr and equal to 2 M yr −1 (Chomiuk & Povich 2011; Licquia & Newman 2015). The birthrate of AM CVn precursors for an SFR of 1 M yr −1 may be found as ν = C × N AM /N BSE , where N AM is the number of precursors obtained by evolving N BSE initial systems by BSE and C=0.045 is the fraction of systems with M 1 ≥ 1 M in the [0.1,100] M range (for the Salpeter IMF). The number of systems populating a rectangular cell (i, j) of a regular grid with steps ∆f and ∆h c was then computed by summing, over the N STARS i,j systems associated with the given cell, the durations 0 ≤ ∆t k,l ≤ ∆t k of the AM CVn stage that system l spends in the cell, weighted by the birthrate and normalised to the N BSE systems initialised in BSE. We used N BSE = 10 5 .
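As a numerical check of Eqs. (1)-(2), the following Python sketch evaluates the characteristic strain of a binary. The sqrt(4/5) angle-averaging factor is our own assumption, chosen because it reproduces the 3.75 × 10^-19 normalisation quoted above:

```python
import numpy as np

G, C, MSUN, KPC = 6.674e-11, 2.998e8, 1.989e30, 3.086e19  # SI units
T_MISSION = 4 * 3.156e7                                   # 4 yr, in s

def chirp_mass(m1, m2):
    """Chirp mass, Eq. (2), in the same units as the inputs."""
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

def characteristic_strain(m_wd, m_he, p_orb_min, d_kpc):
    """h_c = h * sqrt(f_gw * T) for a nearly monochromatic binary,
    with h the angle-averaged strain amplitude of a circular orbit."""
    f_gw = 2.0 / (p_orb_min * 60.0)          # GW frequency in Hz
    mc = chirp_mass(m_wd, m_he) * MSUN
    d = d_kpc * KPC
    h = np.sqrt(4.0 / 5.0) * 4.0 * (G * mc)**(5.0 / 3.0) \
        * (np.pi * f_gw)**(2.0 / 3.0) / (C**4 * d)
    return h * np.sqrt(f_gw * T_MISSION)

# the exemplary (0.87 + 0.43) Msun system of Sect. 3 at RLOF, d = 1 Kpc
print(characteristic_strain(0.87, 0.43, p_orb_min=20.2, d_kpc=1.0))
# -> ~2.3e-19, consistent with the scaling of Eq. (1)
```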
Formation of He-donor AM CVn stars
Modelling the evolution of 10 5 stars in BSE resulted in the production of 524 progenitors of He-star AM CVns. Corner plots in Fig. 1 show the relations between the masses of components and the orbital periods of the initialised systems, and the stellar types of binary components at the ZAMS, prior to the first RLOF in the system, which results in the formation of the WD component, and prior to the second RLOF, which results in the formation of the stripped He component. We note that some initialised systems have extremely large orbital eccentricities, but in the course of evolution they circularise. While the first RLOF in some systems proceeds stably, the second one almost always involves the formation of a common envelope.
Distributions of the parameters of the naked He-star+WD pairs after the second RLOF are shown as a grid in Fig. 2. The BPS results suggest that the He stars are predominantly the lowest-mass ones, from M min ≈ 0.32 M to ≈ 0.4 M , but in some very rare cases they may be as massive as about 1.2 M . Accretor masses are predominantly in the 0.65 M to 1 M range. The presence of relatively massive donors implies that, even in the case of not very efficient accretion, WDs may accumulate M Ch , as suggested by Solheim & Yungelson (2005). The distribution is dominated by binaries that formed with P orb ≲ 100 min., but sometimes P orb attains 150 min. In wider systems, He in the cores of the stars may be exhausted before RLOF; they end their lives as WDs and merge with their companions due to AML via GWR. In low-mass donors (M sdB,0 ≲ 0.4 − 0.5 M ) and in more massive donors which overflow their Roche lobes when the He abundance in the core is still Y c ≳ 0.3, He burning is quenched almost immediately after RLOF and they transform into AM CVn stars. In more massive donors with lower Y c at RLOF, core He burning continues. When He in the centre is almost exhausted, the stars contract and burn the remainders of He. Expansion after the exhaustion of He results in a merger with their companions (for details, see the case of the (0.65+0.8) M , P orb,0 =90 min. system in Yungelson 2008). Figure 2 provides information on the space of initial parameters of the immediate progenitors of He-star AM CVn systems and their birthrate. We estimate the current Galactic birthrate of He-donor AM CVn stars as ≈ 4.6 × 10 −4 yr −1 . This number is commensurate with the value 2.7 × 10 −4 yr −1 found by Nelemans et al. (2004) for the case of a high mass of the accreted He layer needed for its detonation, which prevents the formation of an AM CVn star (see further discussion below).
Population of He-donor AM CVn stars detectable by LISA
Based on the information provided in Fig. 2, we computed a grid of 275 tracks with different combinations of M WD,0 , M He,0 , and P orb,0 with the STARS code. For the computation of the f − h c relation, every computed system was assigned 20 000 random positions in the thin disk of the Galaxy. The h c obtained were then taken into account with the weight 1/20 000. Some cells shown in Fig. 2 contain up to five systems, and thus, altogether, we had 365 tracks. In the case of several systems in a cell, the computation of h c accounting for random positions was repeated appropriately.

Fig. 2 (caption): Relations between the parameters of detached He-star+WD systems which will evolve into He-star AM CVns. The subpanels show the relations between the masses of WDs and post-CE P orb (a), between the masses of the nascent stripped He star and P orb (b), between the masses of the WD and stripped-star components (c), the differential (red line) and cumulative (black line) distributions over P orb (d), the differential and cumulative distributions over M WD (e), and the differential and cumulative distributions over masses of He stars (f). The plots in the panels are normalised to unity.

Figure 3 shows the present-day distributions of detached pre-AM CVn and AM CVn systems in the f − h c plane. It is mainly defined by the distances to the objects and the lifetimes of stars in the
given frequency range. Evidently, the darkest shades are 'populated' by distant and long-living stars with low-mass donors. The possibility of detecting signals from most systems is limited by the sum of the 'confusion limit' that is formed by the signals of the population of unresolved detached close DWDs and AM CVn stars (Evans et al. 1987; Bender & Hils 1997; Hils & Bender 2000; Nelemans et al. 2001b) and the LISA instrumental noise, which prevents the detection of gravitational signals from binary WDs unless they are very strong. We applied the 'observations-driven' confusion limit (Karnesis et al. 2021; Korol et al. 2022), based on the results of studies of the local DWD population using large samples of objects. The GW foreground dominates at f ≲ 1.8 mHz, while the LISA sensitivity (for S/N >5) defines detectability at higher frequencies (see Fig. 4). The foreground is dominated by detached WDs, and AM CVn stars do not play a noticeable role in its formation (Hils & Bender 2000; Nelemans et al. 2001b). This justifies the use of the confusion limit inferred from the observations of detached WDs only.
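In code, such a combined criterion reduces to comparing a source's characteristic strain with the interpolated foreground+instrument noise at its frequency. The sketch below is purely illustrative: the tabulated noise values are placeholders standing in for the actual Karnesis et al. (2021) confusion estimate and the LISA sensitivity curve:

```python
import numpy as np

# placeholder (f, h_noise) nodes; real values would be tabulated from the
# published confusion-limit and instrumental-sensitivity curves
F_GRID = np.array([1e-4, 3e-4, 1e-3, 1.8e-3, 3e-3, 1e-2])        # Hz
H_NOISE = np.array([3e-18, 1e-18, 4e-19, 2e-19, 8e-20, 6e-20])   # assumed

def detectable(f_gw, h_c, snr_min=5.0):
    """Crude detectability test: S/N ~ h_c / h_noise(f) > snr_min,
    with log-frequency interpolation of the noise curve."""
    h_noise = np.interp(np.log10(f_gw), np.log10(F_GRID), H_NOISE)
    return h_c / h_noise > snr_min

print(detectable(1.65e-3, 2.3e-19))   # example query (placeholder noise)
```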
As mentioned above, our evolutionary code does not allow one to compute the evolution of stars less massive than (0.02 - 0.03) M (P orb ≳ (42-43) min.). We find that the Galaxy harbours about 112 000 He-donor AM CVn stars with P orb ≲ (42-43) min. However, as is clearly seen in Fig. 3, for most systems the periods exceeding 42-43 min. are longer than the periods at the confusion limit.
It is clear from Fig. 3 that, currently, most AM CVn stars should have donor masses below 0.02 M and relatively massive accretors ( ≳ 0.7 M , unless evolution is strongly non-conservative).
In order to illustrate our conclusions, we plotted in Fig. 3 an evolutionary track for the system with initial parameters M He,0 = 0.43 M , M WD,0 = 0.87 M , and P orb,0 =90 min., assuming that its distance to the Sun is 1 Kpc. This system was chosen because its track both starts and ends below the foreground+sensitivity line. We split the track into 'detached' and 'semi-detached' parts. The system remained detached for ∆t ≈36 Myr. The He star overflows its Roche lobe when P orb ≈20.2 min. Before this, the binary might be observed as an sdB+WD system. Several such possible proto-AM CVn systems with well-measured parameters are known and are indicated in Fig. 3. The minimum P orb of the binary is 11.46 min., and it is reached at t ≈ 42 Myr.
Our exemplary binary becomes undetectable in GW when its orbital frequency declines to 1 mHz (P orb ≈ 33.3 min.) at t ≈ 131 Myr. At this t, the mass-exchange rate is ≈ 1.4 × 10 −9 M yr −1 , meaning that the system should still be bright in the optical. Later, Ṁ rapidly declines. The code breaks at t ≈ 496 Myr after the formation of the He-star + WD system, when P orb ≈ 43.2 min. and M He ≈ 0.023 M .
Discussion and conclusions
We have presented the results of the first study of He-star AM CVns and their immediate precursors using hybrid BPS. The advantage of the method is a more precise tracking of semi-detached binaries than with analytic approximations. However, the physics implemented in the evolutionary code (opacity tables) restricts the range of orbital frequencies of AM CVns available for the study.
Comparison to other studies
We estimated that the birthrate of Galactic thin disk He-donor AM CVn stars is 4.6×10 −4 yr −1 , and the number of objects with P orb ≲ (42 − 43) min. is ≈112 000. We expect that about 500 of them may be discovered during a 4 yr long LISA mission.
In addition, we found that within 1 Kpc around the Sun there might be approximately 130 He-star AM CVns with P orb ≲ (42-43) min. (assuming that they follow space distribution (3)). This is a lower limit on the number of He-star AM CVn systems, since we were not able to trace the evolution for P orb ≳ 43 min.; in fact, we lose the majority of the systems there, since the evolution slows down. One hundred and thirty stars correspond to a space density of 3.1 × 10 −8 pc −3 . This number may be compared to the 2σ limit on the space density of AM CVns based on Gaia DR2 data, ρ > 7 × 10 −8 pc −3 (Ramsay et al. 2018). Keeping in mind the uncertain selection effects and the serendipitous character of discoveries of AM CVn stars, and taking into account, on the one hand, that He-star AM CVn stars may evolve for much longer than we can account for and, on the other hand, that WDs might explode, leaving no bound remnants or disrupting the binary, we deem that this result does not contradict the finding of only three candidate He-donor AM CVns in the sample of known objects.
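The quoted space density follows from dividing the 130 systems by the volume of a sphere of 1 Kpc radius; a quick check:

```latex
\rho \simeq \frac{N}{\tfrac{4}{3}\pi R^{3}}
     = \frac{130}{\tfrac{4}{3}\pi\,(10^{3}\,\mathrm{pc})^{3}}
     \approx 3.1\times 10^{-8}\ \mathrm{pc^{-3}} .
```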
Published estimates for the population of AM CVns were obtained under different assumptions. The most important inputs are the distributions of the initial binaries over the IMF of the primaries, mass ratios of components, orbital separations and eccentricities, the spatial distribution of binaries, the star formation history (SFH), the CE formalism, the efficiency and consequences of accretion, and the treatment of the evolution of a He star. In addition, the estimates of the number of AM CVn stars that might be detected by LISA vary as the project itself and the suggested data processing evolve. We list below the results of some computations with available data on the assumed detection details. Tutukov & Yungelson (1996) found a birthrate of He-donor AM CVn stars ν He−AM = 4.9 × 10 −3 yr −1 for CE efficiency α ce =1 (when considering common envelopes, they set the donor envelope binding energy factor λ = 1). The birthrate declined to 0 for α ce =0.1. The reason is clear: future He donors are formed via CE. Roughly, post- and pre-CE separations are related as a f ∝ α ce × a i (see below). For small α ce , all possible precursors merge in the CE. If α ce exceeds some limit, all post-CE systems are so wide that He in the cores of potential donors is exhausted before RLOF. The evolution of He-donor AM CVn stars was integrated assuming a constant Ṁ = 3 × 10 −8 M yr −1 . For α ce =1, the number of objects was estimated as N He−AM = 4.9 × 10 5 to 1.9 × 10 5 . The reason for the rather small number of objects was the assumption that, depending on M WD and Ṁ, the accreted layer may either detonate after the accumulation of 0.2 M of He and destroy the WD in a supernova (SN), or it may strongly expand and an R CrB star may form. Nelemans et al. (2001a) and Nelemans et al. (2004), who used the code SEBA, varied the SFR(t), several assumptions on stellar evolution, the Galactic model, and the age. As opposed to most other studies, a CE formalism based on the conservation of angular momentum was used for stars with non-compact components. Most importantly, it was assumed that an accreted He layer may detonate after the accretion of either 0.15 M or 0.30 M . Within the range of different assumptions, ν He−AM varied from 0.27 × 10 −3 to 1.6 × 10 −3 , N He−AM varied from 1.1 × 10 7 to 3.1 × 10 7 , and the space density ρ exceeded 2 × 10 −5 pc −3 . Nissanke et al. (2012), using the updated version of SEBA (Toonen et al. 2011) and varying assumptions on the occurrence of double detonations and the Galactic model, estimated N He−AM as 0 to 1.12 × 10 7 . The number of detectable He-star AM CVn systems in the most optimistic case (5 Mkm detector) was only about 80.
The basic difference between the present study and the studies involving SEBA is the treatment of the evolution of He stars. In SEBA, an analytic approximation to the M − R relation was used, the minimum masses of the donors were 0.007 M , and the Galactic age was set to 13.5 Gyr. The M − R relation roughly approximated the post-period-minimum part of the donor track for the system (0.5+1.0) M in which the donor overfilled the Roche lobe almost unevolved (Tutukov & Fedorova 1989). For the sake of comparison, we computed the evolution of the (M He,0 + M WD,0 , P 0 )=(0.43 M +0.87 M , 90 min.) and (M He,0 + M WD,0 , P 0 )=(0.33 M +0.72 M , 35 min.) systems using this M − R relation. Computations were continued until M He ≈ 0.007 M , as in Nelemans et al. (2001a); readers can refer to Fig. 4 for more information. This figure clearly shows the major difference from the tracks computed by a full evolutionary code: a shift to larger frequencies and strain. The lifetimes of the systems in the region of the f − h c diagram above the LISA sensitivity line are 41.4 Myr and 67.5 Myr, about 30% and 50% longer, respectively, than those of the systems computed by the evolutionary code. These factors, along with the higher birthrate and a different Galactic model, may be partially responsible for the higher number of potentially detectable systems and their higher frequencies in the models of Nelemans and his coauthors.
Other studies of AM CVn stars and their detection neglected He-star systems, as they have been deemed unimportant sources for LISA compared to detached binary WDs and DD AM CVn stars. We only list below some estimates of detection rates. Nelemans et al. (2004) estimated the total number of detectable DD AM CVns as ≈11 000 for a 5 yr mission and S/N>5; Ruiter et al. (2010), assuming a 1 yr long mission, S/N>5, and a 5 Gm arm length for the detector, found N=5300 sources; Nissanke et al. (2012) estimated the number as N ≲ 2000; Yu & Jeffery (2013) found 8010, 19820, and 3840 objects for quasi-exponential, constant, and instantaneous SFH, respectively, after 1 yr of integration and S/N>3; Kremer et al. (2017) found 2700 sources, requiring S/N>5 and negative chirp < 0.1 yr −2 ; Breivik et al. (2018) provide N∼3000 as an average over different assumptions on common envelope parameters for a 4 yr long mission. Keeping all the uncertainties in BPS in mind, as well as the different detection criteria, all estimates are, in fact, in the same range. Götberg et al. (2020), using a simplified BPS and a grid of tracks (Götberg et al. 2019), estimated the number of stripped (0.3 - 2.5 M ) He stars with WD companions in the Galaxy as ∼ 90 000 and suggested that 15% of these systems are currently in the mass-transfer stage. Götberg et al. (2020) do not present the M WD − M d relation for the binaries that started mass transfer, but a naked-eye comparison of Fig. 2 from the present paper and Figs. 2 and 3 from Götberg et al. (2020) suggests that no more than about 30% of binaries (≈4000) from the 'interacting' sample will experience stable mass transfer (M d ≲ 1 M , P orb ≲ 1 hr). This number is still commensurate with our estimate of 14 800 pre-AM CVn stars, keeping in mind the differences in the assumptions about the initial distribution of binaries over masses and periods, the CE parameters, and the evolutionary codes. Götberg et al. (2020) estimate that, under extremely favourable assumptions, LISA will be able to detect about 100 He-star+WD systems with S/N>5 within a 10 yr mission. However, realistic assumptions suggest numbers ≲ 10.
Dependence on assumptions
While the existence of DD AM CVn stars is beyond doubt, there are some questions concerning their formation and fate. This is associated, foremost, with the orders-of-magnitude discrepancy between the observed and predicted space densities of the objects ρ. Observations suggest ρ > 7 × 10 −8 pc −3 (Ramsay et al. 2018), which is at least an order of magnitude less than the numbers listed above. Among the major unsolved problems is the stability of mass exchange immediately after RLOF by He WDs since, in the case of inefficient spin-orbit coupling, most systems should merge (Nelemans et al. 2001a; Marsh et al. 2004; Brown et al. 2016). Existing observational data on the slow rotation of two AM CVn stars are too scarce for any conclusions to be drawn. Shen (2015) noted a problem common to other CVs as well (see also Metzger et al. 2021; Shen & Quataert 2022). In the initial stages of RLOF, H-rich matter is transferred, leading to classical novae outbursts. In AM CVn stars, the later transfer of He may cause outbursts. If these events result in the formation of envelopes that engulf both components, dynamical friction may shrink the orbits, enhance the mass-transfer rate, and finally lead to the merger of the components. Just-formed 'stripped stars' possess H-rich envelopes (Ziółkowski 1970). Thus, they may first experience a series of H novae, if Ṁ is appropriate, and, later, outbursts of He burning, also accompanied by the formation of common envelopes. However, these inferences should be confirmed by modelling WD motion inside the postulated envelopes.
We assumed that mass exchange in pre-AM CVn and AM CVn systems is conservative. This is a certain simplification, since it is known that He accretion onto WDs at rates close to the range of expected accretion rates in AM CVn stars may result in thermonuclear outbursts of different strengths in the layer of accreted He (e.g. Taam 1980; Nomoto 1982a,b; Iben & Tutukov 1991; Woosley & Kasen 2011; Piersanti et al. 2014), ranging from weak flashes to detonations, potentially initiating sub-Chandrasekhar SNe Ia via the double-detonation mechanism (Livne 1990) or, for example, faint peculiar SNe Iax due to deflagration in a He layer (Justham et al. 2009). If the SN disrupts the binary, a single helium-rich object may be formed. It has been suggested that the runaway helium star US 708 is a remnant of a binary disrupted by a double-detonation sub-Chandrasekhar SN. Two further candidates of the same class were recently suggested by Neunteufel et al. (2022). We note, however, that an attempt to model a population of high- and hypervelocity He stars in the latter paper hints at a very low rate of events that disrupt He-star+WD binaries.
In the strong flashes, the accretor loses the accumulated He layer partially or completely. However, the issue of possible detonations and the matter retention efficiency in flashes has not been solved yet, especially since the character of thermonuclear flashes depends on the rotation of the accretor. In the sample of pre-AM CVn stars generated by BPS, we found no system satisfying all the conditions for the accumulation of a He layer prone to detonation on rotating WDs formulated by Neunteufel et al. (2019). However, as an illustration of the possible influence of the loss of matter, we present in Fig. 4 the track for the (M He,0 + M WD,0 , P 0 )=(0.43 M +0.87 M , 90 min.) system computed under the extreme assumption that all accreted matter is lost by the WD via isotropic re-emission. The experiment shows two effects. First, a decrease in the total mass of the system results in a reduced strain (see Eqs. (1) and (2)), but the difference is not significant. Second, since isotropic re-emission slows down the widening of the system, the non-conservative binary spends about 60 Myr above the confusion limit, which is about 50% longer than the time spent by its conservative counterpart. The combined effect would be an increase in the number of AM CVn stars above the confusion limit, but with slightly weaker signals.
For comparison with the (M He,0 + M WD,0 , P 0 )=(0.43 M +0.87 M , 90 min.) system discussed above, we plotted in Fig. 4 the track for a more typical system, (M He,0 + M WD,0 , P 0 ) = (0.33 M +0.72 M , 35 min.), also assuming a distance of 1 Kpc. The system is observable as a detached He-star+WD binary for 8.9 Myr. It spends about 89 Myr above the foreground, that is, 3 times longer than the more massive system. However, at a given frequency, the signals are quite comparable. At the orbital frequency f ≈ 10 −3 Hz, the characteristic strain h c becomes lower than the foreground level. The mass of the donor at this instant declines to ≈0.045 M , and the mass-exchange rate becomes 1.7 × 10 −9 M yr −1 , that is to say, the star should still be relatively bright. The further evolution of the system as an AM CVn star, which we were able to trace with STARS, lasted for ∆t ≈ 334 Myr. The mass of the last donor model is ≈0.026 M , and the orbital period of the system is 43.7 min.
The greatest uncertainty in the BPS is the treatment of common envelopes (see Ivanova et al. 2020, for the latest review). It is still an unsolved 3D problem and, therefore, simple energy or angular momentum balance considerations were applied. The post-CE separation of components a f was evaluated using the energy balance

$$\frac{G M_d M_e}{\lambda\, r_L\, a_i} = \alpha_{ce}\left(\frac{G M_c M_2}{2 a_f} - \frac{G M_d M_2}{2 a_i}\right), \qquad (5)$$

where M d is the mass of the donor, M 2 is the mass of the accretor, M c is the mass of the donor core, M e is the mass of the donor envelope, α ce is the so-called common envelope efficiency, λ is the binding energy parameter of the donor envelope, and r L is the Roche lobe radius (in units of a i ). The most problematic term in Eq. (5) is α ce × λ. It is evident that α ce ≤ 1, unless highly uncertain additional energy sources (see Ivanova et al. 2020) are invoked, and α ce should be 'individual' for all binaries. In its turn, λ depends on the evolutionary stage of the star, the core-boundary definition, and the possible accounting for terms other than the gravitational binding energy. Since a f /a i ∝ α ce × λ, attempts to evaluate α ce from empirical data in fact provide this product, not α ce itself, unless some assumptions on λ are made. Theoretically, the run of λ along evolutionary tracks may be evaluated for certain sets of assumptions. As shown, for example, by Dewi & Tauris (2000) and Loveridge et al. (2011), for intermediate-mass stars, the precursors of components in He-star AM CVns, λ is about 0.2-0.4 in the RGB stage and becomes closer to 1 in the AGB stage. Keeping in mind that the formation of He-star AM CVns involves two common envelope episodes (Fig. 1) and the uncertainties in the derivation of α ce and λ, we consider α ce =1 a reasonable and conservative assumption.
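A minimal sketch of solving Eq. (5) for a f is given below. This is our own illustration, not code from the paper: r L is taken in units of a i , and λ = 0.3 is an assumed RGB-stage value, so the specific numbers are only indicative:

```python
def post_ce_separation(a_i, m_d, m_2, m_c, r_l, alpha_ce=1.0, lam=0.3):
    """Solve Eq. (5) for the post-CE separation a_f.

    Masses in any consistent unit; r_l is the Roche-lobe radius of the
    donor in units of a_i. G cancels, and the rearranged solution is
        a_f = alpha_ce * M_c * M_2 * a_i /
              (2 * M_d * (M_e / (lam * r_l) + alpha_ce * M_2 / 2))
    """
    m_e = m_d - m_c  # envelope mass of the donor
    return (alpha_ce * m_c * m_2 * a_i
            / (2.0 * m_d * (m_e / (lam * r_l) + alpha_ce * m_2 / 2.0)))

# e.g. a 2 Msun RGB donor with a 0.45 Msun core and a 0.8 Msun companion
print(post_ce_separation(a_i=1.0, m_d=2.0, m_2=0.8, m_c=0.45, r_l=0.5))
# -> ~0.008, i.e. the orbit shrinks by roughly two orders of magnitude
```

Because the binding term usually dominates the right-hand side, a f /a i scales roughly as α ce × λ, which is exactly the degeneracy discussed above.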
As a test, we performed three runs of BSE for 10 6 initial systems, assuming α ce =0.5, 1, and 4. The obtained numbers of precursors of He-star AM CVns were 275, 5296, and 21670, respectively. This is understandable: by virtue of the relation a f /a i ∝ α ce × λ, more systems merge in common envelopes for small α ce and many more survive for (probably nonphysical) high α ce . We note that even for α ce =0.5, the number of stars potentially observable by LISA would be small, but not negligible (in our computations for 10 5 initial binaries, there were 524 progenitors for α ce =1).
Observed sdB+WD binaries
Along with the He-donor AM CVn stars, we estimated the number of their immediate precursors: detached stripped He-star+WD binaries. It is clear that the lifetime in the pre-AM CVn stage is short, and the number of precursors is small, close to 14 800. About 80 of them may be detected by LISA, and they should be among the strongest sources (Fig. 3). In observations, they should be identified with detached sdB+WD systems.
We plotted in Fig. 3 the positions of well-studied sdB+WD binaries with estimated distances. It is worth considering their destiny and possible relation to the AM CVn stars. Neunteufel et al. (2019) have shown that for the combinations of donor and rotating-WD masses found in these systems, detonation of an accreted He layer does not occur; rather, deflagrations and the ejection of some matter may be expected. Thus, these binaries may become AM CVn stars. For PTF J2238+7430, our conclusion is in disagreement with that of Kupfer et al. (2022), who expect a double detonation after the accumulation of 0.17 M of He, but they did not take rotation effects into account.
HD265435 is a massive (M sdB ≈ 0.62 M , M WD ≈ 0.91 M ), wide (P orb ≈90.1 min.) system (Pelisoli et al. 2021). The large P orb suggests that the RLOF by the sdB will happen when He in its core is considerably exhausted. Because of the high mass of the donor, the presumably low abundance of He in the core, and the expected continuation of He burning after RLOF, we expect that HD265435 will not become an AM CVn star; rather, its donor will merge with the companion, possibly producing an SN Ia. A similar conclusion was also reached by Pelisoli et al. (2021).
As described in §3, post-RLOF evolution of He stars depends on their mass and degree of exhaustion of He in their cores. An additional factor, which defines the possible outcome of accretion of He onto WDs, is rotation.
The semi-detached system ZTF J2055+4651 (Kupfer et al. 2020b), with M sdB ≈ 0.4 M and M WD ≈ 0.68 M , has a long P orb = 56.35 min. The presence of H in the spectrum means that the subdwarf overflowed its Roche lobe close to the current P orb . This implies that He in the core of the subdwarf is almost exhausted; it will evolve into a hybrid WD and merge with the companion, as also suggested by Kupfer et al. (2020b). The system ZTF J2130+4420 (Kupfer et al. 2020a) is similar to ZTF J2055+4651: M sdB ≈ 0.337 M , M WD ≈ 0.545 M , and P orb =39.34 min. There is H in its spectrum too, and it may be expected that its evolution will be similar to that of ZTF J2055+4651.
The most recently discovered system, J1920-2001 (Li et al. 2022), with M sdB ≈ 0.337 M and M WD ≈ 0.545 M , differs from the above-mentioned systems by its large P orb = 3.4946 hr. The position of the subdwarf in the T eff − log g diagram suggests that it is in the He-shell burning stage. After completion of this stage, it will turn into a WD and merge with its companion within 1 Gyr, as estimated by Li et al. (2022).
The number of stripped He stars in the Solar vicinity
At the suggestion of the referee, we compared the model number of stripped He stars in the 1-Kpc vicinity of the Sun with the number of objects in the same region in the catalogue of known hot subdwarfs (Geier 2020). In fact, the solution to such a problem requires full population synthesis of subdwarfs and is far beyond the aim of the present paper.
The stars in Geier's catalogue have parallaxes π from Gaia DR2. To obtain the distances d to the stars, we cross-correlated this catalogue with that of Bailer-Jones et al. (2018), which provides Gaia distances corrected for the non-linearity of the π → d transformation.
Assuming the Galactic SFR after Yu & Jeffery (2011) and using the same assumptions as for modelling the AM CVn population, we estimated with BSE the birthrate ν and the current Galactic number N of detached He-star+WD systems (ν ≈ 5.8×10 −4 yr −1 and N ≈ 40), He-star+MS-star binaries (ν ≈ 1.9 × 10 −3 yr −1 and N ≈ 130 for M He ≤ 1.5 M ), and single subdwarfs formed by the merger of He WDs (ν ≈ 9.4 × 10 −5 yr −1 , N ≈ 35). The number of obtained model objects (≈ 200) is 5 times lower than the number of objects within 1 Kpc from the Sun in the subdwarf catalogue.
However, observed hot subdwarfs are a mixture of objects (see, e.g., the comprehensive review by Heber (2016)), possibly also forming via channels different from those listed above. We note that we did not consider a possible merger of red giants and low-mass main-sequence stars in common envelopes. As suggested by Politano et al. (2008), such mergers may result in the formation of single hot subdwarfs. The solution to the problem requires 3D modelling, which is still beyond our current capabilities. In our model, such mergers occur at the rate ν = 1.8×10 −2 yr −1 . If this channel really produces predominantly hot subdwarfs, and if their typical lifetime is (200 - 300) Myr, as for the subdwarfs that formed via the 'stripping' channel, this scenario may turn out to be the main route for the formation of single hot subdwarfs. Keeping in mind that about 0.1% of all hot subdwarfs formed via the 'stripping' channel reside within 1 Kpc from the Sun, formation via mergers may also resolve the problem of the 'deficiency' of model stars relative to the catalogued ones.
We also remind readers that the hypothesis of 'hot flashers' as single-subdwarf precursors (see, e.g., D'Cruz et al. 1996, and references therein) remains unresolved, as do the suggestions that subdwarfs may be formed due to the interaction of red giants with brown dwarfs (Nelemans 2010) or even planets (Soker 1998).
Conclusions
To summarise, we explored the formation of short- and moderate-period (P orb ≲ 43 min.) AM CVn stars with He donors, conjoining fast BPS for their progenitors with detailed evolutionary computations for the AM CVn stage itself. We found that the number of such systems in the Galaxy -if their components do not merge due to tidal friction in the novae envelopes in the early stages of mass transfer and if they are not destroyed by He detonations -may amount to 112 000. About 500 of them may be detected by LISA with S/N>5 during a 4-yr mission. In addition, LISA may detect up to 80 of their immediate precursors.
Helium-star AM CVns were modelled separately from DDs in several papers using the code SEBA (Nelemans et al. 2001a) and its clones only. Since we did not cover the entire range of P orb , it is impossible to compare the predicted numbers directly. Nevertheless, it is clear that we expect a much lower number of AM CVns, mostly because of the difference between the results of evolutionary computations for binaries and results based on analytic M − R relations. The low rate of predicted detections of AM CVns in GWR in our study may be justified, since the main limiting factor is the confusion limit. The scarcity of He-star AM CVns in the total sample of observed AM CVn stars may mean that unrecognized selection effects still exist. As well, the problem of a possible merger of components due to tidal friction in the envelopes ejected in strong flashes of nuclear burning of accreted H and He still remains open. The same concerns possible explosions of accreting WDs as SNe or the decay of binaries due to mass ejection in the outbursts of He burning. Possible non-detection of these stars by LISA may confirm these assumptions. The deficiency of He-donor AM CVn stars may also point to a low value of the product of the 'common envelope efficiency' and the binding energy parameter, α ce × λ. The much lower number of model hot subdwarfs within 1 Kpc from the Sun compared to the number of them with Gaia distances catalogued by Geier (2020) may indicate that there are other scenarios of formation of single subdwarfs, apart from the merger of WDs or the disruption of binaries. This problem definitely deserves a dedicated study. | 2022-10-18T01:16:13.415Z | 2022-10-16T00:00:00.000 | {
"year": 2022,
"sha1": "f54e0975b1559cad7cd57782c623ee6c779ac5f9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f54e0975b1559cad7cd57782c623ee6c779ac5f9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
109228167 | pes2o/s2orc | v3-fos-license | Multifocal Colonic Endometriosis: Diagnostic Challenge and Therapeutic Modalities
Endometriosis occurs in 7-10% of women in the general population. It is an estrogen-dependent disease and thus usually affects women of reproductive age. Intestinal involvement of endometriosis causing obstruction is relatively uncommon and is difficult to differentiate from neoplasia before surgery. Among women with intestinal endometriosis, the rectum and sigmoid colon are the most commonly involved areas (75–90%). We report a 22-year-old female who presented with abdominal pain and rectal bleeding and who had multifocal colonic endometriosis involving the sigmoid colon and two other proximal foci (15 cm apart), diagnosed by histopathology after laparoscopic resection of the involved area with end-to-end anastomosis. She noticed significant pain relief one month after surgery and GnRH therapy. We reviewed the literature to find the best approach in such cases. Endometriosis is a common, debilitating, benign gynecologic condition. Multifocal symptomatic colonic endometriosis is a rare and frequently underdiagnosed disease with delayed diagnosis, but it should be considered in the differential diagnosis of reproductive-age women with cyclic abdominal pain and rectal bleeding. In this report, we present a case of a young female with multifocal colonic endometriosis and discuss its clinical, radiological, and colonoscopic findings. A literature review was done to find the best approach to managing such cases.
Introduction
Endometriosis, the presence of endometrial-like glands and stroma outside the uterus, is a common, poorly understood, and extremely debilitating benign gynecological condition. No cure exists for the disease, and treatment is directed toward medical suppression, surgical excision, and symptom alleviation.
Intestinal endometriosis is a rare condition and can have variable presentations; moreover, the diagnosis is often delayed. Among women with intestinal endometriosis, the rectum and sigmoid colon are the most commonly involved areas. It rarely affects the whole colonic circumference, having the potential to cause obstruction or perforation. Its differential diagnosis includes colonic neoplastic disorders (benign and malignant), inflammatory bowel disease [1], ischemic colitis [2], and some types of diverticulosis.
We report a case of multifocal colonic endometriosis involving the rectosigmoid and descending colon.
Case Report
A 22-year-old, single, Saudi female who was well until 6 years ago, when she started to complain of abdominal pain.
Her menarche was at the age of 12 years. The pain is diffuse, gradual in onset, colicky, intermittent, and lasts for 1-2 hours/day during menstruation; it increases two days before menstruation and is improved by defecation, rest, and antispasmodic medications. It is associated with increased frequency of urination and intermittent bloody stool during the first 3 days of menstruation, mixed with some mucus.
In the last year, the pain became localized in the left lower quadrant. Her medical history was otherwise unremarkable. Her symptoms became considerably worse in the last month.
Colonoscopy revealed normal rectal mucosa and a narrow rectosigmoid through which the scope could not be passed; incidentally, many pinworms were seen (Figure 1). Abdomino-pelvic CT scan (Figure 2) showed eccentric thickening at the rectosigmoid junction and sigmoid colon, with wall thickness ranging from 12 to 18 mm. Chocolate cysts were noted in both ovaries. There was no evidence of liver focal lesions or ascites. MRI of the pelvis (Figure 3) revealed two separate solid enhancing mass lesions protruding intraluminally in the colon, one at the anterior aspect of the rectosigmoid measuring about 3 × 2 cm and another similar one (15 cm proximal to the previous one) measuring about 4.5 × 0.5 cm. Laparoscopic exploration revealed extensive pelvic adhesions involving the sigmoid colon and the left ovary and tube, with hydrosalpinx (Figure 4). There were two small (1 × 1 cm) black lesions noticed over the adhesions (endometriotic tissue) (Figure 5). Adhesiolysis and resection of the rectosigmoid colon followed by end-to-end anastomosis were done, with cauterization of the area in contact to prevent recurrence.
Histopathology of the specimen showed a segment of large bowel (26 × 3 cm), the wall of which showed 3 foci of mural thickening, the largest focus measuring 5.5 × 2.5 × 1 cm. The serosa showed adhesions. Microscopic examination showed cystic structures lined by endometrial glands and stroma (Figure 6). Several reactive lymph nodes were identified in the pericolic fat, some of which showed endometriosis. No evidence of malignancy was seen. She received GnRH therapy monthly (for 6 months) and was followed up in the surgery and gynecology clinics. Significant improvement of pain was seen 1 month after surgery and GnRH therapy. Six months later, the CA125 level was 47 U/ml.
Discussion and Conclusion
Endometriosis is a common debilitating benign gynecologic condition and occurs in approximately 7-10% of women in the general population [3]. It is an estrogen-dependent disease and, therefore, usually affects reproductive-aged women. Among women with intestinal endometriosis, the rectum and sigmoid colon are the most commonly involved areas (75-90%) [4]. Other parts of the bowel less commonly affected are the distal ileum (2-16%) and the appendix (3-18%). Classically, the hallmark symptom of colonic endometriosis is rectal bleeding during menstruation; however, intussusception, hemorrhage, perforation, and small bowel or colonic obstruction have also been reported [5]. There are many theories explaining the pathophysiology of this disease, but the most widely accepted is a retrograde flow of endometrial tissue through the fallopian tubes into the peritoneal cavity and then to the intestinal wall, where external implantation occurs. Diagnosis of intestinal endometriosis is difficult, as the implanted lesion rarely reaches the colonic mucosa [6]. Given this, it is important to consider it in the differential diagnosis of such presentations even if no overt colonoscopic pathology is found.
The goals of treatment are to relieve the symptoms (pain and problems with defecation and urination), decrease the recurrence rate, and enhance fertility. Medical therapy should be given after discussion with a gynecologist and includes NSAIDs or GnRH analogs, oral contraceptive pills, aromatase inhibitors, and progestins. Surgical intervention is indicated when the symptoms of endometriosis are severe, when there has been an inadequate response to medical treatment, or when there is anatomic distortion of the pelvic organs or obstruction of the bowel or urinary tract.
If surgery is indicated in colonic endometriosis, GnRH agonists should not be used before surgery to reduce the extent of peritoneal (superficial implant) disease [7]. By reducing the size and number of the implants, preoperative treatment makes the surgery more difficult, as the surgeon will not find the actual size and number of the deposits, thereby affecting the surgical outcome. In contrast, use of a GnRH agonist immediately following surgery reduces the rate of symptom recurrence and increases the length of time before symptoms recur [8]. Laparotomy versus laparoscopic intervention is also a controversial issue in colonic endometriosis; the two seem to be equally effective in the treatment of infertility and chronic pelvic pain associated with severe endometriosis. Laparoscopic surgery has advantages in terms of hospital stay, early mobilization, and wound infection, but such cases require an experienced surgeon.
Figure 3: MRI showing the lesion on the anterior wall of the rectosigmoid. | 2019-04-12T13:29:46.247Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "07d297b244d886489bf26428a46ca8beab92586b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2471-8556.1000150",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6635e42f544d1e44bfb8312f99a00c54a271d95e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265353071 | pes2o/s2orc | v3-fos-license | Gains in the current understanding of managing neovascular AMD with brolucizumab
Background Unresolved retinal fluid and high injection burden are major challenges for patients with neovascular age-related macular degeneration. Brolucizumab addresses these challenges by providing robust vision gains and superior fluid resolution, with the potential for longer treatment intervals. Brolucizumab has been associated with adverse events of retinal vasculitis and retinal vascular occlusion typically in the presence of intraocular inflammation (IOI). To define the incidence of the adverse events, Novartis convened an external safety review committee, which found a rate of 4.6% for definite or probable IOI, 3.3% for retinal vasculitis, and 2.1% for retinal vascular occlusion in the HAWK and HARRIER trials. Novartis also established a coalition to explore 4 areas regarding the adverse events: root cause, patient characterization, event mitigation and vigilance, and treatment protocols for the adverse events. Based on the coalition findings, a risk mitigation framework was developed. Prior to initiating treatment with brolucizumab, it is important to weigh the potential benefit against risk of adverse events and to consider patient risk factors such as prior history of IOI and/or retinal vascular occlusion. To mitigate the potential for IOI-related adverse events, it is important to conduct a thorough dilated eye examination before each injection and closely monitor patients throughout treatment. Patients should be educated on symptoms of IOI to monitor for. Brolucizumab should not be injected in the presence of active IOI. If an adverse event is identified, prompt and intensive treatment should be considered. Conclusion Progress has been made in understanding how to mitigate IOI-related adverse events following treatment with brolucizumab.
Despite the benefit of anti-vascular endothelial growth factor (anti-VEGF) therapy, significant unmet needs exist in the management of patients with neovascular age-related macular degeneration (nAMD). Patients with nAMD may have unresolved fluid, even with monthly injections [1]. In a retrospective analysis of electronic health records from the US Retina database, more than 50% of eyes were found to have residual retinal fluid after 2 years [2]. High injection burden and treatment adherence are also major challenges for patients [3][4][5]. An analysis of real-world data from the IRIS Registry showed that, at the end of year 1, almost 40% of patients were receiving intravitreal injections more frequently than every 8 weeks [4]. A systematic review identified nonadherence as a prevalent problem, with up to 57% nonadherence at 1 year [5]. Undertreatment due to nonadherence may lead to long-term vision loss [6].
In 2019, brolucizumab received US Food and Drug Administration (FDA) approval for treatment of nAMD, and was subsequently approved in more than 40 countries [7]. Brolucizumab offers an important treatment option for patients with nAMD. The HAWK and HARRIER trials demonstrated that brolucizumab (q8 or q12 weeks) was noninferior to aflibercept (q8 weeks) in visual acuity gains at week 48 [8], and these gains were maintained out to week 96 [9]. Brolucizumab also showed greater reduction in central subfield thickness than was seen with aflibercept [8,9], and rates of intraretinal fluid and subretinal fluid presence were lower in brolucizumab-treated eyes than in aflibercept-treated eyes [8,9]. Furthermore, approximately half of patients who were treated with brolucizumab were maintained on 12-week dosing intervals after the initial loading dose, through week 48 [8].
Brolucizumab demonstrated an overall favorable benefit-risk profile in the HAWK and HARRIER trials. At 96 weeks, pooled data showed intraocular inflammation (IOI) in 4.5% of eyes and retinal artery occlusion in 0.9% of eyes treated with brolucizumab 3 mg or 6 mg, compared with 0.8% and 0.1%, respectively, of eyes treated with aflibercept 2 mg [9,10]. Despite these events, visual acuity outcomes were comparable between brolucizumab and aflibercept in both trials [9].
After the FDA approval of brolucizumab [11], postmarketing reports led to an emerging safety signal for adverse events (AEs) of retinal vasculitis and retinal occlusive vasculitis [12][13][14]. To further define the incidence of these AEs and the risk of vision loss, Novartis convened an external safety review committee (SRC) composed of global retina and uveitis specialists, imaging experts, and ophthalmology experts from 2 separate external data monitoring committees [15]. The SRC conducted an independent, unmasked, post hoc review of the investigator-reported cases of IOI, retinal artery occlusion, and endophthalmitis from the HAWK and HARRIER trials [12,15]. The SRC reviewed patient images and determined whether cases were likely to be drug related and within the spectrum of IOI, retinal vasculitis, and/or retinal vascular occlusion, regardless of the Medical Dictionary for Regulatory Activities (MedDRA) terminology used in the trials. The SRC found a rate of definite or probable IOI of 4.6% in the HAWK and HARRIER trials, which was similar to the incidence of IOI (4.5%) reported by the study investigators [12,15]; a rate of retinal vasculitis of 3.3%; and a rate of retinal vascular occlusion of 2.1%, which was higher than that reported by the investigators [12,15]. The rates of retinal vasculitis and retinal vascular occlusion may have been higher for the SRC because it was conducting an extensive and thorough review of the cases, with definitions of the events and outcomes proposed a priori and evolving during the review based on the observations made by the SRC [15]. The incidence of at least moderate vision loss associated with IOI was < 1% in each of the brolucizumab and aflibercept groups. In addition, the overall incidence of moderate or severe vision loss (including that associated with definite or probable IOI, retinal vasculitis, and/or retinal occlusion) was similar for brolucizumab- and aflibercept-treated eyes (7.4% and 7.7%, respectively) [15].
In addition to the SRC, Novartis established a coalition composed of a fully dedicated internal team of 150 Novartis associates, who worked with more than 40 external medical experts from leading universities, hospitals, medical centers, and clinics around the world to explore 4 key areas regarding the AEs: the root cause of the AEs, patient characterization, event mitigation and vigilance, and treatment protocols for the AEs [10,12]. Findings from the coalition workstreams have contributed to a better understanding of the AEs and helped provide physicians with the information they need to make informed treatment decisions at each step of the patient journey. Based on the coalition findings, a risk mitigation framework was developed.
Patient selection
Patients who have persistent retinal fluid and are showing deterioration in vision because of uncontrolled disease with other therapies should be considered as possible candidates for treatment with brolucizumab. Before treatment is initiated, it is important to weigh the potential benefits of brolucizumab against the risks of retinal vasculitis, retinal vascular occlusion, and vision loss. Prior history of IOI, retinal vasculitis, and/or retinal vascular occlusion in the previous 12 months has been identified as an important potential risk factor for IOI-related AEs following brolucizumab [16,17]. Female sex has also been identified as a weaker potential risk factor [16,17]. Patients with active IOI, retinal vasculitis, and/or retinal vascular occlusion should not be injected with brolucizumab [11,18]. Physicians should discuss with the patient the potential benefits and risks of brolucizumab, so the patient understands the potential benefit of better disease control and, at the same time, the risk of retinal vascular occlusion and vision loss with brolucizumab.
Event mitigation and vigilance
A thorough dilated eye examination should be conducted before each brolucizumab injection, and patients should be closely monitored throughout treatment [19][20][21]. The examination should include visualization of the anterior chamber, vitreous, and retina. Patients with active IOI, retinal vasculitis, and/or retinal vascular occlusion should not be treated with brolucizumab [11,18]. In the HAWK and HARRIER studies, approximately three quarters of the IOI-related AEs occurred in the first 6 months of treatment [15]. Patients should be educated on symptoms to monitor, including changes in visual acuity, eye pain, floaters, discomfort, or ocular hyperemia, to help with early identification of any AEs [11,20].
Treatment of the adverse events of interest
In the event of IOI, retinal vasculitis, or retinal vascular occlusion, prompt and intensive treatment should be considered [7,19], applying standard-of-care guidelines. Intensive treatment may include multimodal topical, systemic, and intravitreal steroids, depending on the presentation of the inflammation [7]. Treatment with brolucizumab should be discontinued following IOI, including retinal vasculitis and/or retinal vascular occlusion [18]. In a post hoc analysis of HAWK and HARRIER, Singer et al. found that most events of IOI were managed conservatively, and recommended vigilance and prompt treatment [21]. Real-world evidence from independent publications has shown that it is possible for IOI-related AEs following brolucizumab to be managed with intensive treatment, with reversal of reduced visual acuity possible [22][23][24][25].
Potential mechanistic drivers of adverse events following brolucizumab injection
A thorough root-cause analysis was performed to identify, characterize, and prioritize potential mechanistic drivers of the AEs of interest following treatment with brolucizumab. The parameters studied included but were not limited to manufacturing, pharmacology, antidrug antibodies, neutralizing antibodies, and other immune-mediated mechanisms [17]. Immunogenicity occurs when there is an immune response against a therapeutic protein, leading to production of antidrug antibodies [26]. Consequences of immunogenicity can include lack of evidence of clinical effect, loss of efficacy, or serious AEs [26]. Findings have shown that immunogenicity against brolucizumab appears to be necessary for developing retinal vasculitis and/or retinal vascular occlusion with brolucizumab. However, unknown factors must also play a role, given that many patients with antidrug antibodies do not develop retinal vasculitis or retinal vascular occlusion following treatment with brolucizumab [27,28].
Case study 1: brolucizumab used to increase nAMD treatment durability
The following case study provides an example of the use of brolucizumab to increase treatment durability. A 70-year-old woman, who was diagnosed with nAMD in 2013, required intravitreal injections every 4 to 5 weeks. Treatment history included 15 injections of bevacizumab, 8 injections of ranibizumab, and 37 injections of aflibercept. She received 8 injections of aflibercept in 2019. On August 5, 2019, 6 weeks after an aflibercept injection, the patient showed disease activity, with subretinal fluid on optical coherence tomography (OCT) and best corrected visual acuity (BCVA) of 20/30 (Fig. 1A). Because there was disease activity, the treatment interval was decreased to 5 weeks. On October 16, 2019, 5 weeks after the aflibercept injection, disease activity was deemed well controlled per OCT, with BCVA of 20/30 (Fig. 1B); however, there was a question of whether the patient wanted to try brolucizumab as a more durable treatment option. At the time, the community lacked awareness of the AEs of retinal vasculitis and retinal vascular occlusion with brolucizumab. At that appointment, the patient decided to start treatment with brolucizumab and was monitored monthly until week 8 and then every 2 weeks. The patient reached 14 weeks (January 27, 2020) with good disease activity control, no fluid, and stable visual acuity following a single brolucizumab injection (Fig. 1C), thereby more than doubling the treatment interval with brolucizumab compared with aflibercept. Once the risk of retinal vasculitis and retinal vascular occlusion became known, the patient was informed of the risk and asked if she wanted to continue with brolucizumab. The patient chose to continue because of the durability benefits; she was then educated on the signs and symptoms of IOI to monitor for (changes in visual acuity, eye pain, floaters, discomfort, and ocular hyperemia). Following the second injection of brolucizumab, the patient returned to the clinic after 15 weeks (May 19, 2020), which was a longer interval than recommended, and had recurrent disease activity, with decreased BCVA of 20/50 (Fig. 1D). The patient decided to continue brolucizumab, and the clinic ensured that she returned in a timely fashion. Thirteen weeks after the third brolucizumab injection (August 28, 2020), there was trace subretinal fluid on OCT, with BCVA of 20/40+1 (Fig. 1E). Ten weeks after the fourth injection (November 6, 2020), the patient had no disease activity and BCVA of 20/40−1 (Fig. 1F). The patient continued to have good disease control and good visual acuity with brolucizumab administered every 10 to 12 weeks (January 22, 2021, BCVA: 20/30−2; April 2, 2021, BCVA 20/30+1). Every time this patient comes to the clinic, the eye is dilated and examined for inflammation so that any AEs can be promptly treated. A discussion about the risks and benefits of brolucizumab occurs at every visit. The patient reports benefit from coming to the clinic every 3 months instead of every 4 to 5 weeks.
Case study 2: treatment of retinal vasculitis following brolucizumab injection
The following case study describes retinal vasculitis and its treatment following an injection of brolucizumab. An 88-year-old Caucasian woman presented to an urgent care eye clinic on October 6, 2021, with sudden onset of severe pain and decreased vision 3 weeks after the first intravitreal injection of brolucizumab OS. The patient had a history of nAMD OS and non-nAMD OD. The patient's treatment history included photodynamic therapy and repeated intravitreal injections of ranibizumab and aflibercept OS; however, the nAMD was not under control. Brolucizumab was employed with the goal of achieving superior control of disease activity. Three weeks before presentation, BCVA was 20/50 OD and 20/500 OS. In both eyes, the cornea and anterior vitreous were clear, the anterior chamber was deep and quiet, and the intraocular pressure was normal (11 mm Hg). Three weeks after injection with brolucizumab OS (at the time of the urgent visit to the clinic), BCVA OS dropped to counting fingers. Keratic precipitates were visible in the cornea. The anterior chamber showed 3+ cells and 2+ flare, and the anterior vitreous showed 3+ cells and 3+ haze. Intraocular pressure remained at 11 mm Hg.
Fundus examination showed vitreous haze, optic nerve hyperemia, and retinal vessel sheathing (Fig. 2A). Fluorescein angiography showed optic nerve leakage and perivascular leakage of the vessels in the posterior pole, as well as in the peripheral retina (Fig. 2B, C). OCT examination showed that the foveal contour was semi-preserved, but there was subretinal fluid. No inflammatory findings or vascular leakage were found in the right eye. Based on the clinical findings and the timing of the AEs, the patient was diagnosed with brolucizumab-induced panuveitis with nonocclusive vasculitis in the left eye. Given her age and social circumstances, the patient was admitted to the hospital, and treatment with intravenous methylprednisolone infusions (750 mg per day for 3 days) was initiated. The decision to use systemic rather than intravitreal steroids at the acute stage was made to avoid worsening of the disease in case the etiology was infectious. In addition, because of the anterior segment inflammation, the patient was started on prednisolone acetate qid. Timolol bid was employed to control the intraocular pressure; a dilating drop (atropine bid) was also employed. One day after the first methylprednisolone infusion, the patient reported symptomatic improvement, with no pain and improved blurriness; however, ocular examination findings remained the same. Three days after the first infusion, visual acuity remained at counting fingers, but the keratic precipitates, anterior chamber cells and flare (1+ cells, 1+ flare), and anterior vitreous cells and haze (2+ cells, 2+ haze) had improved. The patient noted visual improvement. However, she developed a psychotic AE (thoughts of jumping from the window), which was thought to be secondary to the steroids. Therefore, even though the methylprednisolone infusions were stopped, the patient was not started on oral systemic steroid therapy; instead, an intravitreal dexamethasone implant was provided. At that time, the inflammation was thought not to be of infectious etiology. Ten days after the dexamethasone implant, visual acuity remained at counting fingers, the corneal keratic precipitates had decreased, the anterior chamber was deep and quiet, and the anterior vitreous had improved to 0.5+ cells and 1+ haze. Although vision did not recover, aggressive treatment was nevertheless important to prevent further vision loss.
This case is consistent with the finding that IOI with brolucizumab can occur after the first or any subsequent injection [15,20,24,25]. It can manifest as anterior, posterior, or panuveitis, ranging from anterior cellular reaction to optic nerve inflammation or to retinal vasculitis [29,30]. The onset of the AE can range from the day after to a month or more after injection [15,20]. Furthermore, the inflammation can occur quite suddenly [14]. We believe it is necessary to manage the patient carefully and aggressively and to rule out other infectious and noninfectious causes [7,13]. For example, in this patient we needed to rule out giant cell arteritis, given her age. Once other etiologies are ruled out, aggressive therapy is needed to manage the inflammation. Both systemic and intravitreal steroids should be considered. Local therapy should also be considered when the time is appropriate. Immunomodulatory therapy may also be indicated. Eye care professionals should consider prompt and intensive treatment of IOI, applying standard-of-care treatment guidelines. In cases of IOI, including retinal vasculitis and/or retinal vascular occlusion, brolucizumab should be discontinued [18]. In addition, it must be remembered that brolucizumab is contraindicated in eyes with active IOI [11,18].
Conclusions
A large unmet need still exists for patients with nAMD, with patients experiencing unresolved fluid, high injection burden, and drop-off in adherence [2,4,5]. Brolucizumab addresses these unmet needs by providing robust vision gains and superior fluid resolution, with the potential for longer treatment intervals [8,9]. Through the coalition findings and other work, Novartis has made significant progress in better understanding IOI-related AEs associated with brolucizumab [7, 15-17, 20, 21, 31]. The risks of AEs and vision loss may be mitigated at clear steps along the patient journey. When patients are selected for treatment with brolucizumab, it is important to weigh the potential benefit against the risk of AEs and to consider patient risk factors such as prior history of IOI and/or retinal vascular occlusion and female sex [16,17]. To mitigate the potential for IOI-related AEs following brolucizumab injection, it is important to conduct a thorough dilated eye examination before each injection and closely monitor patients throughout treatment [19,20]. Brolucizumab should not be injected in the presence of active IOI [11,18]. It has been shown that most IOI events occur in the first 6 months of treatment [15]. Patients should be educated on which symptoms to monitor for [11,20]. If an AE is identified, prompt and intensive treatment should be considered [7,19,21], and brolucizumab should be discontinued in cases of IOI, including retinal vasculitis and/or retinal vascular occlusion [18].
Fig. 1
Fig. 1 OCT B-scans of a patient switched to brolucizumab to increase the treatment interval duration. A On August 5, 2019, 6 weeks after an aflibercept injection, subretinal fluid was apparent. B On October 16, 2019, 5 weeks after the next aflibercept injection, disease activity was controlled. C On January 27, 2020, 14 weeks after the first brolucizumab injection, the patient showed good disease control. D On May 19, 2020, 15 weeks after the second brolucizumab injection (a longer treatment interval than recommended), disease activity had returned. E On August 28, 2020, 13 weeks after the third brolucizumab injection, there was trace subretinal fluid. F On November 6, 2020, 10 weeks after the fourth brolucizumab injection, there was no disease activity. OCT, optical coherence tomography
Fig. 2
Fig. 2 Panuveitis with nonocclusive vasculitis in the left eye following injection with brolucizumab. A Fundus examination showed vitreous haze, hyperemia of the optic nerve, and sheathing around some of the retinal vessels. B, C Fluorescein angiography showed optic nerve leakage and perivascular leakage in the posterior pole and peripheral retina | 2023-11-23T14:52:13.348Z | 2023-11-23T00:00:00.000 | {
"year": 2023,
"sha1": "92f7094087fa2265ca17ab11b2100e99e25ac936",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "92f7094087fa2265ca17ab11b2100e99e25ac936",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252681273 | pes2o/s2orc | v3-fos-license | Risk perception, community myth, and practices towards COVID-19 pandemic in Southeast Ethiopia: Community-based cross-sectional study
Objective: The objective of this study was to assess risk perception, community myths, and preventive practice towards COVID-19 among communities in Southeast Ethiopia, 2020. Methods: A community-based cross-sectional study was conducted among 854 participants selected using a multistage sampling technique. Data were collected using a structured questionnaire adapted from previous literature. Descriptive statistics were used to summarize the variables. A generalized linear model with binary logistic specification was used to identify factors associated with risk perception and practice. Adjusted odds ratios with 95% confidence intervals were calculated, and variables with a p-value < 0.05 were considered significant factors associated with risk perception and practice. Cluster analysis using a linear mixed model was performed to identify factors associated with community myth, and those with a p-value < 0.05 were reported as significant factors associated with community myth. Results: All 854 respondents answered, yielding a 100% response rate. Of these, 547 (64.1%) were male, 611 (71.5%) were rural residents, 534 (62.5%) got information about COVID-19 from TV/radio, 591 (69.2%) lived near a health facility, 265 (30.8%) had a history of substance use, and 100 (11.7%) had a history of chronic illness; 415 (48.6%) had high-risk perception, 428 (50.1%) held a wrong myth about COVID-19, and 366 (42.9%) had poor practice. Residence, distance from the health facility, and myths were significantly associated with risk perception. Occupation, knowledge, and practice were significantly associated with community myths. Level of education, living near a health facility, good knowledge, and wrong myth were significantly associated with the practice of utilizing COVID-19 preventive measures. Conclusion: The study found high-risk perception, widespread wrong community myths, and relatively low utilization of available preventive practices towards COVID-19, and identified factors associated with them.
Introduction

COVID-19 initially emerged in December 2019, and it was later declared a pandemic and a Public Health Emergency of International Concern by the WHO [1]. Approximately 20.0% of COVID-19 patients developed severe symptoms, which included respiratory and bleeding disorders [2]. The virus can be transmitted through the respiratory tract of patients with signs and symptoms, but it can also be transmitted by asymptomatic individuals before the onset of clinical features [3]. Susceptibility to COVID-19 appears to be associated with sociodemographic characteristics such as low education, age, and low access to information, as well as with underlying comorbidities like diabetes mellitus, cancer, and chronic respiratory illnesses [4].
The contagious COVID-19 virus outbreak needs an urgent response from all stakeholders and communities [5]. Increasing public awareness and working in collaboration with communities has great benefits in curbing this pandemic [6]. People's risk perception of the pandemic affects the utilization of available preventive measures [7]. The true risk from the COVID-19 virus might be low, but it gets media attention and becomes a subject of social media discussion, which might affect risk perception, which in turn may determine communities' behavior in adopting and using pandemic preventive measures [8]. Understanding risk perception among people is crucial for delivering information to communities through correct information channels [5,9]. The few studies conducted previously reveal conflicting findings on the level of perceived risk towards the novel coronavirus. One survey conducted in Italy identified the effect of age on risk perception and recommended delivering correct information about the disease and its prevention mechanisms [10].
Other studies conducted in Iran found moderate risk perception in the community [11,12]. In contrast to the above two, a study conducted in the United States revealed a low level of risk perception [13]. Perceived risk differs across different sociodemographic characteristics, including age, educational level, residence, and access to information [9,11,14]. A study conducted on COVID-19 risk perception in Vietnam also identified the effect of using social media on risk perception towards COVID-19 [5].
Misinformation transmitted through different media also amplifies the effect of this pandemic on people's behavior towards it. This leads to the development of some popular myths like "COVID-19 doesn't exist at all", "it can't affect people in a hot or cold environment", and "COVID-19 was deliberately created by people". These myths have consequences, good and bad, for health [15]. This bulk of information, circulating through multiple channels, influences how people think about the disease and their readiness to adhere to available preventive methods [5]. This mandates that basic information be channeled from trusted sources. Various studies have identified the level of utilization of available COVID-19 pandemic preventive techniques and the variability of using these methods across different sociodemographic and socioeconomic characteristics [11,16,17]. Hence, understanding the level of risk perception towards the COVID-19 pandemic, identifying myths developed in the communities following this pandemic, and assessing their utilization of available preventive measures have crucial importance in reducing COVID-19 transmission. Therefore, this study was conducted to identify risk perception, community myth, and practice towards the COVID-19 pandemic in Southeast Ethiopia.
Study setting, design, and period
A community-based cross-sectional study was conducted from March to June 2020 among 854 adults who were permanent residents of 22 kebeles in two zones of the Oromia region, Southeast Ethiopia.
Inclusion and exclusion criteria
Respondents older than 18 years were included. Individuals who were not permanent residents of the study area, were critically ill, had hearing impairment, or had altered consciousness or cognitive disorders were excluded from the interview.
Sample size determination
The single population proportion formula for sample size determination was used to calculate the required sample size, taking the proportion of participants with high-risk perception against COVID-19 as 50%, at a 95% confidence level, a 5% margin of error, 10% non-response, and a design effect of 2. This gave the final sample size of 854 individuals.
Multiplying by the design effect gave 768. Then a 10% non-response rate was added, and the overall sample size became 854.
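As a worked sketch of the calculation just described (assuming the standard single population proportion formula with Z = 1.96 for 95% confidence, and reading the 10% non-response adjustment as division by 0.9, since multiplying 768 by 1.10 would give approximately 845 rather than 854):

```latex
n_0 = \frac{Z^{2}\,p(1-p)}{d^{2}}
    = \frac{(1.96)^{2}(0.5)(0.5)}{(0.05)^{2}} \approx 384, \qquad
n_{\mathrm{deff}} = 384 \times 2 = 768, \qquad
n_{\mathrm{final}} = \frac{768}{1-0.10} \approx 853.3 \rightarrow 854.
```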
Sampling procedure and data collection
A multistage sampling technique was used in which woredas and administrative towns were selected after grouping. From the selected woredas and administrative towns, 22 kebeles (villages) were selected by using lottery methods. Then 854 participants were randomly included from systematically selected households. Data regarding risk perception, community myth, and preventive practice against COVID-19 were collected using tools adapted from previous studies and WHO recommendations [2,5,13,[18][19][20][21][22][23][24]. Data regarding sociodemographic characteristics and sources of information about COVID-19 were collected using tools adapted from the EDHS and previous articles [9,16,19,25]. The questionnaire was initially prepared in English and then translated into Afan Oromo; translation back to English was done by language experts to check for consistency. A pretest of the questionnaire was done before actual data collection on 5% of the sample size, and the questionnaire was modified based on the pretest results. Data were collected by trained data collectors. Two days of training were given to the data collectors on the objectives and relevance of the study, confidentiality, respondents' rights, informed consent, and actual data collection procedures. Ethical clearance from Madda Walabu University Research and Publication and letters of permission from the selected woredas and administrative towns were obtained. After a brief description of the objectives of the study to every study participant, oral consent was obtained. Then questionnaires were administered face to face by the data collectors; this is the appropriate approach for people with no formal education. During data collection, data collectors gave clarifications for questions misunderstood by respondents. Consistency and completeness of data were checked by the investigators every day. After data collection, filled questionnaires were kept carefully.
Variables measurements
Risk perceptions of respondents were assessed by asking six questions adapted from previously conducted studies [11,26,27]. The total risk perception score was computed by adding the individual responses to these six questions. The median score was then used to categorize the level of risk perception: respondents who scored less than the median were categorized as having low-risk perception, and those who scored equal to or above the median were categorized as having high-risk perception regarding the COVID-19 pandemic.
Myths about the COVID-19 pandemic were measured by asking six questions adapted from previous studies [21,22,28]. The total myth score was calculated by adding the responses to these six questions. The median score was used to classify individuals as holding or not holding a wrong myth: those who scored less than the median were categorized as having no wrong myth, and those who scored equal to or greater than the median were categorized as having a wrong myth.
Regarding utilization of preventive practice towards COVID-19, respondents were asked twelve questions adapted from World Health Organization advice to the public and previous studies [29][30][31][32]. The total practice score was computed by adding the responses to these questions, and the median score was used to categorize the practice of participants: those with a score below the median were categorized as having poor practice, and those with a score greater than or equal to the median were categorized as having good practice. Refer to appendix one for the questionnaire. The data about sociodemographic variables, access to health care, and sources of information were collected by adapting tools from EDHS 2016 and previous studies after some modification.
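A minimal sketch of the median-split scoring used for all three outcome measures, with hypothetical item names and random demo data (the actual questionnaire items differ); the myth score (six items) and practice score (twelve items) follow the same pattern:

```python
import numpy as np
import pandas as pd

# Hypothetical item names; the actual questionnaire wording differs.
risk_items = [f"risk_q{i}" for i in range(1, 7)]  # six risk perception items

def median_split(df, items, high_label, low_label):
    """Sum item responses and dichotomize at the median (>= median = high)."""
    score = df[items].sum(axis=1)
    return np.where(score >= score.median(), high_label, low_label)

# Demo with random 5-point responses for 854 respondents.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(1, 6, (854, 6)), columns=risk_items)
df["risk_level"] = median_split(df, risk_items, "high", "low")
print(df["risk_level"].value_counts())
```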
Data processing and analysis
Data were checked for completeness, entered into EpiData version 3.1, and exported to SPSS version 25 for analysis. Data cleaning was done using frequency distributions and descriptive statistics. The scores for risk perception, community myth, and practice in utilization of COVID-19 preventive measures were computed from the responses to their respective individual questions. Sociodemographic characteristics, access to health care, and sources of information were summarized using frequency distributions. Average values of all outcomes were calculated and reported. The scores of risk perception, community myth, and preventive practice were compared across different sociodemographic characteristics of respondents. A generalized linear model was used to examine factors associated with risk perception and with practice regarding the utilization of available COVID-19 preventive measures. Adjusted odds ratios with 95% confidence intervals were computed, and those with a p-value less than 0.05 were reported as significant factors associated with risk perception and practice. Cluster analysis using a linear mixed model was performed to identify factors associated with community myth; variables with a p-value less than 0.05 in the linear mixed model were reported as significant factors associated with community myth.
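The analyses were run in SPSS 25; purely as an illustration of the two model families described above, here is a sketch in Python with statsmodels, using synthetic data and hypothetical variable names rather than the study's actual dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data; real covariates and coding differ.
rng = np.random.default_rng(0)
n = 854
df = pd.DataFrame({
    "high_risk": rng.integers(0, 2, n),       # 1 = high risk perception
    "rural": rng.integers(0, 2, n),           # 1 = rural residence
    "near_facility": rng.integers(0, 2, n),   # 1 = lives near a health facility
    "wrong_myth": rng.integers(0, 2, n),      # 1 = holds a wrong myth
    "myth_score": rng.normal(18, 4, n),       # summed myth score
    "good_knowledge": rng.integers(0, 2, n),
    "kebele": rng.integers(0, 22, n),         # cluster identifier (22 kebeles)
})

# Generalized linear model with binary logistic specification.
risk = smf.glm(
    "high_risk ~ rural + near_facility + wrong_myth",
    data=df, family=sm.families.Binomial(),
).fit()
print(np.exp(risk.params))      # adjusted odds ratios
print(np.exp(risk.conf_int()))  # 95% confidence intervals

# Linear mixed model for the myth score, clustered by kebele.
myth = smf.mixedlm(
    "myth_score ~ good_knowledge + near_facility",
    data=df, groups=df["kebele"],
).fit()
print(myth.summary())
```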
Sociodemographic characteristics of respondents
As shown in Table 1 below, 854 respondents participated in this study, yielding a 100% response rate. Of the total 854 respondents, 547 (64.1%) were male, 845 (98.9%) were of Oromo ethnicity, and 611 (71.5%) were rural residents. Regarding occupational status, 499 (58.4%) were farmers, 660 (77.3%) had the role of father/mother, and the majority, 335 (39.2%), had attended primary-level education. Concerning marital status, 645 (75.5%) were married, with a median monthly income of 1675.27 ETB. Around two-thirds of the respondents, 534 (62.5%), got information about COVID-19 from TV/radio; 591 (69.2%) lived near a health facility; 265 (30.8%) had a history of substance use, mostly khat (228, 26.7%); and 100 (11.7%) had a history of chronic illness. Finally, more than two-thirds of the total participants, 604 (70.7%), lived in their own house.
Distribution of risk perception, community myth and preventive practices towards COVID-19 in communities, 2020 (n = 854)
Risk perception towards COVID-19 was computed from six questions. The median score was 19, with scores ranging from 6 to 30. Those with a risk perception score equal to or greater than the median were classified as having high-risk perception. Accordingly, around half of the study participants (415, 48.6%) had high-risk perceptions. Risk perception was similar across gender and residence but comparably higher among those who lived near health facilities, non-governmental workers, and those with a history of chronic illness.
Community myth was assessed by asking six questions. Based on this, 428 (50.1%) of the study participants held a wrong myth about the COVID-19 pandemic. The median community myth score was higher among females and urban residents. Across occupation types, it was highest among NGO workers and lowest among farmers. Lower community myth scores were observed among the less educated and those with a history of substance use. Scores were also relatively higher among those with a history of chronic illness.
The scores for practice towards utilization of COVID-19 preventive measures were computed from 12 questions, and the median score was used to categorize participants. Of all participants, 366 (42.9%) had low utilization of the stated preventive practices.
There was no gender difference in the use of practices to prevent COVID-19. Use of these preventive practices was higher among urban residents and those near health facilities, but relatively lower among farmers, those without formal education, and substance users. See Table 2 below.
Factors associated with risk perception, community myth and practices towards COVID-19 in communities, 2020 (n = 854)

1. Factors associated with risk perception towards the COVID-19 pandemic.

Seven variables were found to be eligible for multivariable generalized linear model analysis based on the results from the bivariate output. These are residence, level of education, distance from the health facility, history of chronic illness, knowledge about COVID-19, myth in the community, and status of utilizing COVID-19 preventive practice.
In the final multivariable generalized linear model, three variables were found to be significantly associated with risk perception towards the COVID-19 pandemic: rural residence, living near a health facility, and holding a wrong myth. See Table 3 below.
2. Factors associated with community myth regarding the COVID-19 pandemic.
Cluster analysis using a linear mixed model was used to identify factors associated with community myth. Seven variables were selected for the final cluster analysis based on the bivariate analysis results. These were gender, occupation, residence, distance from the health facility, knowledge regarding COVID-19, level of risk perception, and status of practice regarding utilization of COVID-19 preventive techniques.
In the final multivariable model, three variables were found to be significantly associated with community myth: being an NGO employee, knowledge regarding COVID-19, and status of utilization of COVID-19 preventive techniques. Being an NGO employee was positively related to community myth, while poor knowledge regarding COVID-19 and poor utilization of available COVID-19 preventive techniques were negatively associated with the average community myth score, after controlling for the effects of the other variables in the model. See Table 4.
3. Factors associated with practice towards utilization of COVID-19 preventive measures.
A generalized linear model was used to identify factors associated with practice towards utilization of COVID-19 preventive measures. Nine variables were found to be eligible for the multivariable generalized linear model analysis based on the results of the bivariate analysis.
These are gender, age, education, distance from the health facility, substance use, knowledge about COVID-19, underlying myth, the existing level of risk perception, and monthly income.
Discussion
Epidemics and pandemics are unexpected periodic phenomena. They can happen at any time.
People face several challenges during such conditions. The effects and impacts of pandemics are multiple: they can affect every aspect of life physically, mentally, and emotionally. Hence, in this study, we have investigated risk perception, community myth, and practices towards the COVID-19 pandemic and the factors associated with them. Risk perception was assessed by giving due attention to emotional and knowledge aspects. This study found that around half (415, 48.6%) of the population had high-risk perceptions. This finding was the same as the finding from one study conducted in Iran [33], but is higher than the finding from the study conducted in Germany [34] and lower than the findings from the studies conducted in China and Ghana [35,36]. The disagreement between the current study and the studies mentioned could be due to differences in sociodemographic factors like age, residence, and educational level; it might also be due to differences in access to information. Also, there is a time difference between when these studies were conducted: the abovementioned studies were conducted in the early phase of the pandemic. These results indicate that the level of risk perception in different communities around the world can differ. This study also found that rural residence, living near a health facility, and holding a wrong myth were significantly and positively associated with risk perception towards the COVID-19 pandemic. This finding conflicts with the study conducted in Jordan, in which urban residence was positively associated with risk perception [37], and it was also not consistent with the other study conducted in China, where residence was not significantly associated with risk perception [35,36]. But this finding was in line with the study conducted in Iran [38].
The reason for the difference could be the difference in the characteristics of the study participants and the speed at which information reaches these populations, depending on the extent of social media use in these different places.
This study also found the prevalence of wrong myths to be 50.1%. This was higher than the finding from a hospital-based study conducted in Northwest Ethiopia [39]. The reason for the disagreement could be the difference in study setting: the current study was community-based, while the study from Northwest Ethiopia was conducted in a health facility, which could account for the difference in findings. Other possible reasons could be the time during which these studies were conducted and the difference in study participants. The study from Northwest Ethiopia was conducted among patients with chronic illness who possibly had regular follow-up in the selected hospital and had a chance to get the right information from health professionals. This study also identified factors significantly associated with community myths. Accordingly, community myth was significantly associated with occupation, knowledge regarding COVID-19, and level of practice regarding utilization of COVID-19 preventive measures. This finding was in line with the finding from the study conducted in South Africa [28], in which knowledge regarding COVID-19 was significantly associated with community myth. But the finding from the current study was not in agreement with the finding from the study conducted in China, in which those who had good practice had low myth towards COVID-19 [22]. The reason for the discrepancy could be the difference in sociodemographic characteristics of respondents, the difference in access to information, and the value these communities give to tradition and rumors.
Also, we found the proportion of people with good practice regarding COVID-19 to be 57.1%. This finding was in line with another study conducted in Ethiopia [39], where the proportion of poor practice was 47.3%. It was lower than the findings from studies conducted in Nepal and China [22,24]. The reason for the disagreement could be the difference in sociodemographic characteristics of the selected study participants, as well as differences in receiving correct information and in access to social media. A higher level of education, living near a health facility, good knowledge about the disease, and holding a wrong myth regarding COVID-19 were associated with good practice. This finding was in agreement with other studies conducted in China, Pakistan, and Malaysia [21,22,39,40]. But it was different from what was reported in the study conducted in Sudan [41]. The reason for the difference between these studies and the current study could be the difference in study setting: the current study was conducted in the community, while the one from Sudan was an online survey, which could result in different study participants. This study identified important findings, but it has limitations. Being cross-sectional, this study could not identify the direction of association, that is, whether factors or outcomes came first. It also relied on self-reports from respondents, which could affect the findings.
Conclusion
This study is an important step towards a better understanding of risk perception, community myth, and practices regarding the COVID-19 pandemic and associated factors. Accordingly, relatively high-risk perception, wrong community myths, and poor practice regarding utilization of COVID-19 preventive techniques were reported. Different factors associated with risk perception, community myth, and practices were identified. These findings could be important input for modeling interventional activities in the community. | 2022-10-04T06:17:52.456Z | 2022-10-03T00:00:00.000 | {
"year": 2022,
"sha1": "a8014db6793473bbc20fe9a37ace796719f669d3",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c44c7dfc2d1d2ea666fa5d8e69f8a1ca97bd2ba6",
"s2fieldsofstudy": [
"Medicine",
"Sociology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118497297 | pes2o/s2orc | v3-fos-license | The Non-linear Dynamics of Sociological Reflections
Actors are embedded in networks of communication: the relations of the actors can be represented as the rows of a matrix, while the column vectors represent their communications. The two systems are structurally coupled in the co-variation: each action can be considered as a communication with reference to the network. Co-variation among systems, if repeated over time, may lead to co-evolution. Conditions for stabilization of higher-order systems are specifiable: segmentation, stratification, reflection, differentiation, and self-organization can be distinguished in terms of developmental stages of increasingly complex networks. The sociological theory of communication occupies a central position for the clarification of the possibility of a general theory of communication, since it confronts us with the limits of reflexivity in human understanding and reflexive discourse. The implications for modelling the relations among incommensurable discourses (e.g., paradigms) are elaborated.
The position of a communication is latent for the actors involved (Lazarsfeld & Henry 1968), but the human actor can reflect on each uncertainty, and provide it with a meaning. Meaning requires reflection on the position of information in a system (MacKay 1969). In general, while the relational uncertainty has a position at the interface between two systems, meaning can only be specified with reference to a system for which the incoming information makes a difference over time as a third degree of freedom. [1] Information can be attributed different meanings by different systems. Some systems (e.g., human beings) are able to reflect on various possible meanings, and to make choices among alternatives with hindsight. This operation requires an additional (i.e., fourth) degree of freedom. Thus, meaning is itself reflexive (with reference to the dynamics in the first and the second dimension), and it can be made the subject of reflection if it can again be communicated as an uncertainty.
Self-organization can be defined as the ability of a system to select among communicated meanings with reference to the system's identity. While a reflexive meaning can be provisionally stabilized, a further selection among various possible meanings potentially globalizes the system. Thus, the identity of a self-organizing system can be considered as a distributed and changing regime of representations (Hinton et al. 1986). One can observe this system in terms of instantiations (Giddens 1979), by taking a position (Haraway 1988) or by using a geometrical metaphor (Shinn 1987). A system which is able to operate in four dimensions of uncertainty, is no longer expected to exhibit observable stability over time.
Selection
As noted, a communication network is a piecemeal construct. However, the number of possible links in a network increases with the square of the number of nodes, so that an evolutionarily emerging network is by definition selective with respect to the range of its possible shapings. [2] By repetitive operation, one expects certain linkages to be intensified more than others for stochastic reasons (cf. Arthur 1988). The emerging architecture can be considered the network's structure. [3] While relations are aggregative and hierarchical, the constructed network can be decomposed with hindsight in terms of its structural components, e.g. in terms of network densities. The decomposition, however, may follow another logic than the network's aggregative composition, since the aggregation has introduced a grouping variable as a second degree of freedom. The grouping that prevails is latent for the actors involved in constructing the network, since each new action may change the grouping rule over time.
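As a brief arithmetic gloss on the quadratic growth claimed above (assuming undirected links without self-loops):

```latex
L(n) = \binom{n}{2} = \frac{n(n-1)}{2} = O(n^{2}),
\qquad L(10) = 45, \quad L(100) = 4950.
```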
Stabilization
The relations of the actors can be represented as row-vectors, and the network as a summation of these vectors, i.e., as a two-dimensional probability distribution or a matrix (Σ p_ij). In each cycle, the potentially co-varying relations among the actors add up to communications over the columns.
The time dimension adds a third axis to this two-dimensional representation: matrices at different moments in time add up to a cube (Σ p_ijk). If one rotates this cube ninety degrees, one can analyze structure in the time dimension, analogously to structure in the matrices at each moment in time. In other words, a communication system contains two structures if it communicates information --- i.e., is contingent --- in the time dimension.
The first structure positions the information in the relations on a second dimension of the probability distribution, and the result of this operation can be reflected on the third dimension. Co-occurrences of co-variations can be analyzed as the system's history.
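A minimal sketch of the representation just described, with made-up dimensions (five actors, four moments in time), showing how matrices at different moments add up to a cube and how "rotating" the cube exchanges the roles of the column and time dimensions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_actors, n_moments = 5, 4

# One relation matrix per moment: rows are actors, columns their communications.
matrices = [rng.random((n_actors, n_actors)) for _ in range(n_moments)]

# Stacking the matrices along a time axis yields the cube (p_ijk).
cube = np.stack(matrices, axis=2)   # shape (i, j, k) = (5, 5, 4)

# Normalize to a three-dimensional probability distribution.
cube = cube / cube.sum()

# "Rotating the cube ninety degrees": transpose so that time becomes a
# matrix dimension, allowing structure to be analyzed in the time dimension.
rotated = cube.transpose(0, 2, 1)   # shape (i, k, j) = (5, 4, 5)
print(cube.shape, rotated.shape)
```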
Figure 1
An observable trajectory of a (potentially complex) system in three dimensions

If the two structures can operate as selections on each other, this may lead to stabilization, e.g., into an observable trajectory (see Figure One; cf. Dosi 1982). Thus, stabilization is the result of a second-order selection: which selections are selected for stabilization? Although the two selections are formally equivalent, their orthogonality implies that their operation is substantively independent. I shall argue below that this substantive difference leads to a different semantics in the theoretical understanding.
Segmentation, stratification and differentiation
If one represents communicating actors as the row vectors (Σ p_i) of a matrix, communication finds its origin in the co-occurrence of these vectors along the column dimension (j). If originally the rows are just stacked upon one another, the result is a segmented communication system. If the communications are ranked in the vertical dimension, one gets a stratified communication system, and grouping of the rows leads to a differentiated communication system. As noted, grouping implies the addition of another dimension to the probability distribution, i.e., a grouping variable k. Ranking is the special case in which the grouping variable only counts the rank. For example, in a stratified social system a person is allowed to say something if it is his or her turn.
The development pattern is well-known from biology: once the (segmented) morula grows so large that there is a need for synchronization among cells no longer directly adjacent, the original symmetry is broken, and order is induced at the system's level. The event of a cell-cleavage is asymmetrically communicated to neighbouring cells, and triggers there a further cleavage. At first, this order is only rank-order, i.e., stratification or, in biological terms, 'polarization' (gastrula). The next stage (blastula) can be defined as the phase after which undifferentiated cells have ceased to occur.
As long as the windowing of communicating systems on each other remains direct, there is no evolutionary order. The sequencing at the interface induces order. In a stratified system, the communications are ranked at a single centre of reflection, but not yet grouped. Reflection on the distinction between this centre and the periphery using the second (spatial) dimension of the uncertainty induces differentiation. In a self-organizing system, the different meanings can be reflexively communicated, and therefore the differentiations can be adjusted functionally to the development of the system.
In summary: segmentation requires co-variation in two dimensions; stratification requires stabilization in three dimensions, and consequently a difference between the reflecting instance and the reflected substance; self-organization generalizes the possibility of reflection using a fourth degree of freedom.
Self-organization
In terms of the above spatial metaphor of a cube, one may think of a self-organizing system in terms of alternative cylinders in this cube which the system has available as representations of its identity (Figure Two). Each cylinder leads to a different expectation for the composition in the next round. The operation of the complex system is uncertain in a fourth dimension with reference to its three-dimensional representations. A self-organizing system is expected to select three-dimensional representations which are functional for its further development. In order to maintain identity in a self-organizing system, both the communications and their co-occurrences have to be selected by the system, and to be interpreted self-referentially as information about the current state of the system.
Figure 2
Selection among representations of the past using a fourth degree of freedom

As noted, observable stabilization can be considered as second-order selection. A next-order selection leads to the potentially global regime of a self-organizing system. 'Variation', 'selection', and 'stabilization' at lower levels can then be considered as subdynamics of the complex system, which performs an operation in a 'hyper-cycle', including time and space. Note how the higher-level system rests as a selective feedback on top of the lower-level ones; it controls by repressing the possible selections at the lower level. [4] In the fourth degree of freedom specific resonances between the first-order cybernetics of variation and selection and the second-order cybernetics of variation and stabilization become possible. Simon (1969) has introduced the metaphor of 'locking into resonances' for understanding this evolutionary construction of complex systems. If one of the resonances is entrained through stochastic variation, the functionality of the differentiation spreads, since the remainder of the system consists of groupings that are not yet functional (cf. Smolensky 1986; Kampmann et al. 1994). For example, in functionally differentiated organisms, undifferentiated cells cease to occur. Analogously, in the late Middle Ages, the Investiture Controversy between the Pope and the Emperor led to a differentiation among hierarchies in the stratified organization of society. Among other things, this induced the transition from a stratified high-culture to a functionally differentiated society (Luhmann 1989).
Translations
If the non-differentiated communication is indicated with j (see above), the differentiated medium must be indicated with jk, since a grouping variable k is added. If one functionally differentiated subsystem, which for example communicates in j and k1, communicates with another functionally differentiated subsystem (in j and k2) of the same system, this does not imply de-differentiation and thus communication in only j, but the emergence of communication among j, k1, and k2. De-differentiation among subsystems can occur only locally, when k1 and k2 cancel one another like in patterns of interference. In general, evolutionary integration means an increase in complexity; only local specimens can be found that are not yet differentiated, and therefore are able to carry the next generation.
For example, if a human being wishes to move, he or she needs an interface which not only makes the organs involved (nerves, muscles, bones, etc.) recognize one another as tissue of the same animal (j), but which also structures the communication between, for example, the nervous system (jk1) and the motoric apparatus (jk2). This operational coupling requires the coupling into an interfacing system (e.g., a synapse) which 'knows' how to translate input into output; by structurally doing so, the interfacing system composes a three-dimensional system. Only a three-dimensional communication system (jkl) can contain sufficient complexity to perform translations between differentiated systems.
Analogously, language can be considered as the yet undifferentiated medium of communication in the social system (j). Differentiation attaches a suffix k to all usages of language. Following this differentiation one is no longer able to compare two communications in terms of a single medium of communication, e.g., a common language, without (initially local) reflection on the contextual meaning of the communication. In a stratified social system, social communication can still be integrated, since only the stable center is eventually allowed to reflect on the meaning of a communication. In this case, the differentiation between the function (k) and the meaning (l) of a communication (j) remains repressed. However, as the social system becomes functionally differentiated in its organization, three-dimensional subsystems of communication (jkl) allow the carrying agents to operate in terms of translations of (a priori) input into (a posteriori) output. A differentiated communication system needs reflexive agency among its subsystems, since it would otherwise fall apart.
A translation is formally equivalent to a reflection: one can 'fix' the system as a communication channel and consider it in terms of relations between input and output (Shannon 1948) or deconstruct the same system as a reflector that uses three degrees of freedom. If the communication channel, however, is no longer fixed, it is expected to change, among other things, its reflexive function (if only by wear and tear). Self-organization is an analytical consequence of replacing the assumption about fixed channels that can transmit with more or less noise, with communication systems that themselves may change when disturbing the transmission. The declaration of the additional context of the communication provides us with a dual perspective: one can focus on input/output relations or consider the input as contextual disturbances of an evolving system that provides its environment with an output by exhibiting change (see Figure Three).
Figure 3
A "fixed" communication channel and an evolving communication system
The Duality of Social Communication
Reflexive systems are able to communicate among themselves, since they can bounce information back and forth if they relate to the same medium of communication (Maturana 1978). Self-organizing systems are in principle competent to communicate in two dimensions at the same time, since they have one more degree of freedom for the reconstruction. (One degree of freedom in a translation has to be used by the receiving system for declaring the noise generated by the transmitting system.) Human language is the evolutionary achievement that allows for the communication of information and the meaning of the uncertainty in the same communication.
In other words, information and meaning can be considered as dimensions of human communication. The difference between the information and meaning in human communication has been codified in language: language can hold information, and therefore translation allows us to redirect the information into a next communication, but reflexively. Note that a reflexive communication cannot be unambiguously communicated in the same act as the substantive communication on which it reflects, but only with hindsight. The analytical reason for this duality in social communication is the need to declare one dimension of the communication as noise. Given two channels of a three-dimensional translation (see above), one can either combine the substance of the communication with its function (context) or the substance with its meaning (over time).
Two messages in three dimensions using the same medium contain sufficient redundancy for a four-dimensional human mind in order to reconstruct (i) the expected information content, (ii) the transformation by the media through which it passed (i.e., the contextual information), (iii) the intended meaning of the communication. Obviously, the necessary filtering of the noise requires a memory function that is structurally allowed the freedom to relate the various reflections internally.
Thus, a reflexive memory function at the address of an actor is needed for the social translation, since the social system has to operate at least twice before it unambiguously communicates both substantively and reflexively (cf. Luhmann 1990). [5] But the reflexive actors are distributed, and therefore the various reflections can be communicated. A global system of translations can be organized as a cultural evolution on top of stratified processes of social communication when the reflexive layer of communications is differentiated as a degree of freedom at the level of the social system. This presumes a certain complexity (like in bourgeois cities and Protestant churches) so that the different roles in the communication can be distinguished in terms of selections (as opposed to preordained roles).
The Regime of Modernity
When the reflection is no longer fixed (like in a stratified society), the translation becomes a historical variable. Thus, the transition to modernity affects the nature of social communication: the unit of social communication, which may contain both information and meaning (jk), is extended with the historical context (l). Henceforth, communications are expected to translate among systems of translations. No common language tends to be left among the differentiated (sub-)systems to which one may hope to recur for the system's integration (cf. Habermas 1981).
From this perspective, the translation of the Bible into the various national languages may have been a crucial step in the formation of 'modernity'. Historically, the reflexive function, i.e., the attribution of meaning, could be uncoupled from the hierarchical centre of the stratified system ('Rome') because of the emphasis in the Gospel on personal, i.e., decentralized, salvation. [6] Whereas in the Middle Ages a personal or local interpretation of the imitatio Christi might easily lead to 'ex-communication', God's Word could in the long run provide semantic leverage for breaking the hierarchical fixation of the Roman system in the reflexive dimension. In Protestantism each individual is equal before God; the World is given to people as a latent structure in their network of relations.
It goes beyond the framework of this paper to elaborate on this evolutionarily recent transition from a stratified high-culture into the modern regime that is based on rewarding translations. When the communication in the social system can be stabilized in an hierarchy, a high-culture first develops. However, the social system can be developed in the fourth dimension whenever communicating agents, reflexively aware of their different positions in the social system, begin to systematically use the degree of freedom between differentiation and reflexivity as another dimension in their communication. The evolutionary achievement is the freedom to internally adjust the reflections in relation to local exigencies of differentiated developments.
Protestant ethics sanctioned local reflexivity in our colloquial command of the world. If reflexive functions can subsequently be ascribed as degrees of freedom to subsystems of communication (e.g., the free market, the freedom of religion, etc.), the gradual transition into a self-organizing regime of translations is only a matter of time: global adjustments are based on the recursive selection of lower-level variations.
The Endogenous Character of Technological Change
A system can deconstruct a signal of one lower dimensionality than it has available, since it needs the additional dimension in order to provide the signal with a value, and thus to estimate the noise. In the second dimension of the uncertainty we have called this selection the positioning of the relational information; in the third dimension reflection; and in the fourth dimension it has occasionally been called reflexivity, but the operation can be clearly distinguished from reflection by calling it self-organization (see Table One above). The underlying operation among the various dimensions, however, is identical: the incoming information is always mutual information with reference to the expected information content of a communication system. The receiving system reconstructs the signal by normalizing the incoming information in its own terms. This presumes an internal representation, and hence the projection onto another dimension of the system's operation (Leydesdorff 1992).
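The information-theoretic core of this operation can be made concrete with a small sketch: for a two-dimensional probability distribution Σ p_ij (relations by communications), the mutual information I(X;Y) = H(X) + H(Y) − H(X,Y) measures what one dimension contains with reference to the expectation of the other. The joint distribution below is made up purely for illustration:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability cells."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint distribution p_ij over relations (rows) and
# communications (columns); any non-negative matrix summing to 1 works.
p = np.array([[0.20, 0.05, 0.05],
              [0.05, 0.30, 0.05],
              [0.05, 0.05, 0.20]])

h_rows = entropy(p.sum(axis=1))    # H(X): uncertainty in the relations
h_cols = entropy(p.sum(axis=0))    # H(Y): uncertainty in the communications
h_joint = entropy(p.flatten())     # H(X, Y)

mutual_info = h_rows + h_cols - h_joint   # I(X;Y) >= 0
print(round(mutual_info, 3))
```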
A communication system that operates in three dimensions can reconstruct input into output; a four-dimensional system can reconstruct in three dimensions. For example, a bird can build a nest in three dimensions. It does so instinctively, but not reflexively. Technological artifacts, however, are based on discursive reconstructions, and should thus be attributed to the social communication system. As noted, the reflexive system therefore can carry a cultural evolution on top of the natural one.
Given the above definitions, the system of reference for the cultural evolution of a modern social system is no longer an individual or a community as an aggregate of individuals, but the reflexive interaction system or discourse among individuals. The interaction terms are based on the grouping as another dimension than the grouped one (see above). While a stratified system can be organized in terms of aggregated communities, this system can be evaluated as contingent as soon as the interaction terms between different groups account for a larger part of the prevailing uncertainty than the sum of the within-group variances. [7] The capacity to communicate reflexively about the grouping, and then reflexively to reorganize the communication locally into another (provisional) stabilization, emerged during the Reformation, but it was institutionalized during the scientific revolution of the 17th century. The experimental condition is almost paradigmatic for the local reconstruction (cf. Latour 1983; Shapin & Schaffer 1985). The reflexive recombination, i.e., the (re)attribution of grouping, endogenizes technological change in the social system. While pre-modern societies had a set of institutions and techniques available which were specific and provisionally stabilized, technological change is a characteristic of a self-organizing social system. Translations among systems of translation enable the reflexive carriers to explore possible recombinations. As noted, the differentiated system tends to build structurally on those instantiations which are functional for its further development (cf. Simon 1969; Swenson 1989). As functional differentiation is increasingly inscribed into the system, redistributions among asynchronous developments periodically force the system to higher-order innovations (cf. Schumpeter 1943).
Remember that the social system is not a given (like a biological body), but it remains a construct that can be reconstructed, in principle. Therefore, the evolutionarily advantageous reorganizations can be rapidly recognized. These innovative dynamics are nowadays a sub-cybernetics of the social system; cultural evolution has become sustainable only as far as it is innovative. (Otherwise, the social system would have to 'die' like 'natural' systems.) In a technological culture, however, 'nature' is continuously reconstructed in terms of changing patterns of communication. Furthermore, cultural evolution reinforces itself by producing a continuous stream of technological artifacts, since it is based on recursive optimization. Thus, the transformation of 'nature' has become a functional part of the cultural system. From this perspective, it might make an 'environmental' difference at the global level if the criteria for these optimizations could be made the subject of higher-order theoretical reflections (cf. Freese 1988).
Differentiation among Reflexive Discourses
Would we not need a fifth dimension for a meta-theoretical reflection? How can one explain that we are able to understand self-organization, while being self-organizing systems ourselves? One expects that a four-dimensional system is able to reconstruct a three-dimensional system to the extent that it can design one. However, a four-dimensional mind can only reconstruct a four-dimensional system to the extent that it can develop and improve its mental mappings of it, given the specification of a perspective (Hinton et al. 1986). Thus, we are not able to construct a dynamic social system as we are able to engineer an electronic circuit, but we are able to reflexively understand the dynamics of the social system by taking a point of reference. As Luhmann (1984) argued, this point of reference for the reflection is to be considered as the analyst's 'blind spot'.
Can one recursively reflect on one's blind spot? Kuhn's (1962) metaphor of paradigms has been particularly helpful in understanding changes in perspectives on mental mappings of complex systems. Paradigm changes enable us to switch between background and foreground, as in a Gestalt switch, but at the supra-individual level. Each paradigm allows the participants to communicate reflexively, since a perspective is provisionally stabilized. Problems which cannot be clearly communicated in one paradigm may be solvable after a paradigm switch. Kuhn, however, considered the communication among paradigms as virtually impossible.
Like in a belief system, a paradigm provisionally fixes the fourth degree of freedom in the communication by making one preferential selection among the various possible perspectives on the complex system(s) under study. The various paradigms take other axes for the reflection, and therefore they can be considered as 'incommensurable'. However, the paradigms compete in their efforts to understand the subject of study. Thus, competing theories constitute a layer of reflexive systems of communication on top of the complex dynamic systems under study.
A reflexive analyst is able to understand a paradigm switch, because a four-dimensional mind is able to translate among translation systems, in principle. Thus, the various paradigms remain only 'nearly incommensurable' in the sense that an evolutionary system remains nearly decomposable (Simon 1973). The more the main axes of the reflection are orthogonal, the more the paradigms are expected to have grown incommensurable. The significantly less frequent interaction terms between these differentiated theoretical systems, however, are expected to organize the functionality of the theoretical discourses for the evolutionary transformation of the systems under study.
The 'Duality' in the Sociological Understanding
How can this reasoning help us in clarifying the differences among paradigms in sociology? Let us assume that the social system is indeed a complex and dynamic system. In order to capture as much complexity of the system under study as possible, comprehensive theories are expected to develop increasingly along orthogonal dimensions. In general, a four-dimensional system has four orthogonal projections in three-dimensional spaces. Thus, one should expect three fully fledged sociologies and one meta-theoretical reflection to become dominant metaphors for reconstructing the dynamics of the social system. The three (ideal-typical) sociologies are expected not to take into consideration either (i) the structural dimension of differentiation, (ii) reflexivity in the time dimension, or (iii) the re-attribution of meaning in the fourth dimension. Meta-theoretical discourse abstracts from the substance of these sociologies (in the primary dimension of variation), and postulates a mechanism for their integration.
The three expected sociologies can be identified as (i) historical approaches that tend to consider agency as a source of yet undifferentiated variation; (ii) systems-theoretical approaches that tend to focus on the invariants of the system, and thereby neglect the dynamics among the dimensions over time (e.g., Parsons' structural-functionalism); and (iii) symbolic interactionism that neglects the difference between substantive and reflexive communications by considering all social communications under the single perspective of 'meaning'. In their ideal-typical forms these sociologies exclude one another (cf. Grathoff 1978). For evolutionary reasons, however, one expects near decomposability in practice. Consequently, the discursive translations among these three discourses are expected to develop theoretical perspectives with a significantly lower frequency than within each of the paradigmatic discourses (Simon 1973).
In other words, the fourth (meta-theoretical) discourse is evolutionarily 'later'. It has hitherto been developed by using mainly an anthropomorphic metaphor: the reflexive analyst is supposed to be able to integrate the various discourses in a meta-theoretical reflection. However, this formulation reduces the problem to a psychological or a philosophical one (e.g., Woolgar 1988). The sociological problem is again how this reflexivity can be communicated.
Recently, parallel and distributed processing has provided us with the mental model to understand this operation (e.g., Rumelhart et al. 1986; Leydesdorff 1993). Each sociology can be considered as an independent processor of a discourse, but the 'intertextuality' is generated by the program that runs in the network among these discourses. The texts are expected to appear with different frequencies in the (hyper-cyclic) intertextuality (cf. Kristeva 1980). Furthermore, the spectra of frequencies are expected to change historically as a result of the interactions. The specific sociologies provide us with theories about how the various loops operate; the study of the intertextuality among the various descriptions should provide us with an expectation about their relative weights.
Is one able to specify the production rule for 'intertextuality' among the three sociologies specified above? Is it possible to achieve a higher-order dimensionality in the understanding, i.e., one that allows for a duality (Giddens) or perhaps a higher-order plurality in the discursive representation of a four-dimensional system without generating confusion? The crucial point is that one would need a representation which is dynamic in terms of choosing a perspective (as in a movie). By using a spatial depiction or a geometrical metaphor in the narrative one is not able to represent change in the data and in the relevant categories for organizing the data in the same pass. In general, the scholarly discourse tends to become confused without a clear distinction between a = non-a as a categorical contradiction and a permitted change in the value of a variable.
Algebra enables us to distinguish dynamically between a change in a variable and its expected value, since the variable (x) can be redefined as a flux (dx/dt). More specifically, the algorithmic simulation provides us with the dynamic representation (on the screen) which visualizes change both in structure and in the data, as in a movie, while at the same time we can keep track of the cause of the observable effects in terms of the underlying computer code. In other words, the recursive algorithm allows us to distinguish between the observable dynamics of the macro-system and the micro-variation at lower levels in nested layers of selective conditions without becoming confused. The behaviour of the model system is more complex than the composing subdynamics, while only the latter can be made the subject of substantive theorizing (Langton 1989). The computer code can be considered as the genotypical specification of the phenotypical behaviour of a model on the screen; the resulting model captures the interaction terms 'in between' the discursive representations.
Without theoretical assumptions the problems are non-computable, since each additional context introduces an infinite number of possible interactions. Substantive specification operates as a selection device among other selection devices; the various specifications condition one another in the model system. As noted, the social system is not a biological given but a reflexive reconstruction, and thus social communications allow for complexities of a dimensionality even higher than four. For example, one can consider three or more communications as a single operation of the social system, analogously to the above-specified possibility of operating in iterations of two communications. Cultural evolution can develop increasingly complex patterns to the extent that (some) actors are able to carry the reconstructive communications.
In summary, common languages allow for one layer of reflexivity without confusion. Codification in specialist languages allows for the (provisional) stabilization of meaning in nearly decomposable reflexive layers of communication (e.g., 'paradigms'). Codified computer code allows for higher-order recursion. Of course, an interpreter may need a language for the theoretical appreciation of the simulation results, but this should not obscure the evolutionary constraints on 'human' understanding in 'natural' languages. [8]
Implications for Sociological Theorizing
As noted, a reflexively specified translation requires at least two communications. By accepting methodological assumptions about the organization of the inference, reflexive discourses therefore can become codified. Given the implied blind spot, all inferences thus generated remain necessarily uncertain with respect to the relative relevance of the perspective for understanding the system(s) under study.
Hitherto, the issue of the selective perspective has been elaborated in the social sciences primarily as a methodological and self-referential concern about bias. For the algebraic transformation of theorizing, however, it should be discussed as a choice of a relative perspective in relation to a space of other possible perspectives. First-order theoretical results can then be formulated as conditional statements ('selection') about probability distributions ('variation') that can be specified in algorithmic code as sub-routines. By relating the mechanisms and translations in specific orders, one is able to specify the consequences of their interactions in terms of ranges of expectations (cf. Hanneman 1988).
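As a minimal sketch of this scheme, not drawn from the source, one can code 'variation' as a draw from a probability distribution and 'selection' as a conditional statement, nesting one selecting subroutine inside another; all distributions, thresholds, and layer counts below are illustrative assumptions:

```python
import random

def variation(mu=0.0, sigma=1.0):
    """Generate (as yet undifferentiated) variation as a random draw."""
    return random.gauss(mu, sigma)

def selection(value, threshold=0.0):
    """A first-order selection: a conditional statement on the variation."""
    return value if value > threshold else None

def recursive_selection(n_layers, threshold=0.0):
    """Selections operating upon selections: each surviving value becomes
    the expectation around which the next layer's variation is generated."""
    value = variation()
    for _ in range(n_layers):
        value = selection(value, threshold)
        if value is None:  # extinguished at this layer
            return None
        value = variation(mu=value, sigma=0.5)  # next-order variation
    return value

# A 'range of expectations' from repeated runs (cf. Hanneman 1988):
outcomes = [recursive_selection(3) for _ in range(10_000)]
survivors = [v for v in outcomes if v is not None]
print(f"survival rate: {len(survivors) / len(outcomes):.2%}")
```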
Substantive theories provide us with the specification of an uncertainty that cannot be provided by the mathematics. One needs discursive theorizing to specify the nature of the mechanisms of variation and selection. These mechanisms can be considered as the building blocks of the higher-order cybernetics. In general, the specification of the subdynamics generates a multi-dimensional problem space. Each theoretical reconstruction is partial, and must be positioned with reference to this problem space. The mathematical model, however, can be studied without a relation to specific instantiations in the system under study. Therefore, it allows one to generalize from the specifications to the problem space, and in principle to suggest states other than those which are intuitively accessible. Thus, the system of theoretical representations is able to bootstrap from specification to generalization.
In my opinion, the possibility of understanding the dynamics among theories as another layer of complex (reflexive) systems on top of the complex systems under study has recently emerged on the basis of the evolution of those sciences that can be reconstructed in terms of computer code (e.g., Langley et al. 1987; Hanneman 1988; Freese 1988; Anderson et al. 1988; Langton 1989; Andersen 1992 and Leydesdorff 1995b; Langton et al. 1992). As far as these meta-theoretical reconstructions can again be communicated, they indicate the possibility of a general theory of communication.
Let us first focus on the sociological relevance of this understanding, and postpone the discussion of the epistemological status of a general theory of communication to the next section. By scanning the range of possibilities using a computer model, the 'unintended consequences' of specifications can be made visible in terms of expectations. This option generalizes Giddens' (1979 and 1984) notion of 'unintended consequences'. The 'unintended consequences' are phenotypical outcomes, while the specifications remain genotypical. Any observable system under study will systematically generate 'unintended consequences' if looked upon reflexively, since a system 'in the making' contains an emerging dimension.
One is now able to generalize this insight with reference to theorizing: the dual perspective of the participant and the analyst, or, in other words, the relations between substantive specification and formal modelling, generate a tension between language and meta-language that drives the scientific research process. More specifically, the theoretical specifications are discursive reflections on the resonances that can be observed historically. The algebraic understanding in terms of formal models and fluxes can be developed into a discourse of a different order than the substantive theories that went into its construction. The hyper-cyclic model then may guide us in the search for alternative development patterns, since it recursively recombines these reflexive insights in other possible orders.
In summary, by specifying theoretical expectations about the relevant dimensions and the how of the interactions, the analyst is able to reduce the computational complexity. Special theories make new problems computable, in principle. The substantive elaboration challenges the further development of the mathematics involved, since it limits the number of relevant interaction terms. Simulations can help us to define more precisely the complexity in terms of interacting procedures; simulation results challenge the appreciative understanding of the dynamics at a subsequent stage.
Conclusions
The recursivity of the selection in communication has been crucial to the argument. This assumption has enabled us to clarify long-standing problems like the difference between differentiation and stratification, information and meaning, or reflection and self-organization in terms of a single principle. (See Table One for a summary of the various concepts.) The non-linear dynamics of self-organizing systems are applicable to (i) social systems, (ii) how these systems can reflexively be understood in terms of sociologies, and (iii) the meta-theoretical understanding of the dynamics among the relevant sociologies.
The concept of reflection in terms of hierarchical layers should be replaced with 'reflection' as an orthogonal dimension of the complex construct. The operationalization of reflection as a recursion of the selection allows us epistemologically to formalize 'reflexivity' without ontologically reifying it as a substantively higher level. Thus, this approach enables us thoroughly to solve the so-called reflexivity problem in post-modern sociology, i.e., the problem that one cannot claim priority for a specific reflection concerning reflexive actions. If reflection is a contingent property of the communication, it is possible to ask for the quality of a reflection. The formalization, however, remains in need of substantive specification, for example, in terms of reflexive discourses.
What kind of theory of communication might the implied meta-sociological understanding provide? One would expect a general theory of communication to encompass the mathematical theory of communication and the special theories of communication at the hyper-cyclic level. However, would one be able to understand this theory in substantive terms or only as a formal possibility? In my opinion, the recursivity of the operation provides us with a metaphor for the understanding.
Remember that social structure is not a given, but an expectation on the basis of a theoretical assumption. Accordingly, the operationalization of the social interaction can change with the perspective or with the further development of sociological theory. Analogously, sociological discourses can be modified reflexively by a next-order interaction. In other words, the simulations inform us about 'unintended consequences' of earlier specifications, and thus, they can help us to recursively improve these specifications (like in the case of an update).
The simulation results provide us with a representation of the super-system. On the one hand, the algorithmic specification in terms of sub-routines allows us to backtrack from the simulated phenomena to the theoretical specifications. On the other hand, the model reconstructs the theoretical understanding. The operation is so recursive that the analyst would have to sacrifice explanatory power in order to stabilize the theoretical appreciation of the results in the same discourses as the a priori expectations. From this perspective, a theoretical explanation has the status of a translation of the hypotheses.
Notes
1. Some authors (e.g., Brillouin 1962; Bailey 1990) have defined this difference ("negentropy") as information. When the focus is no longer on a fixed communication channel, but on an evolving communication system, one should distinguish between the expected information content of the receiving system, and observed information that is positioned by this system in a subsequent update. The observing system can meaningfully position the incoming information with reference to its previous state. It can be shown that with reference to the a priori system the probabilistic entropy of the interaction may sometimes have a negative value, and therefore add to the redundancy of the receiving system (cf. Leydesdorff 1992 and 1995a). With reference to the a posteriori situation, however, the uncertainty always has to increase (Georgescu-Roegen 1971: 410ff.; cf. Hayles 1990).
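A plausible formalization of the sign argument in this note (standard information theory, stated here for illustration rather than quoted from the cited sources): with H = −Σ p log p, the mutual information between two dimensions is necessarily non-negative, whereas the interaction term among three dimensions can become negative and thereby add to the redundancy of the system:

```latex
T_{xy}  = H_x + H_y - H_{xy} \;\geq\; 0
T_{xyz} = H_x + H_y + H_z - H_{xy} - H_{xz} - H_{yz} + H_{xyz} \;\lessgtr\; 0
```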
2. The number of possible states of a network system increases exponentially with the number of nodes: a network of N nodes with binary links, for example, already has 2^(N(N−1)/2) possible link configurations. Thus, the number of possible states grows faster than any polynomial; it becomes rapidly uncomputable with an increasing number of nodes.
3. If all possible links were used to the same extent and at the same time, the network would be completely uncertain. As an entropical system, the network would then have "died".
4. For example, if a muscle is denervated, control by the higher-level system is released, and the sensitivity of the lower-level system to disturbances is increased.

5. A communication system among animals would have to operate at least three times before a community (e.g., a population of insects) can be generated. The assumption of higher-order hyper-cycles is more common in modelling biological systems (cf. Langton et al. 1992). However, the insects themselves are not expected to carry higher-order memory functions.
6. The duality of the communication is paradigmatically entailed in the intertextuality between the Old and New Testament. The New Testament reflected the Personal meaning of the substantive Communication in the Old one. Christianity, however, initially adapted to the format of the Roman system, which spanned the whole world (kath' holen gen, or Catholic), but in terms of a stable Empire.
7. Although differently defined, 'variance' and 'information content' are both measures of the uncertainty, and therefore semantically equivalent (Theil 1972).
8. One expects that a five-dimensional system is able to reconstruct and to design a four-dimensional one. Thus, the project of artificial evolution is tractable in principle, but the problem is currently uncomputable given the available hardware and software (cf. Leydesdorff 1994). | 2019-04-12T20:58:06.039Z | 1997-03-01T00:00:00.000 | {
"year": 2010,
"sha1": "5c9d46b4ad505b512ca44dfa21ae8be44b66190a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1003.2887",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e5a28be703d1a1e03213932127374622241482f0",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology",
"Physics"
]
} |
236431426 | pes2o/s2orc | v3-fos-license | Elucidating the Role of Topological Constraint on the Structure of Overstretched DNA Using Fluorescence Polarization Microscopy
The combination of DNA force spectroscopy and polarization microscopy of fluorescent DNA intercalator dyes can provide valuable insights into the structure of DNA under tension. These techniques have previously been used to characterize S-DNA—an elongated DNA conformation that forms when DNA overstretches at forces ≥ 65 pN. In this way, it was deduced that the base pairs of S-DNA are highly inclined, relative to those in relaxed (B-form) DNA. However, it is unclear whether and how topological constraints on the DNA may influence the base-pair inclinations under tension. Here, we apply polarization microscopy to investigate the impact of DNA pulling geometry, torsional constraint, and negative supercoiling on the orientations of intercalated dyes during overstretching. In contrast to earlier predictions, the pulling geometry (namely, whether the DNA molecule is stretched via opposite strands or the same strand) is found to have little influence. However, torsional constraint leads to a substantial reduction in intercalator tilting in overstretched DNA, particularly in AT-rich sequences. Surprisingly, the extent of intercalator tilting is similarly reduced when the DNA molecule is negatively supercoiled up to a critical supercoiling density (corresponding to ∼70% reduction in the linking number). We attribute these observations to the presence of P-DNA (an overwound DNA conformation). Our results suggest that intercalated DNA preferentially flanks regions of P-DNA rather than those of S-DNA and also substantiate previous suggestions that P-DNA forms predominantly in AT-rich sequences.
INTRODUCTION
Detailed knowledge of the elastic and mechanical properties of DNA is essential for obtaining a complete understanding of nucleic-acid processing in vivo. Under physiological conditions, and in the absence of force, double-stranded DNA exists in the so-called B-form, which exhibits a length of 0.34 nm/bp. However, when stretched to a force of ∼65 pN, B-DNA undergoes a structural transition known as overstretching. 1,2 The overstretching transition (OST) is characterized by a 70% elongation of the DNA at nearly constant force, in which the double helix unwinds cooperatively, resulting in either base-pair-melted DNA or an underwound, base-paired structure termed S-DNA. Whether S-DNA or melted DNA forms during overstretching depends on factors such as the base-pair sequence, ionic strength of the buffer, and the temperature. 3−8 The case described above assumes that the DNA is torsionally unconstrained (UC) and thus that the molecule is topologically free to unwind under the applied tension. The OST changes markedly when the DNA molecule is torsionally constrained (TC) due to the fact that the overall linking number (Lk, defined as the sum of the twist and writhe in the molecule) must remain constant. Consequently, the OST in TC-DNA occurs at much higher forces (∼110 pN) than that in UC-DNA. This is because the presence of underwound structures (such as S-DNA) must be compensated for by a corresponding overwinding in other sections of the molecule. 9 It has been proposed that this local overwinding arises via the formation of a structure known as Pauling (P)-DNA, in which the phosphate backbones wrap around one another with a helicity of ∼2.5 bases per turn. The bases in P-DNA are thought to be unpaired and point outward. 10 Based on the measured helicity of S-DNA (∼37.5 bp/turn), and the proposed structure of P-DNA, these two conformations are estimated to coexist in a ratio of approximately 4:1 S-DNA:P-DNA to ensure a constant Lk. 9 This finding was supported by subsequent fluorescence microscopy studies of overstretched TC-DNA. 11 Owing to the increased stability of GC base pairs relative to AT base pairs, it was suggested that P-DNA (which requires base-pair melting) primarily forms in AT-rich sequences of the DNA molecule, while S-DNA dominates in GC-rich sequences. 11 When torsional stress is applied to TC-DNA, the overall Lk will either increase or decrease depending on the direction of the applied torque, resulting in DNA overwinding (positive supercoiling) or underwinding (negative supercoiling), respectively. The extent of supercoiling is defined by the relative change in Lk, parameterized by the supercoiling density, σ = (Lk − Lk0)/Lk0. Here, Lk0 refers to the value of Lk in relaxed (B-form) DNA. In the case of negatively supercoiled DNA, less (overwound) P-DNA is required to offset (underwound) S-DNA during overstretching. At a critical supercoiling density of σ ∼ −0.7, no P-DNA is required during overstretching, and thus, at the end of the OST, the DNA will consist entirely of S-DNA. 9
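The linking-number bookkeeping sketched above can be reproduced in a few lines. The sketch below is not taken from the cited work; it simply combines the approximate helical repeats quoted in this section (∼37.5 bp/turn for S-DNA and ∼2.5 bases/turn for P-DNA) with the standard B-DNA repeat of ∼10.4 bp/turn (an assumption, not stated in the text) under the constraint of constant Lk:

```python
# Linking-number bookkeeping for fully overstretched, torsionally
# constrained DNA. Lk is counted per base pair as 1/(helical repeat).
B_REPEAT = 10.4   # bp/turn, relaxed B-DNA (standard value, assumed)
S_REPEAT = 37.5   # bp/turn, underwound S-DNA (quoted above)
P_REPEAT = 2.5    # bases/turn, overwound P-DNA (quoted above)

def p_dna_fraction(sigma=0.0):
    """Fraction f_P of P-DNA needed so that f_S/S + f_P/P = (1 + sigma)/B,
    with f_S + f_P = 1 (i.e., constant linking number)."""
    target_lk = (1.0 + sigma) / B_REPEAT   # Lk per bp to be conserved
    f_p = (target_lk - 1.0 / S_REPEAT) / (1.0 / P_REPEAT - 1.0 / S_REPEAT)
    return max(f_p, 0.0)

print(f"non-supercoiled: f_P = {p_dna_fraction(0.0):.2f}")   # ~0.19, i.e., S:P ~ 4:1
print(f"sigma = -0.6:    f_P = {p_dna_fraction(-0.6):.3f}")  # ~0.03
print(f"sigma = -0.7:    f_P = {p_dna_fraction(-0.7):.3f}")  # ~0.006; exactly 0 at sigma ~ -0.72
```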
In order to characterize local changes in the structure of overstretched DNA, fluorescence imaging and force spectroscopy of intercalated DNA have proved highly informative. 4,7,12−14 DNA intercalators are small planar dye molecules that bind to DNA by sliding between adjacent base pairs, which results in local unwinding and elongation of the double helix. 15 Upon binding to DNA, intercalators align parallel to neighboring base pairs and undergo a fluorescence enhancement of two orders of magnitude or more. 16−18 While intercalators strongly favor B-DNA binding, the absence of intercalator fluorescence has been used frequently to deduce the presence of non-B-form overstretched structures, such as S-DNA. 4,7,11−14 Polarized excitation and fluorescence imaging of intercalators can additionally reveal the orientations of the intercalated dyes' transition dipole moments relative to the axis of the DNA molecule. Previous studies have demonstrated that intercalated dyes orient perpendicular to the DNA axis at forces < 65 pN and also for partially overstretched DNA (i.e., where less than 100% of the molecule has been overstretched). 19−22 In these cases, B-DNA neighboring the intercalation site acts to align the intercalators in the same orientation as the base pairs in B-form DNA. Recently, 23,24 we demonstrated that at extensions beyond the end of the OST, intercalated dyes tilt to an orientation of θ ∼ 54° relative to the DNA axis. At these extensions, most of the B-DNA has been converted to S-DNA due to the unwinding of the molecule. Although it is energetically unfavorable for intercalators to bind directly to S-DNA, our data suggested that dye molecules assume a tilted configuration when an intercalation site was neighbored by (unintercalated) S-DNA. Intercalator tilting was only observed at DNA extensions beyond the end of the OST, rather than during the OST (where S-DNA and B-DNA coexist), because it is energetically unfavorable for S-DNA (relative to B-DNA) to neighbor intercalated DNA. 14 Taken together, these measurements led us to conclude that S-DNA exhibits highly inclined base pairs relative to B-DNA 24 and thus induces the tilting of flanking intercalated DNA. While this work provided valuable insights into the structure of S-DNA, these experiments considered only a single stretching geometry in which the tension was applied via opposing 3′ ends. However, it has been suggested that different base-paired overstretched structures may form depending upon whether tension is applied on the same or opposite DNA strands. 25−30 Moreover, the influence of torsional constraint and supercoiling (and thus the effect of P-DNA) on S-DNA-induced intercalator tilting is unknown.

Figure 1. Overview of the experimental method. (A) Imaging protocol: an intercalated λ-DNA molecule is stretched to a desired extension along the X-axis using dual-trap optical tweezers (transition dipole moments of intercalated dyes are drawn as red bidirectional arrows). The construct is first imaged using a reduced-power Y-polarized excitation beam, efficiently exciting dyes oriented close to the plane perpendicular to the DNA axis (YZ-plane). (B) A second camera frame is recorded using X-polarized light at increased power. This polarization will inefficiently excite dyes oriented in the YZ-plane but provide suitable fluorescence signals for LD measurement. (C) Representative fluorescence image data recorded using a TC-DNA molecule using Y-polarized (top) and X-polarized (bottom) excitation. (D) Representative force-extension curves for TC-DNA (blue) and UC-DNA (red). These curves were collected on DNA in the absence of intercalated dyes. The inset illustrates the attachment geometries for TC-DNA and UC-DNA constructs.
Here, we apply intercalator polarization microscopy to explore the influence of pulling geometry, torsional constraint, and negative supercoiling on the structure of overstretched DNA. We show that intercalator tilting beyond the OST in UC-DNA is independent of whether the construct is pulled via the 3′ ends of opposite strands or the 5′ and 3′ ends of the same strand. This indicates that the tilted orientation of intercalated dyes is caused by the inclined base pairs within the S-DNA conformation and is not a consequence of a specific pulling geometry. We next demonstrate that at forces beyond the OST, intercalated dyes tilt significantly less in TC-DNA than in UC-DNA. Moreover, the extent of intercalator tilting in TC-DNA displays a strong sequence dependence, whereby dyes are tilted more strongly in GC-rich sequences compared with those in AT-rich DNA regions. This contrasts with the case of UC-DNA. Finally, we conducted similar fluorescence polarization measurements on negatively supercoiled TC-DNA using the recently reported technique of optical DNA supercoiling (ODS). 31 These experiments reveal that over a narrow range of negative supercoiling densities (−0.6 < σ < −0.7), the intercalated dye orientations beyond the OST exhibit a sudden transition from a sequence-dependent, slightly tilted state to a sequence-independent, highly tilted state. From these results, we draw two main conclusions: first, we confirm that P-DNA forms predominantly within AT-rich sections of overstretched TC-DNA. Second, we argue that it is more favorable for P-DNA, rather than S-DNA, to flank intercalator binding sites.
METHODS
2.1. Polarization Microscopy. Figure 1A,B provides an overview of the experimental technique. A setup combining dual-trap optical tweezers and fluorescence microscopy was used to manipulate and extend single DNA molecules, tethered between two optically trapped beads. 32 As described in refs 23 and 24, polarized laser illumination was used for fluorescence excitation. However, in a departure from our previous work, fluorescence emission was not resolved into its X-/Y-polarized components (see our rationale behind this change in the paragraph below). As in our previous experiments, 23,24 we used bacteriophage λ DNA (λ-DNA), which has a contour length of ∼16.5 μm. We used the bis-intercalator YOYO-1 to label the DNA. A collimated 488 nm laser beam was used for widefield epifluorescence excitation. Laser intensities of ∼1−5 W/cm² were used for imaging experiments (estimated by measuring transmitted laser power by placing a power meter above the microscope objective and dividing by the size of the illuminated area at the sample). The polarization of the excitation laser was controlled using an EOM (model 350−80, Conoptics) placed in the beam path. By synchronizing the input voltage to the EOM with the camera exposure TTL signal, the excitation laser was rapidly toggled between X- and Y-polarization during the read-out period after each successive camera frame was recorded. Additionally, the intensity of the polarized excitation laser was controlled using a liquid crystal beam attenuator (ThorLabs LCC 1620) placed before the EOM. This device was modulated after each camera exposure to compensate for polarization-dependent losses incurred as the beam propagated through the excitation pathway and to achieve different intensities for X-/Y-polarized excitation (Figure 1A,B). The degree of polarization of the collimated excitation beam was measured to be >98%, and within ∼2° of the X- and Y-axes of the experimental reference frame, using a polarimeter (ThorLabs PAX 1000VIS) placed above the microscope objective. Raw fluorescence data (Figure 1C) were acquired using an Andor EMCCD (iXON 897) set to an electron multiplication gain of 100 using integration times of 1 s per camera frame for both X- and Y-polarized excitation. A complete description of other aspects of the experimental apparatus may be found in the previous work. 24 Here, we detail some key changes to our experimental approach that facilitated the study of TC-DNA and supercoiled DNA. A single nick in the DNA backbone will cause the entire construct to become unconstrained, irreversibly altering the force-extension behavior (Figure 1D) and limiting our ability to investigate conformations that require a fixed Lk before and during overstretching. Fluorescence imaging therefore had to be carried out under conditions that minimize the risk of photoinduced nicking upon the bleaching of intercalated dyes. Thus, it was paramount to limit the fluorescence excitation laser intensity and reduce the fluorescence excitation of intercalated DNA. Due to the need to conserve a dim fluorescence signal as much as possible, we departed from our previous approach by opting to not use a polarizing beam splitter in the imaging pathway of our microscope. In this imaging configuration, we no longer resolve the polarization of light emitted by intercalated dye molecules and instead rely on changes in total fluorescence intensity as a function of excitation polarization to gather information about the overall alignments of intercalated dyes. 33
The LD was computed as

LD = (xI − ε yI) / (xI + ε yI) (1)

In eq 1, xI and yI are the background-subtracted fluorescence signals obtained using X-/Y-polarized excitation, respectively (to maintain consistency with our earlier work, we use prescripts to denote excitation polarization and omit postscripts since the emission polarization is not resolved). ε is the correction factor that accounts for the difference in laser intensity used when alternately exciting with X-/Y-polarized illumination, as well as the polarization-dependent transmission efficiency of the filters placed in the imaging pathway. The LD alone does not enable the absolute orientation of the intercalated dyes to be determined (as this would require combined excitation/emission polarization-resolved measurements, which are difficult to perform without nicking the DNA, as discussed above). Nevertheless, the LD provided a highly streamlined means of inferring relative changes in dye orientation. Specifically, the LD can be used to detect when dyes are no longer aligned in the plane perpendicular to the DNA axis. Furthermore, the use of a single (unpolarized) imaging channel enabled LD values to be calculated on a pixelwise basis, without the need for image registration, allowing spatial variations in LD along the axis of the DNA to be readily identified.
An additional modification to our experimental technique involved using substantially reduced laser power (ratio of 1:5.42) when exciting with Y-polarized in contrast to X-polarized illumination (Figure 1A,B). The precise ratio of excitation power was determined by imaging a solution containing freely diffusing eGFP and measuring changes in fluorescence emission upon toggling the laser power/polarization. In conventional LD experiments, care is often taken to use near-equal excitation power for both excitation polarizations, leading to drastically different output intensities for well-ordered samples. In our case, however, increased output fluorescent signals went hand-in-hand with accelerated photo-nicking; it was therefore preferable to adjust the laser power such that the fluorescent signal was uniformly low (yet still detectable above the background) for both input polarizations and profit from a sufficiently low photo-nicking probability (allowing greater than ∼8 s of imaging time in some cases).
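One plausible way to compute the correction factor ε from such an isotropic calibration measurement is sketched below (the function name, stacking, and averaging scheme are our own assumptions, not the authors' pipeline): for a freely diffusing dye the true LD is zero, so eq 1 implies ⟨xI⟩ = ε⟨yI⟩.

```python
import numpy as np

def estimate_epsilon(frames_x, frames_y, background=0.0):
    """Estimate the polarization correction factor from an isotropic sample
    (e.g., freely diffusing eGFP): since the true LD is zero, eq 1 implies
    <I_x> = eps * <I_y>. frames_x / frames_y are stacks of camera frames
    recorded with X- / Y-polarized excitation, respectively."""
    ix = np.mean(np.asarray(frames_x, dtype=float) - background)
    iy = np.mean(np.asarray(frames_y, dtype=float) - background)
    return ix / iy
```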
Since the TC-DNA constructs primarily exhibited negative LD (absorption dipole moments located primarily in the plane perpendicular to the DNA axis), we reasoned that the laser power should be attenuated when using Y-polarized light to image an intercalated DNA molecule, since the dye dipoles then lie close to the Y-axis of the experimental system. This approach of using combined amplitude/polarization modulation had the additional benefit of amplifying small changes in the fluorescence signal resulting from X-polarized excitation (which would cause unnecessary photo-nicking or even the saturation of the camera sensor had identical power been used for the complementary Y-polarized excitation). The higher laser power used for X-polarized excitation was also necessary to obtain a sufficient fluorescent signal above the background to make an accurate estimate of LD. For all fluorescence imaging experiments, a ∼25−50 nM concentration of the bis-intercalating dye YOYO-1 was used. YOYO-1 was selected because in our previous experiments, 23,24 intercalator tilting was most readily observed under a wide range of experimental conditions due to the dye's slow unbinding kinetics at forces beyond the OST. A high-salt (1 M NaCl) buffer was always used for imaging. This imaging buffer ensured that overstretching the DNA constructs would favor S-DNA formation in contrast to base-pair melting. 7,11

2.2. DNA Construct Design and Generation of Negatively Supercoiled DNA. All experiments were performed using a linearized λ-DNA construct (∼48.5 kb) which was end-labeled with biotin and tethered between streptavidin-coated microspheres (∼4.5 μm, Spherotech). The topological state of these constructs varied depending on the biotin-labeling strategy. For opposite-stranded constructs, biotin labels were added to the 3′ ends of each strand using Klenow DNA polymerase exo− to fill in the 5′ overhangs of λ-phage DNA with biotin-labeled nucleotides, as described previously. 36 For the same-stranded UC-DNA constructs, biotins were positioned on the 3′ and 5′ ends of only one strand by ligating biotin-labeled oligonucleotides to each end of λ-phage DNA using the protocol detailed in ref 37. TC-DNA constructs were prepared by ligating a hairpin "end-cap" to each end of linearized λ-DNA, such that both ends of the molecule were closed. Each end-cap was labeled with four biotins near the tip and, in most cases, the end-closed construct was tethered to streptavidin-coated beads via at least two biotins on each end of the molecule. 11 This prohibited the rotation of the DNA molecule with respect to the beads and thus rendered the molecule TC (note that the beads do not rotate in the optical traps).
The above-mentioned end-capped TC-DNA construct was also used to generate negatively supercoiled molecules using ODS. 31 In brief, ODS exploits the mechanical properties of end-capped TC-DNA to induce a fixed reduction in the overall Lk. This is achieved as follows: first, an end-capped TC-DNA molecule is stretched to forces > 150 pN. After a period of time (typically 5−20 s) at these high forces, one or more biotin−streptavidin linkages are transiently broken, leaving one end of the DNA molecule tethered to the beads via only a single biotin−streptavidin tether. During this time, the molecule is in a UC overstretched state. As a result, the Lk is reduced via swivelling of the DNA molecule around the single tether. When the broken biotin−streptavidin linkages re-form, the DNA molecule becomes TC again but with a lower Lk than that of B-form DNA. The reduced Lk is retained even when the tension is released, and therefore, the molecule is negatively supercoiled. The extent to which Lk is reduced depends on the duration for which the biotin−streptavidin bonds are broken. Through repeated stretch−release cycles, the Lk can be reduced by between <5 and 70%. The supercoiling density can be quantitatively determined, based on the extension at which 70 pN of force is applied to the DNA molecule, using a look-up table that relates the DNA extension at 70 pN to the supercoiling density (see Figure 2B of ref 31). In this way, the supercoiling density can be estimated with a precision of ±0.05 (however, factors such as the ionic strength of the imaging buffer and dye coverage can slightly bias these measurements). The maximum reduction in Lk is achieved when the entire DNA molecule is converted to S-DNA at high force through transient biotin−streptavidin ruptures of TC-DNA.
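In code, this read-out amounts to a one-dimensional interpolation. The sketch below is illustrative only: the calibration values are invented placeholders, and the real curve must be taken from Figure 2B of ref 31:

```python
import numpy as np

# Placeholder calibration: relative extension L/Lc at 70 pN versus
# supercoiling density sigma. Values are invented for illustration;
# in practice, use the look-up table from ref 31.
SIGMA_TABLE = np.array([0.0, -0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7])
EXT_TABLE = np.array([1.05, 1.12, 1.19, 1.27, 1.36, 1.45, 1.55, 1.65])

def supercoiling_density(rel_extension_at_70pN):
    """Interpolate sigma from the relative extension measured at 70 pN."""
    return np.interp(rel_extension_at_70pN, EXT_TABLE, SIGMA_TABLE)
```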
2.3. Data Analysis. LD measurements were performed on sequences of camera exposures at least 2 s long (one complete frame for both X and Y polarizations) and for some sequences as long as 8 s (in which the four interleaved excitations using a given polarization were averaged together to yield an image with superior signal-to-noise). To perform LD measurement, the following procedure was used. A region of interest (ROI) was specified using a custom Matlab GUI by selecting the endpoints of a given DNA image (the locations where the DNA meets the trapped bead). The ROI was then defined as a line 5 camera pixels in diameter running between the two DNA endpoints. Two regions of identical size containing only fluorescent background were automatically defined by offsetting the ROI by ±5 camera pixels and smoothing out noise using a three-pixel Gaussian convolutional filter (sigma = 1 pixel). These two background regions were then averaged together to yield a pixel-wise map for background subtraction. Pixels containing background-subtracted signals less than a (heuristically chosen) threshold of 100 ADC counts were excluded from further analysis. This process was performed for both images corresponding to the X-/Y-polarized signal using an identical ROI, permitting LD to be calculated on a pixelwise basis without the need for image registration using eq 1. LD images were generated using a custom HSV colormap in which hue was used to denote LD, and the value was linearly scaled according to the fluorescence intensity (xI + ε yI).
Regions of the image not containing the ROI were desaturated (made black-and-white) to avoid distraction. To plot LD as a function of DNA position, the "fractional DNA position" was defined as the position of a pixel relative to the endpoints of an ROI, with 0 corresponding to the leftmost point and 1 corresponding to the rightmost. Individual pixels were plotted as dots with respect to their fractional position. To aid the viewer, smoothed data were also plotted as a solid line using a Savitzky−Golay filter with default settings. 38 In this analysis, pixels having a fractional DNA position less than 0.05 or greater than 0.95 were excluded since it was reasoned that the LD would be corrupted by the strong background signal emanating from the trapped beads. To compute the averaged LD across a portion of a DNA molecule, the mean LD from multiple pixels was determined, and confidence bars were drawn as the standard deviation divided by the square root of the number of pixels used to compute the average. A given DNA molecule image generally contained ∼1000 camera pixels used for LD calculation. To compute averaged LD in alternately GC-rich or AT-rich regions of the DNA, 39 pixels with a fractional DNA position between 0.05 and 0.55 were designated as AT-rich and those with a fractional position between 0.55 and 0.95 were designated as GC-rich. The orientation of the λ-DNA could be deduced by inspecting raw fluorescence images of TC-DNA obtained using X-polarized laser excitation and noting that one side of the λ-DNA (∼45% of the entire construct) exhibited brighter emission (corresponding to the GC-rich region).
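A condensed sketch of this pixelwise analysis in NumPy/SciPy (the published analysis used a custom Matlab GUI; the axis-aligned ROI geometry, the function names, and the choice to threshold both channels are simplifying assumptions on our part):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

ADC_THRESHOLD = 100  # heuristic signal threshold, as described in the text

def pixelwise_ld(img_x, img_y, row, col0, col1, eps, half_width=2):
    """Compute LD = (I_x - eps*I_y)/(I_x + eps*I_y) along a horizontal DNA
    image running from (row, col0) to (row, col1), 5 pixels wide."""
    sl = np.s_[row - half_width : row + half_width + 1, col0:col1]

    def background(img):
        # Two equally sized bands offset by +/-5 pixels from the ROI,
        # averaged and smoothed with a Gaussian filter (sigma = 1 pixel).
        above = img[row - half_width - 5 : row + half_width - 4, col0:col1]
        below = img[row - half_width + 5 : row + half_width + 6, col0:col1]
        return gaussian_filter(0.5 * (above + below), sigma=1)

    sx = img_x[sl] - background(img_x)
    sy = img_y[sl] - background(img_y)
    valid = (sx > ADC_THRESHOLD) & (sy > ADC_THRESHOLD)
    ld = np.full(sx.shape, np.nan)  # NaN marks excluded pixels
    ld[valid] = (sx[valid] - eps * sy[valid]) / (sx[valid] + eps * sy[valid])
    return ld
```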
RESULTS
3.1. Same-Stranded Pulling Geometry Does Not Affect Intercalator Tilting. To assess the influence of attachment geometry (see Figure 2A) on the base-pair inclination in UC-DNA beyond the OST, we prepared λ-DNA constructs in which only one strand of the duplex DNA molecule was tethered to the beads by labeling the 3′ and 5′ ends of only one of the two complementary DNA strands with biotin (same-stranded pulling geometry). This contrasted with our earlier measurements, where the DNA molecule was labeled with biotin on the 3′ ends of opposing strands (opposite-stranded pulling geometry). 23,24 We then stretched these constructs to beyond the OST in a high-salt buffer containing YOYO-1 and measured the LD as a function of the applied force (Figure 2B). To ensure that the same-stranded pulling geometry was maintained throughout the LD measurements, it was important that photoinduced nicking of the tethered strand did not occur. Therefore, the DNA molecule was transferred to a low-salt (15 mM NaCl) buffer (in the absence of dye) after each fluorescence imaging experiment and then restretched. Under these conditions, strand peeling during overstretching is strongly favored over S-DNA formation, and thus for extensions beyond the OST (L/Lc > 1.7), the nontethered strand will peel away from the tethered strand, yielding a purely single-stranded DNA construct (Figure 2C). However, if a nick was present in the tethered strand, the molecule would break during strand peeling. LD measurements were only considered for molecules where no nick was detected in the tethered strand. For nick-free molecules, LD measurements were collected under identical imaging conditions as for DNA attached using the opposite-stranded pulling geometry. To ensure that data sets were fully comparable, we recorded a new set of opposite-stranded stretching curves rather than using the data reported in ref 24. Note that the nick-screening protocol described above for the same-stranded pulling geometry cannot be used for opposite-stranded pulling. However, in our current and previous measurements, 23,24 we have observed no indication that photo-nicking of a DNA molecule initially prepared for opposite-stranded pulling leads to a detectable change in fluorescence polarization or LD. Here, we observed very similar LD measurements for both pulling geometries, and in each case, we observe fluorescence depolarization at the end of the OST, as previously reported for the opposite-stranded pulling geometry. 23,24 ANOVA was performed by fitting a linear model to the plotted data; the null hypothesis of no difference between the geometries could not be rejected (p-value of 0.27). 40 Thus, the observed LD values at extensions beyond the OST indicate a strong inclination of the DNA base pairs, independent of the specific attachment geometry.
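A sketch of how such a test could be set up (the tidy-data layout, the file name, and the statsmodels formula are our assumptions; the authors' exact model specification is not given in the text):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tidy table: one row per LD measurement, with columns
# 'ld', 'force' (pN), and 'geometry' ('opposite' or 'same').
df = pd.read_csv("ld_vs_force.csv")  # placeholder file name

# Linear model of LD versus force, with pulling geometry as a factor;
# a non-significant geometry term corresponds to the reported p = 0.27.
fit = smf.ols("ld ~ force + C(geometry)", data=df).fit()
print(fit.summary())
```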
3.2. Torsional Constraint Inhibits Intercalator Tilting. We next applied our combined fluorescence polarization and force spectroscopy approach to probe how the tilt angle in overstretched DNA is affected by the presence of torsional constraint. In these experiments, a TC-DNA molecule was generated by tethering an end-capped (i.e., topologically closed) λ-DNA molecule between two optically trapped beads, via at least two biotin moieties on each end, as described previously. 11 The presence of torsional constraint was verified based on the measured force-extension curve. In contrast to UC-DNA, where the OST is characterized by a plateau at ∼65 pN, the OST in TC-DNA occurs at much higher forces, ranging from ∼110 to 130 pN (Figure 1D).
Figure 2. (A) An opposite-stranded attachment geometry was prepared, whereby a λ-DNA molecule was tethered to the optically trapped beads via the 3′ ends on opposite strands. The same-stranded attachment was prepared by tethering the DNA molecule to the beads via the 5′ and 3′ ends of only one strand. (B) LD as a function of the applied force for both opposite- and same-stranded attachment geometries (red circles and blue x's, respectively). (C) Verification procedure for ensuring the same-stranded attachment: after fluorescence imaging, the DNA molecule was moved to a low-salt buffer and overstretched to L/Lc > 1.7, resulting in the peeling of the nontethered DNA strand from the tethered strand. The red force-extension trace shows the expected saw-tooth pattern during overstretching associated with the peeling process, while the blue trace shows the corresponding curve for single-stranded DNA obtained after retracting the end-to-end length.

We first performed LD measurements on TC-DNA at extensions within the OST (L/Lc = 1.2−1.55), yielding LD measurements between −0.76 and −0.81, consistent with intercalated dyes oriented nearly perpendicular to the DNA axis. We then investigated whether intercalator tilting could be detected in TC-DNA at extensions beyond the OST. To this end, we stretched TC-DNA to extensions ranging from L/Lc = 1.7 to 1.84 (corresponding to forces between 130 and 215 pN). In contrast to UC-DNA, the LD values for TC-DNA beyond the OST exhibited a large magnitude and remained negative, even at extensions well beyond L/Lc = 1.7. This indicates that the intercalated dye molecules retained orientations along (or close to) the plane perpendicular to the DNA axis even when the molecule was extended beyond the end of the OST. Nonetheless, when the DNA was stretched further, to extensions greater than L/Lc ∼ 1.75 and forces > 130 pN (see Figure 3A), the LD did eventually begin to decrease in magnitude (i.e., become less negative). However, these LD values were still significantly lower than those obtained from UC-DNA at a similar relative extension beyond the OST. For TC-DNA, the highest LD values were between −0.7 and −0.3 (depending upon the sequence; see below), with a typical precision of ±0.03 LD units (Figure 3A). This contrasts with the UC-DNA results (Figure 2B), where the maximum LD reached values between −0.1 and 0.05, with similar measurement precision. A close inspection of the fluorescence data revealed that the LD values for TC-DNA exhibited reproducible patterns along the length of the DNA construct, in which one-half of the DNA molecule displayed a noticeably higher LD than the other half. This pattern correlated well with the known AT/GC content of λ-DNA, which varies strongly between the two halves of the molecule (see the top panel, Figure 3D). In contrast, the LD in the case of UC-DNA varied over a much narrower range and showed no significant sequence dependence. To quantify the observed sequence dependence of the LD data in the case of TC-DNA, the transition point between low and high LD regions in each fluorescence image was determined by eye, and the mean LD on either side of the transition point was calculated. These data are shown as a function of force in Figure 3A and reveal that AT-rich TC-DNA retains a large-magnitude, negative LD of ∼−0.6 even at forces far beyond those encountered within the OST (up to 215 pN), while a more gradual decrease in the LD magnitude occurs as a function of force in the GC-rich portion of the molecule, up to values of ∼−0.35.
In order to confirm that the above-mentioned results were indeed a consequence of torsional constraint, the following control experiment was performed. Here, a single intercalated TC-DNA molecule was stretched to a relative extension of L/Lc = 1.78 (corresponding to a tension of 172 pN). This molecule was repeatedly imaged until a photoinduced nick was generated, and the construct promptly became UC, resulting in a rapid force drop of ∼20 pN (Figure 3B). LD images collected immediately before and after formation of the photo-nick are shown in Figure 3C. Before nicking occurred, highly negative LD values were observed, and the magnitude of the LD was strongly correlated with the relative AT/GC content. However, in the fluorescence images taken immediately after nicking occurred, the LD increased to roughly zero (very similar to the values obtained from UC-DNA previously, Figure 2B) and no longer exhibited a strong correlation with AT-/GC-rich sequences. An additional example of sequence-dependent LD is shown in Figure S2 in the Supporting Information. Note that in this additional example, the orientation of the construct is flipped (i.e., the GC-rich portion is on the left side of the molecule).

3.3. Influence of Negative Supercoiling on Intercalator Tilting. We next investigated if, and to what degree, torsional stress influences the orientation of intercalators in TC-DNA. To this end, we generated underwound (negatively supercoiled) DNA, up to a maximum of σ = −0.7, using the recently developed technique of ODS. 31 As the supercoiling density increases toward σ = −0.7, an increasingly small fraction of P-DNA is required to offset the formation of S-DNA during overstretching; 9 beyond σ = −0.7, no P-DNA forms. 9,31 This raises an intriguing question: how does the presence or absence of P-DNA in negatively supercoiled DNA alter the observed LD? To answer this, we measured the LD over a range of supercoiling densities from σ = 0 to −0.7 at extensions beyond the OST (corresponding to forces of ∼150 pN). Surprisingly, no significant difference in intercalator tilting was observed up until at least σ = −0.6, even though only a small fraction of P-DNA remains at this high level of supercoiling (∼3% of the entire construct).
The force-extension curves for DNA with σ = −0.6 and −0.7 are shown in the left and middle plots of Figure 4A, respectively. The corresponding fluorescence polarization data are displayed in the upper two images of Figure 4B. At σ = −0.6, we obtained LD values between approximately −0.6 and −0.4 throughout the DNA molecule, largely consistent with the measurements obtained from nonsupercoiled TC-DNA under identical conditions (Figure 3D). As with nonsupercoiled TC-DNA, a notable sequence dependence of the LD was observed at σ = −0.6. The most negative values of LD were obtained in the most AT-rich regions of the molecule (as shown in Figure 4B, upper panel, by the left white arrow at the fractional DNA position of approximately 0.55). The LD increases (becomes less negative) in GC-rich regions of the same construct (as indicated in Figure 4B, upper panel, by the right white arrow at the fractional DNA position of approximately 0.65). Our LD measurements at a supercoiling density of σ = −0.6 thus yield results similar to those obtained for nonsupercoiled TC-DNA and indicate that even a small fraction of P-DNA causes the intercalated dyes to assume orientations nearly perpendicular to the DNA axis, even at forces beyond the OST. An additional example, showing a DNA molecule with a supercoiling density of σ = −0.25, is shown in Figure S3. This DNA molecule also exhibits LD values similar to those obtained for nonsupercoiled TC-DNA.
When the supercoiling density was increased to σ = −0.7 (where no P-DNA was expected), a sudden and drastic change in the measured LD beyond the OST was observed. Here, the LD values were close to 0 throughout the molecule and the LD showed little or no sequence dependence, similar to the results observed for UC-DNA. This similarity was highlighted when the molecule with σ = −0.7 became (accidentally) nicked (due to photodamage). The occurrence of a nick was determined based on the change in force (yielding a force-extension curve consistent with that of UC-DNA, as shown in the right panel of Figure 4A). Upon switching from supercoiled TC-DNA (with σ = −0.7) to UC-DNA, no significant change in LD beyond the OST was observed (see the lower two images in Figure 4B,C). This suggests that in these cases, highly inclined DNA base pairs exist uniformly throughout the entire construct, in contrast to both nonsupercoiled TC-DNA and supercoiled TC-DNA with supercoiling densities of at least σ = −0.6. An additional example of a DNA molecule with a supercoiling density of σ = −0.7 is shown in Figure S4. This DNA molecule also exhibits LD values indicative of pronounced dipole tilting.
DISCUSSION AND CONCLUSIONS
In this study, we have used fluorescence polarization microscopy of DNA intercalators to characterize the structural features of overstretched DNA arising from pulling geometry, torsional constraint, and negative supercoiling. We first reveal that for intercalated UC-DNA, the LD values at extensions beyond the OST (e.g., at forces > 100 pN) are close to zero, independent of whether the molecule is overstretched via the same or opposite strands. This result demonstrates that the pronounced (θ ∼ 54°) intercalator tilting previously identified beyond the OST for UC-DNA does not depend on the specific pulling geometry but is most likely a direct consequence of the presence of S-DNA. This may, at first, seem surprising, given that several publications have predicted that the pulling geometry can alter the DNA structure under tension. 29,30 Nevertheless, many of these predictions were based on molecular simulations that only considered very short dsDNA constructs (<50 bp). Thus, our results could indicate that on much longer constructs (≫10,000 bp), at least, any influence of the pulling geometry is no longer significant. We note that we cannot exclude the possibility that the pulling geometry might have an influence on DNA structural transitions during the OST; however, any effects would have to be sufficiently subtle that we cannot detect them with our approach. Moreover, we have demonstrated here that intercalated TC-DNA exhibits significantly lower (greater magnitude) LD values beyond the OST than UC-DNA. This indicates that the intercalated dyes remain oriented near the plane perpendicular to the DNA axis, even beyond the OST. We also established that at high forces (>130 pN), the LD values for TC-DNA are sequence-dependent: although the dyes are largely perpendicular to the DNA axis, they are more tilted in GC-rich regions than in AT-rich regions. Furthermore, we explored the influence of negative supercoiling on intercalator tilting, revealing an intriguing observation: under conditions where no P-DNA is present during overstretching (σ = −0.7), we observe essentially the same LD behavior as obtained for UC-DNA, namely, intercalator tilting at the end of the OST, with no detectable sequence dependence. Strikingly, however, only a slight decrease in the supercoiling density (from σ = −0.7 to −0.6) was sufficient to restore the LD features observed for TC-DNA, that is, a greatly reduced intercalator tilting that is sequence-dependent.
To explain the observed LD measurements on TC-DNA, we first recapitulate one of the conclusions from our earlier work. 24 There, it was noted that a significant depolarization of intercalator fluorescence was observed for UC-DNA only at the very end of the OST. That observation was explained by the assumption that (a) the observed depolarization (ascribed to intercalator tilting) is caused by the strongly inclined base pairs associated with the S-DNA conformation and (b) there is an energy penalty associated with S-DNA flanking intercalated B-DNA, as has been demonstrated previously. 14 Hence, intercalated DNA is forced to neighbor S-DNA only when B-DNA has disappeared, that is, at extensions beyond the end of the OST, causing the intercalated dyes to assume tilted orientations.
In the case of TC-DNA, there are three relevant DNA conformations to be considered during the OST: B-DNA, S-DNA, and P-DNA. While the existence of (overwound) P-DNA is necessary to allow for the formation of (underwound) S-DNA, P-DNA is only present in a small fraction of the molecule. In the absence of supercoiling, the ratio of S-DNA:P-DNA is roughly 4:1. 9,11 Since P-DNA is thought to exhibit a base-pair-melted structure, in which the bases are flipped outward, 10 we assume that P-DNA will not influence the dye tilt angle at adjacent intercalation sites and that any neighboring intercalated dyes will have orientations perpendicular to the DNA substrate. This hypothesis is supported by the observation that beyond the OST, intercalators adopt a less tilted orientation in AT-rich (compared with GC-rich) regions of TC-DNA. This is consistent with previous suggestions that P-DNA is preferentially formed in AT-rich sequences. 11 While the above-mentioned assumption that P-DNA will not cause neighboring intercalated dyes to tilt would qualitatively explain a relative decrease in LD (increase in magnitude), this assumption alone does not completely account for our observed results. If intercalated dyes were randomly distributed throughout an overstretched TC-DNA construct containing 20% P-DNA (perpendicular dyes) and 80% S-DNA (tilted dyes), this would lead to an estimated LD of approximately −0.12. This calculation is based on our previously developed mathematical model and experimentally measured tilt angles and "wobble" cones associated with intercalated UC-DNA extended within and beyond the OST (see the Supporting Information of ref 24 and refs 41−46). If we additionally assume that S-DNA must neighbor both sides of an intercalation site in order to cause significant dipole tilting, this would lead to a construct containing roughly 36% perpendicular dyes and 64% tilted dyes, yielding an estimated LD of only −0.21. In contrast, our measured LD values are significantly lower than this: at the highest forces and extensions for which data were obtained, we recorded LDs of approximately −0.35 and −0.6 for GC-rich DNA and AT-rich DNA, respectively.
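The population-weighted estimates quoted above follow from a simple two-state average. The sketch below is a simplification of the full model of ref 24, not a substitute for it; the endpoint values (LD_PERP ≈ −0.6 for dyes kept perpendicular, cf. the AT-rich TC-DNA limit, and LD_TILT ≈ 0 for tilted dyes, cf. the UC-DNA limit beyond the OST) are assumptions inferred from the limits reported in this work.

```python
# Two-state estimate of the ensemble LD for a mixed P-/S-DNA construct.
LD_PERP = -0.6  # assumed LD of an untilted (perpendicular) dye
LD_TILT = 0.0   # assumed LD of a tilted dye (near the magic angle)

def ensemble_ld(frac_perp):
    """Population-weighted LD for a given fraction of perpendicular dyes."""
    return frac_perp * LD_PERP + (1.0 - frac_perp) * LD_TILT

print(ensemble_ld(0.20))          # ~ -0.12: dyes distributed at random
print(ensemble_ld(1 - 0.8 ** 2))  # ~ -0.2: tilting requires S-DNA on both sides
```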
We therefore propose that the most plausible explanation for the greatly reduced tilting measured in TC-DNA beyond the OST is that intercalated dyes are predominantly flanked by P-DNA rather than by S-DNA. This hypothesis is further reinforced by the observed dependence of the LD on the supercoiling density. As the density of negative supercoiling is increased, the proportion of P-DNA in the molecules becomes increasingly small. However, even at σ = −0.6, where only a ∼3% total fraction of P-DNA is assumed to be present, 31 we still observed largely inhibited dye tilting beyond the OST. Only at a supercoiling density of σ = −0.7, that is, under conditions where P-DNA is expected to have completely disappeared, do the measured LD values approach the values obtained for UC-DNA. We propose that these observations can be explained by an assumption of a nonstochastic distribution of DNA states, in which intercalated DNA is strongly biased toward neighboring P-DNA rather than S-DNA. This is consistent with the large energy penalty that was found previously for intercalated DNA neighboring S-DNA, 14 whereas we expect no such penalty for P-DNA. This proposed model is summarized in Figure 5A,B.
We note that in all our experiments conducted using TC-DNA, we utilized higher dye concentrations and lower laser illumination powers than those used in our previous single-molecule YOYO-1 experiments. This was a concession made to reduce the probability of photo-nicking over imaging periods lasting on the order of ∼2−8 s. However, under these conditions, we were unable to spatially resolve single intercalated dye molecules. As a result, our reported LD measurements were local ensemble averages over many dyes, and we were unable to detect whether individual intercalators assumed a discrete set of distinct (tilted/untilted) orientations, as was found to be the case in ref 24. Despite the increased dye concentration, the dye coverage is below saturation and, at the forces of interest, low enough that we do not expect significant influence on the overstretching characteristics of DNA. In this work, a YOYO-1 concentration of ∼25−50 nM was used. In comparison, the experiments in ref 24 used YOYO-1 concentrations of ∼1 nM in order to give a coverage low enough to observe discrete single dye molecules (dye distance ≫ 1 μm, corresponding to a coverage of <1 dye molecule for every ∼3000 base pairs). Thus, despite the increased YOYO-1 concentration used in the current work, our dye coverage is still very low (expected ≪ 1 dye molecule for every 10 base pairs). Furthermore, we note that when the dye coverage is increased further, significant distortions in the force-extension curves are observed (see Figure S5 in the Supporting Information). Under the conditions used in our current work, we do observe distortions to the force-extension curve, but these occur at significantly greater relative extensions within the OST. Due to photo-nicking, we were unable to obtain (emission) polarization-resolved images of TC-DNA stretched along different directions within the image plane.
This limitation prevented us from quantitatively estimating probe tilt (θ) without also making assumptions about the extent of probe "wobble" (α). See ref 24 for a precise definition of these parameters. Nevertheless, the large-magnitude, negative LD values that we obtain for TC-DNA constructs are unambiguous, in that they clearly demonstrate that intercalated dyes must align closer to the plane perpendicular to the DNA axis than was observed in the case of UC-DNA stretched to similar forces above the OST.
To further substantiate our conclusions, we performed molecular dynamics simulations of bare (unintercalated) UC-DNA and TC-DNA stretched beyond the OST. These constructs were 200 base pairs in length, containing AT-rich and GC-rich subregions, and solvated by a continuum model (see the Supporting Information). Here, we sought to determine whether the structural motifs featuring inclined base pairs would form in either UC-DNA or TC-DNA upon stretching beyond the OST. In both of these simulated constructs, base pairing was present in GC-rich regions and exhibited inclinations of θ ∼ 40−50° (in comparison, our previous work 24 determined that intercalated dyes in UC-DNA are tilted at θ ∼ 54°). However, we caution that our simulations did not recover exact canonical forms of S-DNA and P-DNA. Specifically, base-paired regions in the simulated DNA constructs were overwound relative to the measured helicity of S-DNA (∼37.5 bp/turn), and non-base-paired regions were accordingly underwound relative to P-DNA (∼2.5 bp/turn). Additionally, more base-pair melting occurred (predominantly in AT-rich DNA) upon overstretching in both the simulated TC-DNA and UC-DNA than would be experimentally expected for our high-ionic-strength buffer (1 M NaCl). 7 It is not unexpected that we failed to observe purely canonical S- and P-DNA structures from simulations, since the stretching forces are higher in the simulations and the construct is shorter than in experiments. These constraints are an inevitable consequence of enabling computational tractability for atomic-level dynamic precision. Nevertheless, these simulations support our claim that S-DNA contains inclined base pairs and exists in both TC-DNA and UC-DNA. Building on these promising preliminary findings, we plan to develop further simulations in the future to include the effect of the solvent, loading rate, and intercalation.
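For reference, the helicities quoted above follow from the elementary relation between the mean per-base-pair twist and the helical repeat; the tiny sketch below simply re-expresses the canonical S-DNA and P-DNA values and is not part of the simulation protocol.

```python
def bp_per_turn(twist_deg_per_bp):
    """Helical repeat (bp/turn) from the mean per-base-pair twist in degrees."""
    return 360.0 / twist_deg_per_bp

print(bp_per_turn(9.6))    # -> 37.5 bp/turn, the quoted S-DNA helicity
print(bp_per_turn(144.0))  # -> 2.5 bp/turn, the quoted P-DNA helicity
```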
In conclusion, we have shown that combined fluorescence polarization microscopy and DNA force spectroscopy provide a powerful means of investigating the nanoscale structural features of overstretched DNA. Our results provide a tantalizing glimpse of the different conformations that form in TC-DNA under mechanical strain and support previous predictions that these structures are highly sequence-dependent. 11,14,47 It is our hope that the new data presented here motivate further inquiry and discussion. More broadly, our current work, alongside emerging techniques in polarization microscopy, 48,49 single-molecule orientation microscopy, 50−54 and spatio-angular image analysis, 55,56 provides a methodological blueprint to nondestructively investigate a wide variety of biological and material systems.
Supporting Information
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.jpcb.1c02708.

Figure 5. Proposed mechanisms by which P-DNA inhibits the tilting of intercalated dyes during overstretching. P-DNA is depicted in blue and S-DNA in green. Transition dipole moments associated with intercalated dyes are represented as red arrows. For simplicity, DNA bases are not drawn. (A) Due to the increased stability of GC relative to AT base pairing, P-DNA predominantly forms in AT-rich DNA. In AT-rich sections, there is sufficient P-DNA density such that intercalators are allowed to retain the energetically favored perpendicular orientation. Nevertheless, in GC-rich regions, S-DNA is intermittently "forced" to flank intercalated DNA, leading to sequence-dependent LD measurements and intercalated dipole tilting. (B) Negative supercoiling will reduce the total amount of P-DNA present in overstretched TC-DNA. However, due to the energetic penalty associated with S-DNA flanking intercalated DNA, any remaining P-DNA will preferentially flank intercalation sites. Once a construct approaches a sufficiently negative supercoiling density that P-DNA no longer forms during overstretching, the intercalated dipoles will be tilted (due to being flanked only by S-DNA) and the LD will approach the values recorded for UC-DNA. | 2021-07-27T06:23:23.982Z | 2021-07-26T00:00:00.000 | {
"year": 2021,
"sha1": "e88ed0d657f17377291b5410c1d392d200f4709d",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.jpcb.1c02708",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "18bab500842e7ad58b4acde2ed3118e833117c6f",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234075143 | pes2o/s2orc | v3-fos-license | Conceptual Basis of Cognitive Mimetics for Information Engineering
Intelligent information processing is topical in modern technology design and development. The fundamental idea was developed by Turing as he made the first conceptual models of information-processing computers. Though it has practically never been noticed, Turing's work was a model of how to mimic human intelligent information processes and generate technologies that can carry out intelligent tasks. This design method can be called cognitive mimetics, as it imitates human information processes to design technologies and their applications. One can use cognitive mimetics even in solving techno-ethical problems. This is why we think that cognitive mimetics is vital as a method to generate intelligent information processes.
Introduction
Alan Turing [1][2][3] invented a model of a mathematical machine. To analyze the problem of computability, he designed a conceptual, abstract mathematical machine that made computations of all kinds practical. As is well known, this M-machine was to become a model for a computer, and thus it has had an enormous influence on our society, everyday life, and culture. One of the most promising consequences of Turing's work is the development of modern robotics, AI, and autonomous technologies [4].
During the Second World War, Turing applied his thinking and created a physical computational machine that was able to decipher German military codes [5]. In doing so, he showed that mathematical thinking could be physically realizable, and consequently, Turing's theoretical thinking acquired a technological form. Such physical computing machines are naturally a necessary precondition for all further developments.
Turing [2] was very aware of the similarities between his machine and human thinking. Defiantly, in one of his papers, he asked if machines could think (like human beings). In a puzzling manner, Turing then called the very question of whether "machines can think" a meaningless one, only to predict that by the end of the century, public opinion would shift so that the idea of machines "thinking" would not seem contradictory to an educated person [2]. Turing's challenging idea that computers could think like people was accepted by Newell and Simon [6,7], and they made a very important addition to Turing's [1][2] way of thinking. They studied empirically how people such as chess players solve problems and modelled the outcomes of their experiments with computers. The contribution of Newell and Simon [6] was not only to think about how people process information but also how to objectively and empirically analyze these processes [6,8]. Their models were simultaneously implementations. Their approach has been called cognitive simulation [6,9]. However, the consequences of Turing's [1,2] brilliant insights did not end with cognitive simulation. From a practical standpoint, it has never been acknowledged that Turing's [1] thinking itself was an ideal model for a constructive design thinking process. Being himself a mathematician, Turing [1] empathically but introspectively analyzed how mathematicians think when they compute. Turing's machine was designed on the basis of his analysis [1]. This means that Turing studied human thinking, or in more modern terms, human information processing, and used his analysis in designing the M-machine. Turing did not only create an information processing machine based on his analysis; he analyzed an aspect of the human mind and designed a model of a mental process to be realized by the machine.
The idea that people design new technologies by imitating existing natural phenomena is not new. Clothes, for example, were presumably made to imitate animal furs. Airplanes were developed by the Wright brothers and other pioneers to imitate how birds fly. Finally, the forms of modern streamlined locomotives imitate birds such as the kingfisher. Such imitations of the structural properties of nature are normally called biomimetics.
However, the Turing machine is different from biomimetics. It does not imitate any biological structures. Rather, it imitates how mathematicians process information. Later models of the mind, whether theoretical or practical, have their origins in imitating how people process information. Cognitive information processing models describe how people process information. Thus, they are models of information processing minds.
What is the difference between a model of an information process and a system that realizes the information process? Turing demonstrated two things: first, the Turing machine shows the possibility of multiple realizability (of some mathematical thinking). This means that in some sense the process and the system can be decoupled. Second, it also shows the necessity of bringing the two together. Indeed, the Turing machine is an abstract description of both an information process as well as the operating principles of the physical system which implements it. From a mimetic perspective, the key idea is that a Turing machine can be realized with many different implementations, thus showing a proof of concept that the type of information processing Turing was outlining is multiply realizable. This is the ground concept of cognitive mimetics [10]. The power of the Turing machine lies in its generality. Why? Because the information process Turing modelled, mathematics, is the most abstract form of information, further reduced to an atomic binary form that can be realized on a simple physical principle, namely, on and off.
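To make the point about multiple realizability concrete, the following is a minimal, hypothetical Turing-machine interpreter; the unary-increment transition table is our own illustrative example, not one discussed by Turing. The same abstract table could be "run" by any physical substrate that distinguishes two states, which is exactly the decoupling of process and system described above.

```python
# A minimal Turing-machine interpreter. The table maps (state, read symbol)
# to (write symbol, head move, next state); the machine halts in state "halt".

def run(table, tape, state="q0", head=0, blank="_", halt="halt", max_steps=1000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Hypothetical example: append one '1' to a unary number.
table = {
    ("q0", "1"): ("1", "R", "q0"),    # scan right over the existing 1s
    ("q0", "_"): ("1", "R", "halt"),  # write one more 1, then halt
}
print(run(table, "111"))  # -> "1111"
```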
Some time ago, we began to call for the active attention of designers and suggested that they could use human information processes in designing intelligent technologies. This design paradigm could be called cognitive mimetics, to differentiate it from biomimetics. The idea of human information processing and its empirical analysis provides much support to mimetic design. Nevertheless, it is one thing to scientifically analyze and model in a simulative manner how people think, but it is another matter to use human thinking as a model for development. The former refers to showing how things are and the latter to how things should or could be. Here, we outline cognitive mimetics and its basic concepts for further development.
Information Processing
In the late fifties, psychologists began to reconsider their basic concepts and procedures. For half a century, they had wanted to consider people as causal objects. They studied how people behave, i.e., how earlier events and the properties of those events affected subsequent behavior. This research generated a good understanding of learning, but in time the research went past behaviorism. For example, Rosenblueth, Wiener, and Bigelow [11] differentiated behavioristic from functional analysis, the latter seeking to describe the intrinsic properties of the entity studied. Turing's [1,2] insights changed the game. Internal processes became important to understand, and the new interest in human information processing (i.e., cognition) opened revolutionary new visions to researchers. The psychology of human information processing, later called cognitive psychology, supplanted behaviorism [12,13]. In the early fifties, Turing [2] began to call attention to the similarities between machine and human information processes. At the same time, Claude Shannon worked on the amount of information that could be moved across the Atlantic by wire. Thus, he was able to raise the idea of information and information processing with Turing [2] in focus [14]. Interestingly, both called attention to the intimate connection between information and thinking, as they suggested that human and machine thinking could be studied using chess players and chess playing. The first ideas about people as information processing systems were created by these two important researchers. Later, these ideas redesigned all of psychology and human research.
People could be seen as information processing systems, i.e., computers. The immediate idea was to study the capacity of systems as Shannon had studied the capacity of a telephone cable [12][13][14]. Human beings could be seen as limited-capacity channels. This new way of thinking has since been very important in engineering and HTI psychology [15], but capacity is not a content-specific ground concept, and for this reason, it is not a very effective tool when human thinking is studied [16].
Herbert Simon and his collaborators took on the challenge of thinking machines in their research [6]. They focused on the analysis of human problem solving and the conceptual analysis of this process. The output of their research was partly theoretical and included different computational ways to model how people solve problems [6]. Several researchers have developed this approach further [9,17].

The core concept in the psychology of human information processing is information. This has been a difficult problem in research. Wiener [18], for example, defined it by arguing that information is neither matter nor energy. Information is thus not a physical concept. Or, more accurately, information is not expressible in physical concepts alone. However, information is a multisided phenomenon and can be considered from different perspectives.
As mentioned above, the most popular analysis of information is based on quantity. People have asked how much information, or how much new information, some systems or messages have. One can call these quantitative approaches to information, originating directly or indirectly with Shannon and Weaver's [14] analysis, 'capacity-based analysis' [16]. A capacity-based analysis is not the only possible approach. One could also focus on what the messages say, i.e., the information content. Alan Allport [19], for example, analyzed the mind as a collection of neuro-modular information processing systems which are content specific. Color vision would be a good example of such a system, as it is neurally hardwired and content specific. Thus, content-specific neural systems such as color processing generate some content aspects of human experience.
However, we focus here on an even stronger concept of information than the one committed to specific neural systems. We want to look at information as mental content [20]. In our thinking, mental content is seen as representational information content. All analysis and explanation in this approach is focused on the information contents of thought. The focus is not on schemas, or representations at the abstract level, but on contents as contents. By relying on such a strong concept of contents, it is possible to mimic human thinking.
Modeling the Mind -Methods for Mimetics
One cannot mimic human information processing and thinking without an idea of what happens in the minds of thinking people. There are several ways that designers can get an idea about what happens when people solve concrete intelligence-demanding tasks. The main goal of the analysis is a picture of what happens in the mind. The main conceptualization is a description of human mental representations and the manipulation processes which alter the representations [6]. The knowledge of mental representations and their transformation has been collected in several ways. Here we present philosophical and phenomenological analysis, thinking aloud, and other techniques, such as focus groups. All of the methods have their strengths and limitations, and for this reason it is wise to study them briefly.

Philosophical and Phenomenological Analysis

The human experience is a core phenomenon when we investigate human thinking. People observe their thinking and derive their design ideas on the grounds of information collected in this way. This method is traditional and perhaps the most widely used. Such giants of the philosophical analysis of human thinking as Plato, Aristotle, Descartes, Locke, and Kant have used this practice. The analysis of phenomena, or how people experience in their minds the world, physical or mental, is the foundation of analysis and argumentation.
During the last century, phenomenologists applied phenomenological analysis in their thinking. However, the development of the Turing machine, which we see as the first example of cognitive mimetics, was based on introspective phenomenological work on how mathematicians think. Consequently, one should not underestimate the importance of intuitive phenomenological work. As an additional example, formal logic began with how people experienced their thought processes and their information structures [21].
The problem with philosophical analysis lies in its subjective and difficult-to-verify nature. The experiences of one person may also be difficult to generalize. The latter problem can be solved to some degree, as in linguistics: when ideas such as the laws of logic have been generally accepted, they can be seen as argumentatively confirmed. Thus, accepting the weaknesses, one can apply phenomenological analysis in cognitive mimetics.
Thinking Aloud
An alternative to phenomenological analysis is thinking aloud. In this method, people are asked to report aloud everything that comes to mind while they solve problems and other thinking-related tasks. This method has been used quite frequently in the psychology of thinking [22,6]. The strength of thinking aloud is that researchers and designers can move from internalism and introspection to a more objective methodology.
In a thinking-aloud analysis, generalizability and confirmation processes are clearer than in phenomenological analysis. The results are also more objective than for introspective methods, as they provide researchers with an idea about subconscious processes involved in human information processing. The analysis and comparison of objective data make it possible to study, as clinical psychologists do, the vast information processing undertaken by the conscious and the subconscious mind [23].
On the other hand, it should be noted that in design, a single thinking-aloud protocol that shows, for example, a technically useful information processing method can be more valuable than some more common patterns of thinking. Thus, in cognitive mimetics, the effectiveness of a human information process is also measured against its applicability in the computational domain. Copying and pasting information processes is not likely to yield optimal results, and thus the designer also becomes implicated in how they can implement ideas gleaned through mimetic means.
The Document, Social Research Knowledge, and Focus Groups
Thinking-aloud protocols are not the only sources of information on human thinking. Surveys, interviews, and documentary analyses can also yield objective knowledge about what happens in human minds when people think. Outputs such as documents, laws, machines, and histories of real-life thought processes can thus provide a rich source of knowledge.
In practice, it may be difficult to test models of mental processes generated on the basis of documents. One cannot construct experimental designs that manipulate documentary information for purposes of testing. A way to look for further confirmation is to use focus groups. These are groups of experts who discuss and analyze the presented interpretations of given empirical data.
Using the Knowledge of Thinking in Mimicking
None of the presented methods is absolute. There is no way to confirm the given knowledge [24,16]. On the other hand, in mimetics, the analysis of human information processing is not the end but the beginning. The end of the process can be seen in the actual intelligent applications. Even Turing [2], for example, made a partly false analysis of human thinking, as he neglected the information contents of thought and concentrated only on the formal aspects of the mind [25]. Nevertheless, the Turing machine opened new ways for mankind to develop human life.
The ultimate goal of mimetics is to develop intelligent technological applications. Their validity is tested in practice. The criterion is how the new intelligent technologies can improve the quality of human life, and thus the validation should be derived not from the absolute truth of the analysis of thinking but from the function of the analysis in designing intelligent applications.
Mimetics are not built on the idea of precise similarity between model and product. Airplanes mimic bird wings; however, airplanes do not fly like birds. But airplanes and birds obey the same laws of aerodynamics. Similarly, mimicking human information processes and thinking does not mean that intelligent applications should be identical to human processes. For example, chess-playing computers do not process information similarly to people. Yet, they can follow the laws and regularities of information processing and thus free people from many practical tasks [6].
Ethical Thinking -A Practical Model
An important question in developing intelligent technologies is whether ethical machines are possible. Machines will take over many tasks in our lives, and many of these tasks have ethical dimensions. One may also ask whether machines can develop new ethical principles by mining large portions of data [26]. A major obstacle to ethical technologies appears to be the well-known ethical principle called Hume's guillotine, which states that values cannot be derived from facts. Thus, computing machines operating with facts could not be used to process ethical information.
Cognitive mimetics could be used to solve such principal issues. One can use philosophical analysis to study how one could construct computational models with ethical capacities. Such analysis presupposes solving Hume's aporia. Here, the argumentation can be based on the analysis of the basic conceptual properties of human ethical actions.
Ethics is concerned with actions. Ethics defines how people should act in order to act morally. The grounding concept is thus human action. Human actions are controlled by mental representations, which have both emotional and cognitive aspects. The importance of the two aspects of the mind was also noticed by Hume. However, he argued that facts could not dictate what should be, and thus values, which are closely linked with emotions, could not be derived from facts (i.e., one could not say from facts alone whether things are how they should be).
However, in humans, actions are controlled by the human mind. In action control, emotions are intimately linked with cognitive representations of situations and of the actions which have led to these situations. Emotional valence tells which situations are negative and which are positive, and it marks the actions leading to critical situations with positive and negative valences. In the human mind, the actions leading to positive situations are worth aspiring to, and situations with negative emotional valences are worth avoiding. For example, people avoid pain and pursue pleasure. Consequently, actions leading to positive situations are valuable, and actions leading to negative situations are avoided. Representations of the emotional consequences of an action can be termed elementary ethical experience [26].
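As a toy illustration of such an elementary ethical experience, consider action representations that carry both a cognitive description of the expected outcome and an emotional valence. All names, values, and the selection threshold below are hypothetical; the sketch only demonstrates how valence-marked action representations can guide action selection.

```python
# Action representations with a cognitive outcome and an emotional valence.
actions = {
    "tell_truth":     {"outcome": "trust maintained",    "valence": +0.8},
    "break_promise":  {"outcome": "relationship harmed", "valence": -0.9},
    "help_colleague": {"outcome": "task shared",         "valence": +0.5},
}

def admissible(actions, threshold=0.0):
    """Keep only actions whose remembered emotional consequence is positive."""
    return [name for name, rep in actions.items() if rep["valence"] > threshold]

print(admissible(actions))  # -> ['tell_truth', 'help_colleague']
```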
On the basis of their elementary ethical experiences, individuals form guiding rules for their actions. For example, people avoid situations such as close relations with people, work communities, or tasks which they find negative. The systems of elementary ethical rules that individuals adopt are then discussed with other people, up to parliamentary and global levels. Finally, the discourse of right and wrong becomes historical. The process of socially analyzing and unifying primitive ethical norms into socioethical principles can be called discourse ethics [27].
From a technological point of view, emotional and cognitive analyses of actions must not be separated, as they were in Hume's work. Actions always have emotional and cognitive aspects; similarly, human perception always encodes borders, figure and background, and movement simultaneously. There is no obstacle to incorporating both emotional and cognitive aspects into intelligent applications. This general notion should be extended to other acts of cognition as well. Many philosophical and practical mistakes occur when elements that are intimately encoded together are separated in the abstract and then reduced to only one dimension [28]. This may be persuasive, as the reduced dimension is indeed present in the cognition, such as facts in ethical thinking. However, it is important to seek an analysis of what is given [28] rather than starting from a constrained box, which is surely not always an easy task.
There are numerous ways to encode the emotional valence of people in models. Social media platforms use thumbs-up ratings, for example. Such evaluations can be attached to respective action models. Consequently, designers have primitive ethical assessments of actions at their disposal. A typical example of using emotional preferences is provided by recommender systems. Grahn et al. [29] showed that a prospective thinking-aloud method applied to driving instructors can indicate information requirements and corresponding safety-relevant actions in prototypical traffic situations to enhance automated driving technologies. In any case, by unifying the emotional and cognitive aspects of action information, one can create machines with ethical properties.
As another example of using cognitive mimetics to generate ideas, Karvonen [30] examined tacit knowledge, action ontologies, and problem restructuring in the context of AI ethics. Take problem restructuring, for example. Intrinsic machine ethics, where AI is capable of some ethical information processing, easily slips into the development of an ethical calculus. This is an important problem, but it also easily distracts from a more powerful ethical thinking tool, which is illustrated in human problem restructuring. In the famous trolley problem [31], the very idea is to narrow the ethical problem into a no-win situation and to examine the consequences. In reality, most people would prefer to seek a win-win situation or a no-lose situation by restructuring the problem or solving the dilemma in a new way. An AI capable of intrinsic ethical processing should first seek to restructure problems as they occur such that no ethical calculus is needed. The calculus is rather an alarm process which calls upon the need for restructuring. This is something humans excel in, and the mimetic approach would take this very ability, or the various empirically discovered restructurings, as cue and content for AI development. Here, again, the key is representations and mental content.
The Challenge of Information Processing Machines
In this paper, we have outlined the foundations of an approach we call cognitive mimetics. This method is intended to aid designers in working with intelligent applications. The practical example illustrates that mimetics can be used even when working with ethical information, a topic that will be important in future developments of an intelligent society.
Autonomous systems, such as autonomous machines, vehicles, and devices, require an understanding of intelligent actions and intelligent information processing. Half a century of working with human information processing has demonstrated that human information processing can be mimicked in developing new technologies. For designing mechanical tools such as spades or caterpillar excavators, it is sufficient to have a good comprehension of the biomechanical processes of animals and people. A spade can be designed by mimicking how the human hand operates. The principles by which digging operates are similar to the principles a spade must follow when used.
However, if new technologies operate in the world of information, the traditional principles of biomimetics are insufficient. They are of little help when, for example, one must design tools for creative processes. To aid such processes, it is essential to understand how people think and how they create new information.
New information processing artifacts need not operate like people, but they have to follow the laws and regularities of intelligent information processing developed by human cultural evolution. This is why the analysis of human information processes can be highly valuable for developing intelligent technologies and an intelligent society. Artifacts operating with information can be faster, they can have more capacity than people have, and they do not tire easily. However, they need to follow the laws of rational and intelligent information processing. They have to follow the principles of formal logic, to take an example, if they are to be of real use. Nevertheless, there are numerous other principles the human mind follows. Ethical information is a good example of such information. Analyzing and implementing the many as-yet-unknown information processes of the human mind in machines will be a challenge of our time. In this process, we think that cognitive mimetics has a role to play. | 2021-05-10T00:03:29.050Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "1faf4f74196d9dbbaad5e2ca1ffd3cf1c69067b9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1828/1/012004",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ecc7d5fda8dd801549377369059c88e722f664d4",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Philosophy"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
230108363 | pes2o/s2orc | v3-fos-license | Uremic toxin indoxyl sulfate promotes proinflammatory macrophage activation by regulation of β-catenin and YAP pathways
Evidence has shown that indoxyl sulfate (IS) can impair kidney and cardiac functions. Moreover, macrophage polarization plays important roles in chronic kidney disease and cardiovascular disease. IS acts as a nephrovascular toxin, but its effect on macrophage polarization during inflammation is still not fully elucidated. In this study, we aimed to investigate the effect of IS on macrophage polarization during lipopolysaccharide (LPS) challenge. THP-1 monocytes were incubated with phorbol 12-myristate-13-acetate (PMA) to differentiate into macrophages, and then incubated with LPS and IS for 24 h. ELISA was used to detect the levels of TNFα, IL-6, and IL-1β in THP-1-derived macrophages. Western blot assays were used to detect the levels of arginase1 and iNOS in THP-1-derived macrophages. The percentages of HLA-DR-positive cells (M1 macrophages) and CD206-positive cells (M2 macrophages) were detected by flow cytometry. IS markedly increased the production of the pro-inflammatory factors TNFα, IL-6, and IL-1β in LPS-stimulated THP-1-derived macrophages. In addition, IS induced M1 macrophage polarization in response to LPS, as evidenced by the increased expression of iNOS and the increased proportion of HLA-DR+ macrophages. Moreover, IS downregulated the level of β-catenin and upregulated the level of YAP in LPS-stimulated macrophages. Activating β-catenin signaling or inhibiting YAP signaling suppressed the IS-induced inflammatory response in LPS-stimulated macrophages by inhibiting M1 polarization. Thus, IS induced M1 macrophage polarization in LPS-stimulated macrophages via inhibiting β-catenin and activating YAP signaling. In addition, this study provided evidence that activation of β-catenin or inhibition of YAP could alleviate the IS-induced inflammatory response in LPS-stimulated macrophages. These findings may contribute to the understanding of the immune dysfunction observed in chronic kidney disease and cardiovascular disease.
Introduction
Chronic kidney disease (CKD) is defined as functional abnormalities of the kidney, or a decreased glomerular filtration rate (GFR < 60 mL/min/1.73 m²), lasting for more than 3 months (Chala et al. 2019; Shiba and Shimokawa 2011). In addition, cardiovascular disease (CVD) is a group of disorders of the heart or blood vessels and is a serious complication of CKD (Weiner 2009). CKD is a serious risk factor for CVD, indicating that kidney disease and CVD are closely interconnected (Yang et al. 2010). Kidney damage causes dysfunction of heart tissue, eventually leading to dysfunction of both organs (Liu et al. 2014).
Previous studies indicated that CKD is commonly associated with inflammation, and macrophages are the main contributors to the inflammatory response in CKD (Guiteras et al. 2016). In addition, macrophages are divided into two groups: M1 (classically activated) and M2 (alternatively activated) macrophages (Zhou et al. 2019b). M1 macrophages primarily exert a pro-inflammatory role, while M2 macrophages mainly exhibit an anti-inflammatory role (Mosser and Edwards 2008). Moreover, macrophage polarization plays a vital role in the progression of CKD. In the early stage, renal injury activates the inflammatory response pathway and promotes M1 macrophage polarization. At the later stage, however, a number of anti-inflammatory cytokines stimulate the production of M2 macrophages, which contribute to kidney repair.
Indoxyl sulfate (IS) is an important uremic solute, which is normally excreted in urine (Yang et al. 2015). Previous studies indicated that the level of IS is significantly increased in patients with CKD (Adijiang et al. 2011). The decreased GFR in patients with CKD results in reduced IS excretion, so that IS gradually accumulates in uremic serum (Niwa et al. 1988). In that situation, IS can accelerate the progression of CKD (Miyazaki et al. 1997). In addition, IS has been shown to be involved in the development of CVD (Watanabe et al. 2019). Tan et al. indicated that IS could induce cardiomyocyte toxicity (Tan et al. 2018). Evidence has shown that IS can affect kidney and cardiac functions (Lekawanvijit et al. 2010). However, the role of IS in macrophage polarization under LPS-induced inflammatory conditions remains unclear. Thus, in this study, we aimed to investigate the effect of IS on macrophage polarization during LPS challenge.
ELISA
THP-1 cells were exposed to PMA (160 nM) for 48 h, and then incubated in PMA-free medium for 24 h, followed by different concentrations of LPS (0, 10 or 100 μg/mL) and IS (0, 0.25, 0.5, 1 or 2 mM) for 24 or 48 h. The concentrations of TNFα, IL-6, and IL-1β in the supernatant of macrophages were determined by ELISA (ExCellBIO, Shanghai, China) according to the manufacturer's procedures.
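The paper only states that concentrations were read off according to the manufacturer's procedures; a common way to do this is to fit a four-parameter logistic (4PL) standard curve and invert it. The sketch below is therefore an assumption-laden illustration: the 4PL choice, the standard concentrations, the absorbance values, and the initial guesses are all hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL standard curve: a = upper asymptote, b = slope, c = EC50, d = lower."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard series (pg/mL) and measured absorbances.
std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500, 1000])
std_od = np.array([0.08, 0.15, 0.27, 0.48, 0.83, 1.31, 1.86, 2.31])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[2.5, 1.0, 200.0, 0.05], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL curve to read concentration from absorbance."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.95, *params))  # estimated pg/mL for a sample OD of 0.95
```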
Western blot
The protein concentration was detected using the BCA protein assay kit (Thermo Fisher Scientific). Protein samples (20 μg) were separated on 10% sodium dodecyl sulfate-polyacrylamide (SDS-PAGE) gels and then electro-transferred onto polyvinylidene fluoride (PVDF; Millipore, Billerica, MA, USA) membranes. After that, the membranes were blocked with 5% skim milk in TBST for 1 h at room temperature, and then incubated overnight at 4 °C with the following antibodies: arginase1 (1:1000, Abcam, Cambridge, MA, USA), iNOS (1:1000, Abcam), GAPDH (1:1000, Abcam). Later on, the membranes were incubated with the corresponding secondary antibodies (1:5000, Abcam) at room temperature for 2 h. An ECL detection kit (Thermo Fisher Scientific) was used to visualize the protein bands.
Flow cytometry
Cells were incubated with anti-HLA-DR (M1 macrophage subpopulation marker, Abcam) or anti-CD206 (M2 macrophage subpopulation marker, Abcam) for 20 min at 4 °C according to the manufacturer's procedures. After washing twice with PBS, cells were resuspended in fluorescence-activated cell sorting (FACS) buffer. Flow cytometric analysis was then performed using a FACSAria II instrument (BD Biosciences, Franklin Lakes, NJ, USA), and the data were analyzed using FACSDiva 6.1.1 software (BD).
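After gating, the percentage of marker-positive cells reduces to a thresholding operation on single-channel intensities. The following sketch is illustrative only: the threshold and the simulated intensities are hypothetical, since in practice gates are set against unstained or isotype controls in the acquisition software.

```python
import numpy as np

def percent_positive(intensities, threshold):
    """Percentage of events whose fluorescence exceeds the gate threshold."""
    intensities = np.asarray(intensities, dtype=float)
    return 100.0 * np.mean(intensities > threshold)

rng = np.random.default_rng(0)
hla_dr = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)  # fake intensities
print(percent_positive(hla_dr, threshold=20.0))           # % HLA-DR+ cells
```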
Immunofluorescence assay
Cells were fixed in 4% paraformaldehyde and then permeabilized with 0.1% Triton X-100 for 20 min. After that, the cells were blocked with 10% goat serum at room temperature for 1 h. Later on, the cells were incubated with the primary antibodies anti-β-catenin (1:1000, Abcam) and anti-Yes-associated protein (YAP; 1:1000, ProteinTECH Group Inc., Chicago, Illinois, USA) overnight at 4 °C. Subsequently, the specimens were stained with a Goat Anti-Rabbit IgG H&L secondary antibody (Cy3) (1:100, Boster Biological Technology Co. Ltd, Pleasanton, CA, USA) on the second day for 2 h at room temperature. Cell nuclei were counterstained with DAPI for 5 min, and the cells were then imaged with a laser scanning confocal microscope (LSM, Carl Zeiss).
Statistical analysis
GraphPad Prism 7 (GraphPad Software, Inc., La Jolla, CA, USA) was used for statistical analysis. Data are represented as mean ± standard deviation (SD). All experiments were repeated at least three times. Comparisons among multiple groups were made with one-way analysis of variance (ANOVA) followed by Tukey's test. P < 0.05 was accepted as a statistically significant difference.
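For reproducibility, the stated analysis (one-way ANOVA followed by Tukey's test) can be sketched as follows; the group labels and cytokine values are made up for illustration and do not reproduce the paper's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate measurements, e.g., TNFα (pg/mL) per group.
ctrl = np.array([110.0, 120.0, 115.0])      # untreated control
lps = np.array([640.0, 610.0, 655.0])       # LPS alone
lps_is = np.array([980.0, 1010.0, 955.0])   # LPS + IS

f, p = stats.f_oneway(ctrl, lps, lps_is)    # omnibus one-way ANOVA
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

values = np.concatenate([ctrl, lps, lps_is])
groups = ["ctrl"] * 3 + ["LPS"] * 3 + ["LPS+IS"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # post hoc comparisons
```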
IS enhanced LPS-induced inflammatory response in THP-1-derived macrophages
To investigate the role of IS in the inflammatory response of LPS-stimulated macrophages, ELISA was applied. LPS significantly induced the production of the pro-inflammatory cytokines TNFα, IL-6, and IL-1β in THP-1-derived macrophages (Fig. 1a-f). Meanwhile, IS (from 0.25 to 2 mM) markedly increased the production of TNFα, IL-6, and IL-1β in LPS-stimulated macrophages incubated for 24 h compared to macrophages treated with LPS alone (Fig. 1a-f). Similar effects were observed after 48 h of IS treatment (Fig. 1a-f). Therefore, in the following experiments, THP-1-derived macrophages were treated with IS for 24 h. These data indicated that IS could enhance the LPS-induced inflammatory response in THP-1-derived macrophages.
IS promoted M1 macrophage polarization in LPS-stimulated macrophages
To investigate the effect of IS on macrophage polarization under inflammatory conditions, western blot assays were used. As shown in Fig. 2a, b, 2 mM IS had no effect on the expression of the M2 macrophage biomarker arginase1 in LPS-stimulated macrophages. Additionally, 2 mM IS notably increased the level of the M1 macrophage biomarker iNOS in macrophages in the presence of LPS (10 or 100 μg/mL) (Fig. 2a, c). Meanwhile, no difference in the expression of iNOS was detected between the 10 μg/mL LPS alone and 100 μg/mL LPS alone treatment groups (Fig. 2a, c). Therefore, 10 μg/mL LPS was utilized in the following experiments.
Next, to further investigate the effect of IS on macrophage polarization in LPS-stimulated macrophages, flow cytometry was used to analyze the proportions of HLA-DR+ (M1 macrophage marker) and CD206+ (M2 macrophage marker) macrophages. As shown in Fig. 3a, b, 2 mM IS markedly increased the proportion of HLA-DR+ cells in LPS-stimulated macrophages compared to macrophages treated with LPS alone; however, 2 mM IS caused no major change in the proportion of CD206+ cells during LPS challenge (Fig. 3c, d). These data indicated that IS could promote M1 macrophage polarization in LPS-stimulated macrophages.

Fig. 1 IS enhanced LPS-induced inflammatory response in THP-1-derived macrophages. THP-1 cells were exposed to PMA (160 nM) for 48 h, and then incubated in PMA-free medium for 24 h, followed by different concentrations of IS (0, 0.25, 0.5, 1, or 2 mM) and LPS (10 or 100 μg/mL) for 24 h. ELISA was used to detect the levels of a TNFα, b IL-6, c IL-1β in macrophages. THP-1-derived macrophages were incubated with different concentrations of IS (0, 0.25, 0.5, 1, or 2 mM) and LPS (10 or 100 μg/mL) for 48 h. ELISA was used to detect the levels of d TNFα, e IL-6, f IL-1β in macrophages. ## P < 0.01 vs. 0 μg/mL LPS + 0 mM IS group. *P < 0.05, **P < 0.01 vs. 10 μg/mL LPS + 0 mM IS group. ^P < 0.05, ^^P < 0.01 vs. 100 μg/mL LPS + 0 mM IS group
IS inhibited β-catenin signaling in LPS-stimulated macrophages
It has been shown that pro-inflammatory cytokines released during inflammation can trigger several molecular signaling cascades, including Wnt/β-catenin signaling (Qu et al. 2018). Feng et al. indicated that activation of Wnt/β-catenin signaling could induce M2 macrophage polarization (Feng et al. 2018). To investigate whether IS affects the β-catenin signaling pathway in LPS-stimulated macrophages, immunofluorescence assays were performed. As shown in Fig. 4a, b, IS significantly decreased the nuclear protein level of β-catenin in LPS-stimulated macrophages compared to macrophages treated with LPS alone; however, this effect was reversed by the β-catenin signaling activator LiCl or the YAP signaling inhibitor verteporfin (Fig. 4a, b). All these results suggested that IS could inhibit β-catenin signaling in LPS-stimulated macrophages.
IS activated YAP signaling in LPS-stimulated macrophages
Evidence has shown that YAP is a core component of the Hippo pathway and can promote the inflammatory response in hepatocytes (Mooring et al. 2019). Zhou et al. indicated that YAP could aggravate inflammatory bowel disease via promoting M1 macrophage polarization (Zhou et al. 2019a). As indicated in Fig. 5a, b, IS obviously increased the nuclear protein level of YAP in LPS-stimulated macrophages; however, this phenomenon was reversed by verteporfin. These results indicated that IS could activate YAP signaling in LPS-stimulated macrophages.
IS enhanced inflammatory response in LPS-stimulated macrophages via regulating β-catenin and YAP signaling pathways
We next explored whether activation of β-catenin or inhibition of YAP could affect the IS-induced inflammatory response in LPS-stimulated macrophages. As shown in Fig. 6a, b, LiCl or verteporfin treatment significantly increased the expression of arginase1 and decreased the level of iNOS in LPS- and IS-cotreated macrophages. Moreover, IS increased the production of TNFα, IL-6, and IL-1β in LPS-stimulated macrophages; however, these effects were markedly reversed by LiCl or verteporfin treatment (Fig. 6c). These data illustrated that activating β-catenin signaling or inhibiting YAP signaling could suppress the IS-induced inflammatory response in LPS-stimulated macrophages by inhibiting M1 macrophage polarization.

Fig. 2 IS upregulated the level of the M1 macrophage marker in LPS-stimulated macrophages. a THP-1 cells were exposed to PMA (160 nM) for 48 h, and then incubated in PMA-free medium for 24 h, followed by different concentrations of IS (0, 0.25, 0.5, 1, or 2 mM) and LPS (10 or 100 μg/mL) for 24 h. Expression levels of arginase1 and iNOS in macrophages were detected by western blotting.
GAPDH was used as an internal control. b, c The relative expressions of arginase1 and iNOS in cells were normalized to GAPDH. ## P < 0.01 vs. 0 μg/mL LPS + 0 mM IS group. *P < 0.05, **P < 0.01 vs. 10 μg/mL LPS + 0 mM IS group. ^P < 0.05, ^^P < 0.01 vs. 100 μg/mL LPS + 0 mM IS group

Figure caption fragment: THP-1 cells were exposed to PMA (160 nM) for 48 h, and then incubated in PMA-free medium for 24 h, followed by treatment with 2 mM IS + 10 μg/mL LPS, plus LiCl (or verteporfin) for 24 h. a, b Relative fluorescence expression levels were quantified by YAP and DAPI staining in macrophages. **P < 0.01
Discussion
Evidence has shown that the concentration of the uremic toxin IS in patients is positively correlated with the severity of CKD and CVD (Vanholder et al. 2014; Watanabe et al. 2019). Zhao et al. indicated that IS could be used as a potential biomarker for the diagnosis and treatment of renal fibrosis (Zhao et al. 2016). In addition, IS exhibits a pro-inflammatory effect in macrophages in CKD and can function as an indicator of kidney function and a marker of inflammation status (Kaminski et al. 2019; Nakano et al. 2019). In this study, we found that IS could increase the production of TNFα, IL-6, and IL-1β in LPS-stimulated macrophages. Adesso et al. indicated that IS markedly increased the production of TNF-α and IL-6 in LPS-stimulated macrophages, which was consistent with our results (Adesso et al. 2013). These data suggested that IS could enhance the inflammatory response during LPS challenge.
Macrophages have been recognized as a key factor in the progression of renal fibrosis (Chen et al. 2017). Cao et al. indicated that the M1 macrophage is the predominant macrophage phenotype in the early stages of kidney disease, inducing renal inflammation by producing pro-inflammatory mediators (Cao et al. 2013). Meanwhile, macrophage polarization from M1 to M2 was observed in the late stage of kidney disease, and accumulated M2 macrophages promoted kidney fibrosis (Feng et al. 2018). In this study, we found that IS could promote M1 macrophage polarization in LPS-stimulated macrophages. Li et al. indicated that macrophages in CKD displayed enhanced M1 and impaired M2 polarization in response to LPS, which was consistent with our results (Li et al. 2015). Our findings illustrated that IS could enhance the inflammatory response in LPS-stimulated macrophages via promoting M1 macrophage polarization.

β-catenin signaling plays an important role in the development of CKD (Li et al. 2017). Activation of β-catenin aggravated kidney dysfunction and promoted renal inflammation (Li et al. 2017). In contrast, Manoharan et al. found that activation of the β-catenin pathway could suppress chronic inflammation in DCs (Manoharan et al. 2014). These reports suggested that β-catenin plays a dual role in the regulation of the inflammatory response. In this study, we found that the β-catenin activator LiCl significantly suppressed the IS-induced inflammatory response in LPS-stimulated macrophages, suggesting that β-catenin plays an anti-inflammatory role in the IS-induced inflammatory response. In addition, the β-catenin activator LiCl markedly upregulated the level of arginase1 and downregulated the level of iNOS in LPS- and IS-cotreated macrophages, indicating that β-catenin could promote M2 macrophage polarization but inhibit M1 macrophage polarization. Feng et al. indicated that activation of β-catenin signaling promotes kidney fibrosis through promoting M2 macrophage polarization, which was consistent with our results (Feng et al. 2018). These data indicated that IS could promote M1 macrophage polarization via downregulating β-catenin signaling. Meanwhile, activating β-catenin signaling could suppress the IS-induced inflammatory response in LPS-stimulated macrophages by inhibiting M1 polarization.

Fig. 6 IS enhanced inflammatory response in LPS-stimulated macrophages via regulating the β-catenin and YAP signaling pathways. THP-1 cells were exposed to PMA (160 nM) for 48 h, and then incubated in PMA-free medium for 24 h, followed by treatment with 2 mM IS + 10 μg/mL LPS, plus LiCl (or verteporfin) for 24 h. a Expression levels of arginase1 and iNOS in macrophages were detected by western blotting. GAPDH was used as an internal control. b The relative expressions of arginase1 and iNOS in cells were quantified via normalization to GAPDH. c ELISA was used to detect the levels of TNFα, IL-6, IL-1β in macrophages. **P < 0.01
YAP is a key transcription coactivator of the Hippo pathway (Murakami et al. 2017). A previous study indicated that YAP is related to inflammation-related diseases (Murakami et al. 2017). Zhou et al. found that YAP could aggravate inflammatory bowel disease by inhibiting M2 macrophage polarization and promoting M1 macrophage polarization (Zhou et al. 2019a). In this study, IS significantly increased the level of YAP in LPS-stimulated macrophages. In addition, verteporfin markedly decreased the expression of iNOS in IS-stimulated macrophages, indicating that YAP downregulation could inhibit M1 macrophage polarization. Moreover, downregulation of YAP alleviated the IS-induced inflammatory response in LPS-stimulated macrophages. These data indicate that IS could promote M1 macrophage polarization via upregulating YAP signaling. Meanwhile, inhibiting YAP signaling could suppress the IS-induced inflammatory response in LPS-stimulated macrophages by inhibiting M1 polarization.
Conclusion
In this study, the results indicated that IS could enhance the inflammatory response in LPS-stimulated macrophages. Meanwhile, IS could induce M1 macrophage polarization in LPS-stimulated macrophages via inhibiting β-catenin signaling and activating YAP signaling. This study provided evidence that activation of β-catenin or inhibition of YAP could alleviate the IS-induced inflammatory response in LPS-stimulated macrophages. This finding may contribute to the understanding of the immune dysfunction observed in chronic kidney disease and cardiovascular disease.
Compliance with ethical standards
Conflict of interest The authors declare no competing financial interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2021,
"sha1": "1dc547c778a2f491e77d698b1587ef8ecb513d57",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10735-020-09936-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a6bee126e4f024ba2064f3ab58f9a55b49b71a05",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Upregulated PPARG2 facilitates interaction with demethylated AKAP12 gene promoter and suppresses proliferation in prostate cancer
Prostate cancer (PCA) is one of the most common male genitourinary tumors. However, the molecular mechanisms involved in the occurrence and progression of PCA have not been fully clarified. The present study aimed to investigate the biological function and molecular mechanism of the nuclear receptor peroxisome proliferator-activated receptor gamma 2 (PPARG2) in PCA. Our results revealed that PPARG2 was downregulated in PCA, and overexpression of PPARG2 inhibited cell migration, colony formation and invasion, and induced cell cycle arrest of PCA cells in vitro. In addition, PPARG2 overexpression modulated the activation of the Akt signaling pathway, as well as inhibited tumor growth in vivo. Moreover, mechanistic analysis revealed that PPARG2 overexpression induced an increased expression level of miR-200b-3p, which targeted the 3′-UTR of the downstream targets DNMT3A/3B, facilitated interaction with the demethylated AKAP12 gene promoter, and suppressed cell proliferation in PCA. Our findings provide the first evidence for a novel PPARG2-AKAP12 axis-mediated epigenetic regulatory network. The study identified a molecular mechanism involving an epigenetic modification that could possibly be targeted as an antitumoral strategy against prostate cancer.
Introduction
Prostate cancer (PCA) is one of the most common male genitourinary tumors, with a high fatality rate in the western world [1]. Reports revealed that the United States is expected to spend more than US$8 billion annually on the screening and treatment of PCA [2]. The incidence of PCA in China is lower than in the United States; however, in the past two decades, the number of PCA patients has increased significantly due to environmental pollution, a westernized change in diet, and the aging of the population [3]. The prostate-specific antigen (PSA) is currently recognized as a useful tool for early screening of PCA [4]. However, screening based on serum PSA is still largely debated [5], and up to 22% of newly diagnosed patients with PCA are advanced or metastatic ones [6]. Once diagnosed, it is usually treated by active surveillance, prostatectomy, radiation therapy, hormone therapy, or chemotherapy [7]. So far, the complex molecular mechanisms involved in the occurrence and progression of PCA have not been fully clarified. We believe it is necessary to explore the pathological mechanism and look for new molecular therapeutic targets of PCA.
The nuclear receptor peroxisome proliferator-activated receptor-γ (PPARG) is a ligand-dependent transcription factor (TF) that plays a vital role in regulating the differentiation of adipocytes and the transcription of multiple genes [8-11]. The human PPARG gene was found to be located on the short arm of chromosome 3 (3p25) in 1995. PPARG exists in two protein isoforms, PPARG1 and PPARG2 [11]. Compared to PPARG1, PPARG2 contains 30 additional amino acids at the N terminus, and its ligand-independent activation activity is 5-10 times that of PPARG1 [12]. It has been reported that PPARG plays a role in a variety of chronic diseases including tumors [3,13], diabetes [14], inflammation [15], atherosclerosis [16], and so on. As far as PPARG2 is concerned, its role in PCA has not been clarified.
DNA methylation is a common type of epigenetic modification. The existence of CpG islands in the human genome is closely related to methylation status, and CpG islands are associated with a majority of the coding genes in the human genome [17]. DNA methylation plays an important role in the regulation of gene expression, cell proliferation, differentiation, and development, and is also closely related to human development and tumors [18-20]. When a gene promoter region is methylated, its transcription is often inactivated, whereas demethylation usually manifests as transcriptional activation.
In the present study, we showed that PPARG2, whose expression is downregulated in PCA, acted as a tumor suppressor, suppressing the malignancy of PCA cells in vitro and in vivo. Moreover, mechanistic analysis revealed that upregulated PPARG2 facilitated interaction with the demethylated A-Kinase anchoring protein 12 (AKAP12) gene promoter and suppressed cell proliferation in PCA. Our present results provide the first evidence for a novel PPARG2-AKAP12 axis-mediated epigenetic regulatory network. The study identified a molecular mechanism involving an epigenetic modification that could possibly be targeted as an antitumoral strategy against PCA.
Tissue samples
Eight human PCA samples and eight benign prostatic hyperplasia tissues were obtained from the Urology Department of the Affiliated Hospital of Nantong University between 2017 and 2018. The samples were quickly frozen in liquid nitrogen and stored at −80°C. The study was in accordance with the International Ethical Guidelines for Biomedical Research Involving Human Subjects. The protocol was approved by the Ethics Committee of the Affiliated Hospital of Nantong University. All subjects provided informed consent to participate in this study.
Construction of lentivirus vectors
The PPARG2 overexpression lentivirus vector GV358 containing the human PPARG2 wild-type full-length sequence (PPARG2) for gain-of-function and the lentivirus empty vector (EV) as a control were constructed by GeneChem Co., Ltd (Shanghai, China). In brief, the successful construction of the plasmid was first verified by restriction enzyme digestion, PCR identification, and sequencing. Then, the constructed plasmid and the lentivirus packaging plasmids pHelper 1.0 and pHelper 2.0 were co-transfected into cultured HEK 293T cells. Lentiviral particles were obtained by collecting the supernatant, using a kit for ultracentrifugation concentration and purification of lentiviral particles, combined with a fluorescent titer assay and enzyme-linked immunosorbent assay. The virus titer was determined as 1 × 10^8 transducing units per ml.
Gene expression profiling and miRNA-seq analysis
The gene expression profiles of PC3-PPARG2 cDNA (PPARG2) and PC3-EV cells were compared using the Agilent SurePrint G3 Human Gene Expression 8 × 60K Microarray (Agilent Technologies, Santa Clara, CA) (Gene Expression Omnibus database accession number GSE108309). High-throughput miRNA sequencing (miRNA-seq) between the PPARG2 and EV groups was conducted using the single-ended 50 bp sequencing mode of the Illumina Hiseq3000 sequencing platform (Genergy Bio-technology, Shanghai, China). The Sequence Read Archive (SRA) accession number was PRJNA719139. Differentially expressed genes (DEGs) between PPARG2 and EV were screened based on a t-test from the linear models for microarray analysis package in R (Version 3.3, http://www.bioconductor.org) [23]. Fold changes of gene expression were calculated, with thresholds of fold change > 1 and P-value < 0.05 for DEG selection.
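As a rough illustration of this screening step, the sketch below filters a differential-expression table by the stated thresholds (fold change > 1, P < 0.05). It is a minimal Python/pandas sketch, not the authors' R pipeline; the file name and column names (mean_pparg2, mean_ev, p_value) are hypothetical.

```python
import pandas as pd

# Hypothetical differential-expression output: one row per gene with group
# means (assumed log2 scale) and a p-value. Names are illustrative only.
deg = pd.read_csv("pparg2_vs_ev_limma_results.csv")

# On a log2 scale, the difference of group means is the log2 fold change.
deg["log2_fc"] = deg["mean_pparg2"] - deg["mean_ev"]
deg["fold_change"] = 2 ** deg["log2_fc"].abs()

# Thresholds stated in the paper: fold change > 1 and P-value < 0.05.
selected = deg[(deg["fold_change"] > 1) & (deg["p_value"] < 0.05)]

up = selected[selected["log2_fc"] > 0]    # upregulated in the PPARG2 group
down = selected[selected["log2_fc"] < 0]  # downregulated in the PPARG2 group
print(len(up), "upregulated;", len(down), "downregulated")
```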
GO and KEGG pathway enrichment
The Database for Annotation, Visualization, and Integrated Discovery (DAVID, Version 6.8, http://david.abcc.ncifcrf.gov/) provides a comprehensive set of functional annotation tools for investigators to understand the biological meaning behind a large list of genes [24]. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were performed using the DAVID online tool to analyze the DEGs at the functional level. P < 0.05 was considered statistically significant.
Real-time quantitative PCR
Real-time quantitative PCR (qPCR) was used to detect the expression levels of the mRNAs and miRNAs involved in this study. Total RNA was extracted using Trizol reagent (Invitrogen) according to the manufacturer's protocol. The primer sequences (AKAP12, PPARG2, miR-200b-3p, U6, and GAPDH) are listed in Supplementary Table S1.
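The paper does not spell out how relative expression was computed from the qPCR data; below is a minimal sketch of the common 2^-ΔΔCt calculation, assuming GAPDH (for mRNAs) or U6 (for miRNAs) as the internal reference. The Ct values shown are illustrative only.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method (an assumption; the method
    is not stated in the paper). ct_target/ct_ref are Ct values of the gene
    of interest and the internal reference (e.g., GAPDH or U6) in the treated
    sample; *_ctrl are the corresponding values in the control sample."""
    delta_ct_sample = ct_target - ct_ref
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** (-(delta_ct_sample - delta_ct_control))

# Illustrative Ct values (not from the paper); a result > 1 means upregulation
print(relative_expression(24.1, 18.0, 26.3, 18.1))  # ~4.3-fold increase
```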
Western blotting
Total proteins of the tissues and cells were extracted using a protein extraction kit (Beyotime Biotechnology, Shanghai, China) according to the manufacturer's instructions. The extracted protein concentrations were determined using a bicinchoninic acid kit (Sigma-Aldrich, St. Louis, USA). The protein samples (40 μg each) were separated using polyacrylamide gel electrophoresis (10% concentration) and transferred to a polyvinylidene fluoride membrane. The membrane was then blocked with 5% fat-free milk at room temperature for 1 h and incubated with primary antibody at 4°C overnight. The next day, the membrane was washed with Tris-buffered saline Tween 20 (TBST) four times (3 min each) and then incubated with the corresponding secondary antibody (1 : 2000; catalog number 7056 or 7054, Cell Signaling Technology, Inc.) for 1 h at room temperature. Again, the membrane was washed with TBST four times. The target bands were scanned and visualized using a chemiluminescence method with a Bio-Rad Gel Doc EZ imager (Life Science Research, CA, USA). ImageJ software (National Institutes of Health, MD) was applied to analyze the intensity of the target bands. The primary antibodies used were as follows: PPARG2 (
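The band quantification described above boils down to a double normalization: each target band is divided by its GAPDH loading control and then expressed relative to the control lane. A minimal sketch, with purely illustrative densitometry values:

```python
def relative_band_intensity(target, gapdh, target_ctrl, gapdh_ctrl):
    """Normalize a target band to GAPDH, then express it relative to the
    control lane (all inputs are ImageJ densitometry readouts)."""
    return (target / gapdh) / (target_ctrl / gapdh_ctrl)

# Illustrative arbitrary units, not measurements from the paper:
ratio = relative_band_intensity(target=1200, gapdh=3000,
                                target_ctrl=2400, gapdh_ctrl=2900)
print(round(ratio, 2))  # ~0.48, i.e., lower relative expression than control
```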
Cell proliferation, colony formation, migration, and invasion
For cell proliferation, cells (3 × 10^3 per well) were seeded into a 96-well plate and cultured for 0, 24, 48, 72, and 96 h. Then, 100 μl of medium containing 10 μl of Cell Counting Kit-8 (CCK-8) reagent (Beyotime Biotechnology, Shanghai, China) was added to each well for a further 2 h incubation at 37°C. The absorbance was then measured at 450 nm according to the manufacturer's instructions.
For colony formation, cells were seeded into a six-well plate and cultured for 2 weeks. Then the cells were fixed with 4% paraformaldehyde for half an hour and stained with crystal violet (Sigma-C3886) for 10 min, and the colonies comprising over 50 cells were counted.
For the wound-healing migration assay, in brief, the cell monolayers in each well of a six-well plate were scratched using a 100 µl pipette tip and photographed at 0 and 24 h, respectively.
For the cell invasion assay, diluted Matrigel (catalog number 356234; BD Biosciences) was added to each Transwell upper chamber. Then, cells (1 × 10^5) cultured in serum-free medium were added to the upper chambers, and complete medium was added to the lower chambers. After 36 h, the cells were fixed with 4% paraformaldehyde for half an hour and stained with 0.5% crystal violet for 10 min at room temperature. Cells were counted under a microscope (Leica DM2500, Leica Microsystems, Inc.) at ×200 magnification.
Flow cytometry
Cells (1 × 10^6 per well) were seeded into a six-well plate. After 24 h, cells were trypsinized, washed with pre-cooled phosphate-buffered saline (PBS), and fixed with 70% ethyl alcohol at 4°C overnight. Then, the cell suspension was incubated with propidium iodide (0.5 mg/ml) (Beyotime Biotech, Shanghai, China) for 15 min. DNA content was analyzed using a flow cytometer (BD Biosciences, San Jose, CA).
Tumor xenograft models
The experiments were approved by the Research Ethics Committee of Nantong University according to the Council on Animal Care Guidelines of Nantong University. A total of 12 BALB/c 5-week-old male nude mice were randomly divided into EV and PPARG2 groups (6 per group). EV- or PPARG2-transfected PC3 cells were injected subcutaneously into the flanks of the mice (1 × 10^6 cells/100 μL per flank). Tumor growth was monitored every 7 days, using a caliper to measure the tumor volumes. Thirty-five days later, all mice were killed, and the tumors were weighed and photographed. Then, tumor tissues were used for hematoxylin and eosin (H&E) staining and immunohistochemical analysis of Ki67 protein expression. The tumor volumes were calculated using the following formula: volume (V) = width (W)^2 × length (L)/2.
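Rendered in standard notation, with a purely illustrative worked example (the measurements below are not from the study):

```latex
V = \frac{W^{2} \times L}{2}
% e.g., a xenograft with W = 6\,\mathrm{mm} and L = 10\,\mathrm{mm} gives
% V = \frac{6^{2} \times 10}{2} = 180\,\mathrm{mm}^{3}
```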
Methylation-specific and bisulfite-sequencing PCR
For methylation-specific PCR (MSP), the methylation status of the CpG islands in the AKAP12 gene promoter region was initially screened using the MSP method in PCA cells. In brief, DNA samples modified with bisulfite were first extracted according to the manufacturer's instructions (Zymo Research, Orange, CA). Then, a total of 40 ng of bisulfite-modified DNA was used for PCR amplification. After that, 10 µl of the amplified product was analyzed using agarose gel electrophoresis. For interpretation of the MSP results, the methylated and unmethylated states are represented by the methylation (M) and unmethylation (U) bands, respectively. Occasionally, if a site is partially methylated, both bands may appear.
Bisulfite-sequencing PCR (BSP) is a sequencing method to detect the methylation status of CpG islands. Briefly, DNA samples were treated with bisulfite and amplified by PCR. Then, the PCR products were purified using a TIANgel Midi Purification Kit (Tiangen Biotech, Beijing, China). After that, the purified products were cloned into a pGEM-T easy vector (Promega, Madison, WI, USA). Nine colonies were randomly chosen for plasmid DNA extraction using a Promega Spin Mini kit (Promega) and then sequenced by an ABI 3130 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA).
For AKAP12 gene promoter analysis, the pGL3-Basic vector was selected for construction. Relative luciferase activity was detected using the Dual Luciferase Assay system (Promega). The phRL-TK vectors (Promega) were used as the internal reference.
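In a dual-luciferase readout like this one, the firefly signal of the promoter construct is normalized to the co-transfected phRL-TK Renilla signal and then typically expressed relative to a control transfection. A minimal sketch with illustrative luminometer readings (not data from the paper):

```python
def relative_luciferase(firefly, renilla, firefly_ctrl, renilla_ctrl):
    """Firefly signal normalized to the phRL-TK Renilla internal reference,
    expressed relative to the control transfection."""
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)

# Illustrative readings (arbitrary units, not from the paper):
activity = relative_luciferase(firefly=5500, renilla=20000,
                               firefly_ctrl=10000, renilla_ctrl=20000)
print(f"relative activity: {activity:.2f}")  # 0.55, i.e., a 45% reduction
```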
Chromatin immunoprecipitation
Chromatin immunoprecipitation (ChIP) assays were performed using the ChIP Enzymatic Chromatin IP Kit (Magnetic Beads, Cell Signaling, Danvers, MA) according to the manufacturer's instructions. Briefly, the cells were first cross-linked with formaldehyde at a final concentration of 1%. Then, they were washed with pre-cooled PBS, collected, and subjected to sonication. The solution complexes were immunoprecipitated using an anti-PPARG2 antibody (1 : 100; catalog number sc-166731, Santa Cruz, CA) or rabbit immunoglobulin G (IgG, negative control). After that, the immunoprecipitated complexes were collected using protein G-agarose beads. The precipitates were eluted from the beads, and the DNA-protein complexes were finally de-crosslinked. The DNA samples were recollected and used for PCR analysis. The PCR conditions were as follows: a holding stage at 95°C for 5 min (1 cycle); a cycling stage of 95°C for 30 s, 55°C for 30 s, and 72°C for 30 s (35 cycles); and a final extension at 72°C for 10 min (1 cycle). The detailed ChIP primer sequences are shown in Supplementary Table S1.
Data analysis
Statistical analysis was performed using the SPSS 17.0 statistical package (Chicago, IL, USA). Data were expressed as the mean ± SD. Student's t-test was used for comparisons of two groups. Differences among multiple groups were compared using one-way analysis of variance (ANOVA). When ANOVA detected significant differences, the data were then compared using Tukey's test as a post hoc test. Correlation coefficients (r) and P-values were calculated by Pearson's correlation analysis. P < 0.05 was considered statistically significant.
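For readers who want to reproduce this analysis pattern outside SPSS, a minimal Python sketch of the same three tests is given below. The group values are illustrative placeholders, and scipy.stats.tukey_hsd assumes SciPy >= 1.8.

```python
import numpy as np
from scipy import stats

# Illustrative measurements for three hypothetical groups (not real data):
ev = np.array([1.00, 0.95, 1.05, 0.98])
pparg2 = np.array([0.60, 0.55, 0.65, 0.58])
treated = np.array([0.80, 0.85, 0.78, 0.82])

# Two groups: Student's t-test
t_stat, p_two = stats.ttest_ind(ev, pparg2)

# Multiple groups: one-way ANOVA, then Tukey's HSD as the post hoc test
f_stat, p_anova = stats.f_oneway(ev, pparg2, treated)
tukey = stats.tukey_hsd(ev, pparg2, treated)

# Correlation between two variables: Pearson's r
r, p_corr = stats.pearsonr(ev, treated)

print(p_two < 0.05, p_anova < 0.05, round(r, 2))
print(tukey)  # pairwise comparisons with confidence intervals
```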
PPARG2 is downregulated in PCA cell lines and tissues
To determine the expression levels of PPARG2 in PCA cell lines and tissues, we first detected the relative mRNA expression levels in different PCA cell lines (LNCap, PC3, and DU145) and a normal prostate epithelial progenitor cell line, NHPrE1, by qRT-PCR. We found that PPARG2 was significantly downregulated in the three PCA cell lines compared with the normal cell line (Fig. 1A). Then, data extracted from The Cancer Genome Atlas (TCGA) showed that PPARG2 was downregulated in PCA tissue samples (496 cases) compared with normal tissues (50 cases) (Fig. 1B).
To examine the protein expression level of PPARG2 in clinical PCA specimens, eight PCA tissues (T) and eight prostate hyperplasia tissues (N) were collected and analyzed. The results revealed that PPARG2 protein expression was lower in the tumor group than in the normal group (Fig. 1C, D). In addition, to obtain further insight into the function of PPARG2, a gene set enrichment analysis [26] of TCGA profiles based on PPARG2 single-gene expression was performed. The results indicated that PPARG2 expression levels were negatively correlated with cell proliferation through effects on genes involved in cell cycle regulation (Fig. 1E, F). Taken together, the above results clearly reveal that PPARG2 is downregulated in PCA.
PPARG2 suppresses cell migration, colony formation, invasion, and induces cell cycle arrest of PCA cells in vitro
To study the effects of PPARG2 on biological behaviors, the PC3 and LNCaP cell lines were selected as representative PCA cells in the following studies. The PPARG2-overexpressing lentivirus vector GV358 containing the human PPARG2 wild-type full-length sequence (PPARG2) for gain-of-function and the lentivirus EV as a control were transfected into the PCA cells, respectively. Results from wound-healing assays indicated that overexpression of PPARG2 significantly suppressed cell migration in the PC3 and LNCaP cell lines (Fig. 2A, B). Overexpression of PPARG2 also significantly reduced the colony numbers of the PPARG2 group in the colony formation assay compared to those of the EV group (Fig. 2C, D). At the same time, Transwell cell invasion tests showed that the ability of cells in the PPARG2 group to migrate and penetrate Matrigel was significantly reduced compared with that of the EV group (Fig. 2E, F). These results indicate that PPARG2 may inhibit the proliferation and tumorigenicity of PCA cells.
To further evaluate the potential suppressive effects of PPARG2 on cell proliferation, the CCK-8 assay was performed at 1, 2, 3, and 4 days after Lv-PPARG2 and Lv-EV transfection. Compared with the EV group, a significant decrease in cell viability was detected in PC3 and LNCaP cells in the PPARG2 group (Fig. 3A). Then, EdU retention assays were performed to assess the inhibitory effect of PPARG2 on DNA replication. Following transfection with Lv-PPARG2, the percentage of EdU-positive cells was significantly decreased in PC3 and LNCaP cells compared to the EV group (Fig. 3B, C).
Moreover, to identify the mechanism through which PPARG2 overexpression inhibits the proliferation of PCA cells, we checked the cell cycle distribution in PC3 and LNCaP cells transfected with Lv-PPARG2 or Lv-EV using flow cytometry (Fig. 3D, E). The results showed a G1 cell cycle arrest: the cell populations in the G1 phase of the cell cycle increased significantly in the PPARG2 group compared with the EV-transfected controls in PC3 and LNCaP cells. Meanwhile, the cell populations in the S phase of the cell cycle were reduced compared to the EV controls in the two cell lines. The results indicated that G1-S cell cycle progression was inhibited by PPARG2 overexpression in PC3 and LNCaP cells. In addition, the protein expression levels of cyclinD1, cyclinB1, p21Cip1, p27Kip1, Bcl-2, p-AKT, and AKT were also analyzed between the two groups in PC3 and LNCaP cells (Fig. 3F-H). Western blotting analysis revealed that cyclinD1, Bcl-2, and p-AKT were significantly decreased in PPARG2-transfected cells. Conversely, the cell cycle inhibitors p21Cip1 and p27Kip1 were upregulated in PPARG2-transfected cells. Moreover, altering the expression of PPARG2 in the two groups had no effect on the protein expression of AKT and cyclinB1.
PPARG2 inhibits tumor growth in vivo
To further study the anticancer effects of PPARG2 on PCA progression in vivo, xenograft models were established via subcutaneous injection of PC3 cells treated with PPARG2 or EV into BALB/c nude mice. Compared with the EV group, the xenograft tumor volumes and weights in the PPARG2 group were all markedly decreased (Fig. 4A-C). Moreover, H&E and proliferating cell-associated antigen Ki67 staining were performed to study the proliferation level of the subcutaneous tumors. H&E staining showed that the nuclei in both groups were large and deeply stained, whereas Ki67 staining of the tumor xenografts indicated that the Ki67-positive rate was markedly decreased in the PPARG2 group, suggesting that PPARG2 can inhibit the tumorigenicity of PCA cells in vivo (Fig. 4D, E).
PPARG2-mediated induction of AKAP12 mRNA upregulation in vitro
To further explore the potential mechanisms by which PPARG2 suppresses cell proliferation in PCA, a gene expression microarray analysis was performed on PC3 cells overexpressing PPARG2 cDNA (PPARG2) versus EV. The clustering heat map of differentially expressed genes between the sample groups is shown in Supplementary Fig. S1A. Then, differentially expressed genes were screened to meet fold change > 1 and P-value < 0.05 between the two groups, of which 716 genes were upregulated and 822 genes were downregulated (Supplementary Fig. S1B). From the upregulated gene set, we selected the AKAP12 gene (Fig. 5A), whose expression levels were significantly lower in PCA tissues than in normal ones in the TCGA database (Fig. 5B). Moreover, correlation analysis from the Gene Expression Profiling Interactive Analysis (GEPIA) (Fig. 5C) and TCGA (Fig. 5D) databases indicated that the expression of the two genes was significantly positively correlated. In addition, PPARG2 also showed a positive correlation in mRNA expression with AKAP12 in most cancer and normal tissues or cell lines (Fig. 5E, F). Then, functional GO enrichment and KEGG pathway analyses were performed to investigate the functions and processes of the target gene set using the online software DAVID. The results revealed that the GO enrichment involved (i) biological process (BP) terms (Supplementary Fig. S2A), such as positive regulation of transcription, positive regulation of the apoptotic process, and DNA methylation; (ii) molecular function terms (Supplementary Fig. S2B), such as protein binding, TF binding, and poly(A) RNA binding; and (iii) cellular component terms (Supplementary Fig. S2C), such as nucleoplasm, nucleus, and cytoplasm. The KEGG pathway analysis suggested that the differentially expressed genes of PC3-PPARG2 cells were involved in the cell cycle, the PI3K-Akt signaling pathway, and transcriptional misregulation in cancer, etc. (Supplementary Fig. S2D).
Upregulated PPARG2 induces demethylation of the AKAP12 promoter region in vitro
From the above analysis results for biological process under the GO classification, we found that the target gene set was involved in DNA methylation. Then, the expression levels of the three DNA methyltransferases (DNMT1, DNMT3A, and DNMT3B) were extracted from the microarray data. The results indicated that the expression levels of DNMT3A and DNMT3B were both markedly downregulated in the PPARG2 group compared with those of the EV group (Supplementary Fig. S3B, C).

To further investigate the intrinsic mechanism of PPARG2-induced upregulation of AKAP12 and downregulation of DNMT3A/3B, we then conducted the following experiments to clarify it from the perspective of epigenetics. We first scanned the AKAP12 promoter for potential regions of DNA methylation and found obvious CpG islands in the promoter region near the transcription start site (TSS) (Fig. 6A). Then, the CpG island demethylation statuses were compared between the PPARG2 and EV groups using MSP. The demethylation levels of the AKAP12 promoter region were significantly increased in the PPARG2 group (Fig. 6B), which could be the result of downregulated DNMT3A/3B expression in the PPARG2 group. Moreover, to obtain further details of the demethylation status of specific CpG sites near the promoter region of AKAP12 between the PPARG2 and EV groups, a 294-bp PCR product (−315 to −22 bp) was analyzed by the BSP method following sodium bisulfite treatment (Fig. 6C). The results revealed lower methylation frequencies in the PPARG2 group than in the EV group (Fig. 6D, E).
In addition, to determine which CpG sites were responsible for the demethylation-related activation of the AKAP12 gene under the condition of PPARG2 upregulation in PCA cells, two AKAP12 gene promoter regions (PGL3-180 and PGL3-315) were constructed and treated with SssI methylase in vitro before being transfected into the PC3 cell line (Fig. 6F). In comparison with the untreated promoter construct, the SssI methylase-treated construct showed a significant suppression of promoter activity. Although there were no significant differences in promoter activity between the PGL3-180 and PGL3-315 regions with or without SssI methylase treatment, this indicated that the promoter region at −180 bp may play an important role in regulating AKAP12 gene transcription. Finally, relative AKAP12 mRNA expression levels were detected after PC3 cells were treated with different concentrations (0, 1, 2, and 3 μM) of the DNMT inhibitor 5-Aza-2′-deoxycytidine (5-Aza-dc). The results showed a significant, 5-Aza-dc dose-dependent upregulation of AKAP12 mRNA expression levels (Fig. 6G), indicating that the methylation status of the related region in the AKAP12 promoter was involved in the regulation of its expression.
Upregulation of miR-200b-3p regulates expression of the downstream target genes DNMT3A/3B in PPARG2-overexpressed PCA cells

To further explore the mechanism by which upregulated PPARG2 causes downregulation of DNMT3A/3B, one possibility we considered was that miRNAs may be involved in the regulatory function of these genes. Therefore, miRNA-seq was performed between the PPARG2 and EV groups of the PC3 cell line. The clustering heat map of differentially expressed miRNAs between the sample groups is shown in Supplementary Fig. S4A. Differentially expressed miRNAs were screened, and 18 miRNAs were upregulated and 44 miRNAs were downregulated (Supplementary Fig. S4B). From the 18 upregulated miRNAs, we identified miR-200b-3p as the study target; it was significantly upregulated in the PPARG2 group compared with the EV group in the miRNA-seq results (Fig. 7A), and this was then confirmed by experimental verification (Fig. 7B).
As the next step, the regulatory relationship between miR-200b-3p and DNMT3A/3B was confirmed via experimental verification. Bioinformatics revealed that the DNMT3A/3B 3′-regions each contained one conserved target site of miR-200b-3p (Fig. 7C). To evaluate this prediction, the wild-type 3′-UTR sequence of DNMT3A/3B (wild type) or its mutant sequence (Mut) (Fig. 7D, E) was subcloned into the pmiR luciferase reporter and then co-transfected with miR-CON or miR-200b-3p mimic into 293T cells. The results indicated that the relative luciferase activity of the pmiR wild type was significantly decreased by 45.0% (Fig. 7D) and 51.3% (Fig. 7E), respectively, when the miR-200b-3p mimic was co-transfected into the cells. However, no differences in the relative luciferase activity of pmiR-Mut were observed when co-transfected with miR-CON or miR-200b-3p mimic into PC3 cells.

Fig. 5 Positive correlation of AKAP12 mRNA expression with the PPARG2 gene in most cancer or normal tissues. A AKAP12 mRNA expression extracted from the microarray data (EV (con) = 3, PPARG2 (treat) = 3, P = 0.0039). B AKAP12 mRNA expression in PCA patients extracted from the TCGA database (normal = 50, tumor = 496, P < 0.001). C, D Correlation of AKAP12 with PPARG2 in expression from the GEPIA database (r = 0.51, P < 0.001) and the TCGA database (r = 0.58, P = 0). E, F Correlation of AKAP12 with PPARG2 in expression in cancer samples from The Cancer Genome Atlas (TCGA database) and normal tissues from Genotype Tissue Expression (GTEx database), respectively. It is noteworthy that every dot represents one cancer type (E), in which the red dot represents PCA tissues (r = 0.58, P < 0.001), or one tissue type (F), in which the red dot represents prostate tissues (r = 0.57, P < 0.001).
Moreover, to further verify the efficiency of the binding sites from the ChIP assay results, PGL3 plasmids containing serially truncated and mutated AKAP12 gene promoters were constructed and co-transfected with siControl or siPPARG2 into 293T cells, to determine the most effective functional binding site responsible for PPARG2-regulated AKAP12 gene promoter activation (Fig. 8C). Luciferase reporter activity detection showed a significant reduction of promoter activity in the cells co-transfected with siPPARG2 in both the PGL3-WT and PGL3-MT groups. At the same time, the reduction in promoter activity caused by sequence truncation was not very obvious. From the luciferase results of the serially mutated AKAP12 gene promoters (pGL3-MT1, pGL3-MT2, and pGL3-MT3), a more significant reduction in AKAP12 gene promoter activity was found in pGL3-MT2 (site 2, −160 to −141 bp) or pGL3-MT3 (two sites mutated simultaneously). The results suggested that PPARG2-binding site 2 may play a more important role in AKAP12 transcriptional activation.
Discussion
Peroxisome proliferator-activated receptors (PPARs) are involved in many diseases such as cancer [27,28], diabetes [29], and inflammation [30]. Studies have revealed that PPARG acts as a tumor suppressor and plays an important role in tumorigenesis [31,32]. Additional studies have presented PPARG as an oncogene in the development of tumors [33,34]. Regarding the dual role the PPARG gene plays in the occurrence and development of tumors, we believe it may be related to multiple factors involving tumor types and tumor progression in a specific environment. In this study, our research target PPARG2, one of the PPARG protein isoforms, was found to be downregulated in PCA and to inhibit cell migration, colony formation, and invasion, and to induce cell cycle arrest of PCA cells in vitro. Moreover, PPARG2 overexpression modulated the activation of the Akt signaling pathway, as well as inhibited tumor growth in vivo. This is our first report to elaborate the functional significance of PPARG2 expression in human PCA, and the experimental results indicated that PPARG2 functions as a tumor suppressor and inhibits PCA malignant progression. Thus, PPARG2 holds great prospects as a new therapeutic target for PCA.

Fig. 8 Enhanced binding of PPARG2 to the AKAP12 promoter region due to DNA demethylation when PPARG2 is upregulated. A Scheme for the binding sequence and sites of PPARG2 binding to the transcription factor-binding site near the AKAP12 gene promoter region. TSS, transcription start site. B ChIP assay revealing the direct binding of PPARG2 to the AKAP12 gene promoter in PC3 cells between the EV and PPARG2 groups. The enriched DNA fragments within the AKAP12 gene promoter using IgG and an anti-PPARG2 antibody were amplified by PCR. Total input was used as a positive control. *P < 0.05, **P < 0.01 vs. EV group. C Serially truncated and mutated AKAP12 gene promoter constructs were co-transfected with siPPARG2 or siControl into PC3 cells, and the relative luciferase activities were determined. *P < 0.05 vs. siControl group. D Proposed graphic model for the PPARG2-AKAP12 axis-mediated epigenetic regulation mechanism of proliferation, migration, and invasion in PCA cells.
As TFs, most members of the PPAR family can bind to specific sites of target gene promoter regions to regulate their expression [35]. Therefore, as the next step in exploring the mechanism, we first screened differentially expressed genes via gene expression microarray from the PPARG2 and EV groups in vitro. We found that AKAP12 mRNA was upregulated, mediated by PPARG2 overexpression. Reports have revealed that AKAP12 is a protein kinase C substrate and a potential tumor suppressor in many types of cancers including hepatocellular carcinoma (HCC) [36], colorectal cancer [37], and breast cancer [38]. Moreover, interestingly, the GO enrichment results indicated that PPARG2 overexpression was involved in the DNA methylation biological process. Accordingly, we then associated the AKAP12 gene with DNA methylation. We found that obvious CpG islands existed in its promoter region near the TSS and confirmed that upregulated PPARG2 induced demethylation of the AKAP12 promoter region via MSP and BSP experiments. Moreover, relative AKAP12 mRNA expression levels increased in a 5-Aza-dc dose-dependent manner, which indicated that the methylation status of the AKAP12 promoter was involved in the regulation of its expression. The above results indicated that a PPARG2-AKAP12 axis-mediated epigenetic regulatory network may play a role in PCA.
Enzymes that catalyze CpG methylation in DNA sequences include DNA methyltransferase 1 (DNMT1), DNMT3A, and DNMT3B. These DNA methyltransferases are essential for mammalian tissue development and homeostasis, and are also related to human developmental disorders and cancer, which supports the idea that DNA methylation plays a key role in the specification and maintenance of cell fate [39,40]. In this study, we found that the expression levels of DNMT3A and DNMT3B were both markedly downregulated in the PPARG2 group compared with those in the EV group. These data coincided with the upregulation of AKAP12 due to its promoter demethylation.
Next, we explored the potential mechanism of DNMT3A/3B downregulation induced by PPARG2 overexpression in PCA. It may occur in two ways: direct regulation or an indirect effect through a mediator. Regarding the possible indirect mechanism, miRNAs, which belong to the noncoding RNA family, could be suitable mediators. They mainly cause mRNA translational repression or degradation by targeting the 3′-UTR of mRNAs [41]. Thus, we screened out miR-200b-3p using miRNA-seq, and the DNMT3A/3B 3′-UTR regions contain conserved target sites of miR-200b-3p. Studies have shown that miR-200b-3p is downregulated in androgen-independent PCA [42], HCC [43], and glioma [44]. Our experiments confirmed the regulatory relationship between miR-200b-3p and DNMT3A/3B. The data indicated that miR-200b-3p participated indirectly in the PPARG2-AKAP12 axis-mediated epigenetic regulatory network in PCA.
In addition, the ChIP results demonstrated the enhanced binding of PPARG2 to the AKAP12 gene promoter in PPARG2-overexpressed PC3 cells, and the sequential deletion and mutation analyses revealed that binding site 2 (−160 to −141 bp) may play a more important role in AKAP12 gene promoter activity. Coincidentally, binding site 2 is exactly where the CpG island is located. It is obvious that the increased affinity of PPARG2 for the specific binding site is due to the CpG island demethylation. The data revealed that DNA demethylation enhanced the binding of PPARG2 to the AKAP12 gene promoter and participated in the regulation of AKAP12 gene expression.
In summary, the present work elucidates a PPARG2-AKAP12 axis-mediated epigenetic regulatory network in PCA (Fig. 8D). PPARG2 acts as a tumor suppressor in suppressing the malignancy of PCA cells in vitro and in vivo. As a transcription factor, PPARG2 overexpression induces an increased expression level of miR-200b-3p, which targets the 3′-UTR of the downstream targets DNMT3A/3B, facilitates interaction with the demethylated AKAP12 gene promoter, and suppresses cell proliferation in PCA via the AKT signaling pathway. Of course, we recognize that upregulated PPARG2 potentially regulates a cluster of miRNAs, while one miRNA can target many genes and be involved in cross-talk pathways within the complex and elaborate epigenetic regulatory network in PCA. Therefore, we need to make more efforts to better explore the sophisticated mechanisms of PPARG2 regulation in the progression of PCA. In brief, our findings provide the first evidence for a novel PPARG2-AKAP12 axis-mediated epigenetic regulatory network. The study identified a molecular mechanism involving an epigenetic modification that could possibly be targeted as an antitumoral strategy against PCA.
Ethics statement
The protocol was approved by the Ethics Committee of the Affiliated Hospital of Nantong University. All patients provided informed consent to participate in this study.
Consent for publication
All the authors agree to publish this paper.
"year": 2021,
"sha1": "855b8be28ed550808719a4cc88ddda0fdc8c7133",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41419-021-03820-7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d28240094fc06b1f328b6ba69aa6154ef274ffcb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Diaspora as a Communicating Actor in Public Diplomacy Compared to Public Government Diplomacy in the Case of Kosovo

Resul Sinani
During the past two decades Kosovo has often been a subject of the news and has taken airtime in the most powerful international media, through which it became known to world public opinion. Its highest presence was during the 1998-1999 war, and especially during the period of the NATO bombing campaign against Serbia, when almost everything was broadcast live. This picture of Kosovo still prevails in world public opinion and identifies Kosovo with the war, thus as a post-conflict society. Ever since the proclamation of independence on the 17th of February 2008, its governments have tried to change and improve this image through public diplomacy, but despite these attempts the tone of the stories in the world media, which, as we know, is more attracted to negative than to positive news, especially when it comes to small countries like Kosovo, has not changed. This article deals with the attempts of the public diplomacy of Kosovo and the news it produced during 2014-2015 for the international media and the world public, compared to the news and the image that were projected by members of the Kosovar diaspora to world opinion. It argues that the potential of the diaspora is many times greater, and the image it projects completely positive, when compared to the potential and the image that come from within Kosovo, which are largely negative.
Introduction
Kosovo is the youngest country on the European continent. Its image is still connected with the war of 1999, and ever since then the news that comes out of Kosovo has spoken of various incidents, on the basis of which the image of Kosovo remains that of a post-conflict country. This image is the complete opposite of reality, because, for all the difficulties of building a stable and democratic society, Kosovo has institutions which have proven they can function normally and in a stable manner.
The image of Kosovo is something that the country's institutions have been preoccupied with, but it is also a preoccupation of its citizens, who would like this image to be positive. For this reason the Ministry of Foreign Affairs of the Republic of Kosovo, in cooperation with several organizations from civil society, including the artistic community, has taken initiatives and organized various activities in different countries of the EU, but also in other parts of the world, in order to present a reality which is completely opposite to the image that the citizens of those countries have about Kosovo, an image they continue to connect with the war of 1999. Unfortunately, small countries like Kosovo become a part of the world media only for bad news, and such news has not been lacking during the post-war years; it continues to sustain and cultivate the image of Kosovo as an unstable country. Among the bad news that comes out of Kosovo and grabs the attention of the international media are stories that deal with corruption within the country, political instability, violent protests, inter-ethnic incidents, limited freedom of expression, etc., which seldom leave room for positive and mainly individual news stories like the victories of Majlinda Kelmendi in European and World Judo championships.
Kosovar citizens are happy with any positive news about their country, which is mainly connected with the success of their compatriots in the western world. The singer Rita Ora, the football players representing Switzerland like Xherdan Shaqiri, Valon Behrami and Granit Xhaka, or the boxer Haxhi Krasniqi are valued as the pride of the nation and positive images of Kosovo in the world. Their successes are met with a lot of emotion by the Kosovars, who keep them as a reference point whenever they leave the country.
This article, which treats the image of Kosovo in the world, uses analysis as its methodology, since it analyzes and places against each other the images which come out of Kosovo and those that come from the Kosovar diaspora. It shows that the images coming out of Kosovo are mainly negative, while those coming from the Kosovars in the diaspora are mainly positive (despite the fact that there are enough negative cases coming from the diaspora, like incidents that have to do with family violence or drug trafficking). It also shows that there is no coordination or cooperation between the Kosovar institutions and the successful Kosovars of the diaspora, cooperation which, if coordinated, would have a large impact on improving the image of Kosovo in the world. Therefore it is recommended that this potential of the diaspora be put to the best use for such a purpose. The objective of the article is to prove that the institutions of a small country like Kosovo, whatever they may be, have difficulties in achieving success in public diplomacy and improving the country's image. At the same time, people with origins in these places, concretely Kosovo, can be extremely popular in the international arena; therefore, if there is a cooperation agreement with these personalities, then public diplomacy will be much more efficient, thus directly influencing the improvement of the country's image.
Literature Review
One description of public diplomacy, given by the Edward R. Murrow Center for Public Diplomacy in one of its brochures, says that "Public diplomacy... deals with the influence of public attitudes on the forming and execution of foreign policies. It includes dimensions of international relations beyond international diplomacy: government cultivation of public opinion in other countries, cooperation of various groups and private interests of one country with those of another; reporting of foreign affairs and their influence on politics; communicating through those whose work is communication, as well as between diplomats and foreign correspondents, as well as inter-cultural processes" [1]. This captures the field of public diplomacy, but the number of actors is much wider. It is understandable that the concept and the actors of public diplomacy have changed and adapted to the circumstances of the times over the years. Today, with the new media, especially those on the internet, all of this is done differently. Therefore, as Bruce Gregory says, today by public diplomacy we understand the instrument used by states, societies of states and some inter-governmental and non-governmental actors in order to understand cultures, stances and attitudes, to create and manage relationships, and to influence opinions and mobilize actions in order to forward their interests and values (Gregory, 2011). In the meantime, the new media have given great opportunities to smaller countries which wish to develop public diplomacy. Nicholas Cull gives several factors which create what is being called the new public diplomacy. The bipolar clash has ended and the world is more open. Public diplomacy is no longer a monopoly of states, and actors in communication have been added, like the regional organizations, the EU, NGOs, interest groups and corporations (including the international broadcasters), as well as non-governmental actors. The end of the cold war happened at the same time as the appearance of new communication technologies allowing global communication in real time. There is a new direction in public diplomacy: the heads of states speak over their broadcast transmissions and reach the entire world, and people have more information to choose from than ever before. Public diplomacy now cannot be seen as something addressed to only one audience; it is addressed to individuals one by one, and the government has to make the process easier. There are new challenges: the increased speed of communication makes a message travel from one side of the world to the other while people are asleep, and this erodes the barrier between inside and outside information. There is also a new terminology of public diplomacy for prestige and international image; today we speak of the ever-present terminology of Nye, soft power and branding, where values and culture have an integral role (Cull, 2012, p. XII). The concept of soft power was created and developed by the Harvard professor Joseph S. Nye Jr. (1990). He introduced it in 1990, came back to develop it further in a 2002 study, and gave it its final shape in 2004. According to Nye, the soft power of a country comes from the country's culture, its values and its foreign policies (Nye, 2004, p. 11). Today, public diplomacy, as Jan Melissen (2005) states, is the key instrument of soft power.
Public Diplomacy Versus Kosovo's Image in the World
Kosovar diplomacy is young, inexperienced and still not consolidated enough. Today Kosovo has 23 embassies [2] all over the world, and they are mainly small embassies without enough funds to develop diplomatic activities. Therefore in 2011 the Ministry of Foreign Affairs of the Republic of Kosovo signed an agreement with the British Embassy in Prishtina and the British Council in Kosovo which had the purpose of advancing the public diplomacy of Kosovo, called "Communicating with Europe through public diplomacy" [3]. A similar agreement was signed with the Kosovar Fund for Open Society (KFOS) [4]. Thanks to these agreements, during the past years hundreds of activities were developed, like visits to different EU countries where meetings with politicians, diplomats and members of civil society or the media were organized. Also, exhibitions and media editorials were organized, and support was given to the cultural activities of Kosovar artists in the countries that have not recognized Kosovar independence. In fact this was the main goal of the public diplomacy of Kosovo, as stated by Minister Enver Hoxhaj when the agreement was signed: "this project is attempting to include advocating, informational activities as well as creating powerful channels with explanatory character, known personalities of public life, civil society of the media and intellectuals, who will have a chance to help the Ministry of Foreign Affairs and the Kosovar Government in representing, presenting and promoting our national interest in the concrete field of raising the number of recognitions" [5]. But, beyond this attempt to raise the number of recognitions, Kosovo needs public diplomacy in order to improve its image in the world. The bad image of Kosovo in the world is the main preoccupation and has been a point of discussion for years within Kosovo. In his editorial entitled "The politics of images", the editor-in-chief of Koha Ditore, Agron Bajrami, concluding that Kosovo has a negative image in the world, says: "This reputation was created by dozens of critical reports which have often very precisely identified the acute problems of our country, from the failed governing policies and electoral manipulations up to criminal actions and economic theft.
These reports have built a heavy image of Kosovo, where the main negative characters are mainly the same people who, even today, lead the country. Reports made by NATO, the EU, the OSCE, the UN and many NGOs have built an image of an insecure, intolerant, un-democratic Kosovo which has a corrupted political class and a system which is slowly but certainly transforming itself into an economic oligarchy." [6] So, despite the public diplomacy activities carried out in the past few years, the image of Kosovo, even today, is not the one that is desired. And the main culprits for this are within the country, that is, the things that happen inside Kosovo. In a statement given to Radio Free Europe, Shpend Kursani from the Kosovar Institute for Research and Policy Development (KIPRED) estimates that since the proclamation of independence the Serbian diplomacy, or counter-diplomacy, has been very harsh towards Kosovo, but also quite effective in damaging its image. But he adds that Kosovars themselves have also done damage to that image.
"As well as Serbia damaging Kosovar image with its diplomacy, we, within the country have not lacked far behind Serbia since we continuously managed to prove that we are not capable of fighting organized crime -a thing for which Kosovo is best known in the international arena.The steps we have taken have mainly been cosmetic steps-made to improve the image instead of substantial steps -which would have actually improved the said image"7.
This image, especially during 2014 and the start of 2015, did not manage to improve because of developments within the country; it continued being dark and negative, which can be illustrated with the main political developments, which were generally negative. An attempt to improve the branding and image of Kosovo in the world was made through the Kosovo Young Europeans campaign [8]. This campaign was based on a publicity video which was broadcast on international media like CNN and the BBC. But it seems that the influence of the video was not what was expected when set against the news these media were giving from Kosovo, which was generally negative. Furthermore, it seems that the effect of the video was bigger inside Kosovo than abroad.
The Images of the Main Developments during 2014-2015 that Were Sent Out of Kosovo
The developments and events that have attracted the attention of the international public and media have generally been negative, with several exceptions which have given positive images, mainly from the field of sports. In this chapter we take into consideration the main events which had the most impact. Among the many events and developments which marked the period 2014-2015 as negative for Kosovo's image were: a) the over-stretched process of forming institutions after the elections of the 8th of June 2014, b) Kosovars' participation in ISIS, c) the violent protests of the 24th and 27th of January 2015, d) the exodus of Kosovars towards Western European countries. As positive events for Kosovo's image in the world we chose: e) the general elections of the 8th of June 2014, f) the friendly football matches of the Kosovar national team, g) the medal victories of the judoka Majlinda Kelmendi.
The over-stretched process of forming institutions
The general elections for the Kosovar parliament were held on the 8th of June 2014. They were assessed as regular and the results were accepted by all the political parties. The elections were won by the Democratic Party of Kosovo for the third time in a row. But two days after the elections three opposition parties, the Democratic League of Kosovo, the Alliance for the Future of Kosovo and NISMA for Kosovo, signed an agreement on a post-election coalition which had the purpose of making it impossible for the Democratic Party of Kosovo (PDK), which had led the government through the two previous terms, to form a government, stating that "There will not be a Thaçi 3 government" [9]. After this, a long political and legal battle went on, in which the opinion of the Constitutional Court was required twice: the first time when the president asked for explanations about the nomination of the PM, and the second when PDK believed that Isa Mustafa had been chosen irregularly as the head of the parliament.
"We believe that there have been constitutional as well as procedural breaches and we ask for this procedure to be suspended and start a new seance.We have asked for this procedure to be suspended and the Constitutional court can decide on this" -declared the PDK parliament member Xhavit Haliti, in whose name the case was sent to the Constitutional court.
After about five months of this "political cramp", during which the government functioned only to fulfill its basic functions, the US ambassador in Kosovo sent harsh criticism towards the political leaders who could not agree on forming a government. On the 6th of November, during a debate organized by the movement FOL, ambassador Tracey Ann Jacobson, asked about the role of the international factor and the possibility of its involvement in advising Kosovar politicians on finding a solution for the knot which had paralyzed the institutions of the country, did not hesitate to tell them bluntly what had to be said: "There is a message that I have given to all the leaders and I can even say it in Albanian: Do not fuck it up!" [10]. Shortly after this statement, on the 19th of November 2014, in the office of the president and in the presence of the American ambassador, an agreement on forming a coalition between PDK and LDK was reached.
"Both political leaders have confirmed their devotion to quick foreclosure of the institution building process on the basis of the constitution, the constitutional court and the constitutional principles that preserve the multi-ethnic character of the Republic of Kosovo, fulfill the international obligations of law and order and strengthening the euro-Atlantic road of our country"11 -states the media declaration of president Jahjaga.
This agreement paved the way for creating the new institutions after the 8th of June elections. On the 9th of December, Isa Mustafa was chosen as the Prime Minister of the Kosovar government. The drawn-out process of forming institutions after the general elections was criticized and created a negative image for the political class and the country in general.
Kosovars' participation in ISIS
The social media images of the Kosovar Lavdrim Muhaxhiri decapitating a hostage have terrified the Kosovar as well as the international public. The media report that about 185 Kosovars have been recruited and have joined the ranks of ISIS fighters [12]. This has been seen as a negative image for the country. Reports on the deeds of this ISIS member have been transmitted on most of the world media. In the meantime, the Kosovar Police [13], during one operation, arrested 40 individuals in 60 raids, accused of taking part in the fighting in Syria and Iraq. These individuals, according to the police and the State Prosecutor, were suspected of joining organizations like ISIS and Al Nusra. According to the police, the arrested were suspected of committing a crime against the constitutional order of the Republic of Kosovo, an act which is punishable by the Penal Code. In the meantime, even the Minister of Foreign Affairs, Hashim Thaçi, spoke of the danger that ISIS presents not only for Kosovo but for the entire region: "The EU should not delay the process of admissions, expanding and integrating Kosovo and the region in the EU and NATO. Every delay in this process is dangerous. It opens up the way for interference and increased influence of Russia in the Balkans, in the political, economic and military aspects. This delay has also opened up the way for ISIS to penetrate the region and expand its influence in the Balkans," said the Kosovar foreign minister [14].

The violent protests of the 24th and the 27th of January 2015

An attempt to visit a church, made by a group of Serbs who had come from Serbia to Gjakova for the Orthodox Christmas, was stopped by a group from the society "The Calls of Mothers", a society of the families whose members have been missing since the war of 1999. On this occasion, the minister for returns and communities of the Kosovar government, who is an ethnic Serb, Aleksandar Jablanovic, offended the mothers of the missing. He stated: "This is a great and holy celebration. The savages who have stopped the Serbs from coming to their torched homes will not serve anyone. We will ask for explanations from the interior minister on how this happened." [15]
"The right to gather peacefully and join compatriots voluntarily to demonstrate is essential for a functioning democracy.But violence during protests is illegal and unacceptable.For this reason we condemn the acts of violence by a group of protesters of the principles of the freedom of media"16 -says in the joint statement of the QUINT countries that was published by the German Embassy in Prishtina.
The protest of the 27th of January turned out to be even more violent: about 170 individuals were hurt, 107 of whom were police officers, 53 protesters and 10 ordinary citizens. These protests were seen as projecting a bad image of Kosovo abroad. The Minister of Internal Affairs, Skënder Hyseni, the day after the protests, said among other things: "Everyone should know that in Kosovo there is law and order for everyone, based on the Kosovar constitution, which is the spine of a country, in which the police is the key instrument for the security of the country. The citizens are free to protest, but no one should set the agenda for the police or the institutions of law and order. Images like these have badly damaged the Kosovar image in the world." [17]
The exodus of Kosovars to the EU countries
The end of 2014 and especially the beginning of 2015 was marked by a wave of illegal emigration towards the EU countries. Thousands of people sought to escape from Kosovo by illegally crossing the border between Serbia and Hungary. There are no exact numbers for those that have left, but estimates run to 100 thousand, and there are even higher estimates of the numbers of people who have ended up in Austria, France, mainly Germany, and other EU countries through Hungary. This phenomenon has troubled the public opinion inside as well as outside of Kosovo, while the reasons for the migration are thought to be many. An analysis entitled "Which Kosovar citizens are more prone to migrate to the EU countries?", made and published by the Group for Political and Judicial Studies of Kosovo, finds that "the individuals who have no knowledge about the immigration policies of the EU are mostly interested in migrating". According to this research, corruption, the existence of organized crime, the non-functioning of the legal state and the high levels of unemployment are the main reasons why Kosovars want to migrate abroad.
"All of these factors make the migration in Kosovo to not be seen only as a phenomenon connected with the desire of Kosovar citizens to get outside of its borders and settle in the countries of the Schengen zone, or other EU countries but it is a phenomenon which is mainly generated by the factors which are inside the country." 18-stated Dren Doli, of
this institute
The tempo of emigration has slowed down recently, but the images portraying the columns of immigrants crossing the border illegally in low temperatures and harsh conditions, which were transmitted through the main European media, gave a very negative image of Kosovo.
Kosovar general elections of the 8th of June
Among the positive developments which influenced the improvement of Kosovo's image in the world is the process of the general elections of the 8th of June. After the accusations of massive voting manipulation made about the 2010 elections, these elections were seen as an important step for the country. The general elections were marked as orderly and in accordance with international standards. The EU mission for observing the Kosovar elections evaluated them positively. The mission's report states: "The premature elections of the 8th of June 2014 for the Kosovar parliament with 120 seats are the second general elections held after the proclamation of independence in 2008, and the first general elections held in Kosovo according to Kosovar laws, after the Brussels agreement on normalizing the relations between Belgrade and Prishtina. The elections were transparent and they have consolidated the progress made during the local elections of 2013. The legal framework which has regulated the previous local and general elections has remained in power for these elections too. In accordance with the constitution, the system of the reserved seats for political subjects representing the minority communities, which had been used as a temporary measure during the two previous Parliaments, has been replaced with the system of granting 20 guaranteed seats. Despite a number of attempts to reform the Kosovar election system, the flaws identified during the previous elections have not been fixed. Despite that, the legislation offers enough bases for democratic elections in accordance with the international instruments to which Kosovo has pledged in its constitution." [19]
Friendly matches of the Kosovar football representation
On the 5th of March 2014, for the first time in history, the Kosovar representation in football played a friendly game. In the stadium Adem Jashari in Mitrovica, the Kosovar representation met Haiti. This became possible after FIFA green-lighted Kosovo for friendly matches. The game drew the interest of the world sporting opinion, and this successful organization was valued highly by FIFA and various sporting media. In its report on the game, the BBC, besides the game itself, also wrote about the history of the past years in Kosovo, stating that the best players with Kosovar origins are representing Switzerland [20]. This historic game ended with a 0-0 score and the Kosovars were happy with the fact that now the door for international competitions was opened. In the meantime the Kosovar representation played three more matches, with
Turkey, Senegal and Oman, and each of them attracted the attention of the international media, managing to transmit a positive image of Kosovo abroad.
Judoka Majlinda Kelmendi, double world champion
During 2013 the judoka Majlinda Kelmendi reached a historic success for the country. At the world championship for seniors, held in Rio de Janeiro in Brazil, she took first place and was awarded the gold medal in the category of up to 53 kg. In the finals she beat the Brazilian judoka, Erika Miranda. "I dedicate this success to Kosovo, my coach and my family, as well as all Albanians," declared Majlinda Kelmendi for "Koha Ditore" shortly after she was pronounced champion of the world. She has presented to the world the image of a Kosovar sportswoman who, despite the harsh conditions and isolation, manages to reach the highest levels and achieve historic successes in the international arena [21]. Majlinda repeated the same success in 2014, by defending the title during a championship held in Russia. On this occasion, during the welcoming ceremony that was organized in Prishtina, the Minister of Sports, Memli Krasniqi, said that she is an image and an inspiration for Kosovo: "Majlinda is the face of our country and she is a model to all of us, showing how we can achieve things, how to work for success and a society of values, success and pride" [22], stated Krasniqi.
The image of Majlinda Kelmendi might be the best that Kosovo has managed to send out to the world since the end of the war and the proclamation of independence, not only in sports but in general.
The image of the Kosovar Diaspora during 2014-2015
Kosovo has a large Diaspora in the countries of Western Europe and the USA. The history of the Kosovar Diaspora is long, but it became much more massive during the 1990s, at the time of the Milosevic regime, when Kosovar Albanians were fired from their jobs and massively persecuted by the Serbian police state. Although there is no exact number of the Kosovars living in the Diaspora, it is generally believed that there are over 700,000 [23], while some believe that the Kosovar Diaspora numbers over 1 million. During the 1990s the image of this Diaspora was negative, because Kosovars were connected to criminal activities of various types, like theft and drug trafficking. But during the last decade this image has changed fundamentally, because a new generation of Kosovars is growing up out there, who were born in other countries or who left while very young and have managed to integrate within their respective societies. There are countless examples of former Kosovar immigrants who have managed to build strong businesses, have finished or are attending prestigious universities, and have become integrated in the cultural and political life of the societies of the countries they live in. Many have managed to become an example, and an image, not just for the community they represent but also for their country of origin, Kosovo, thus becoming the pride of all Albanians.
There are many examples of successful Albanians in various countries of the world, but we will speak only of those whose origins are in Kosovo. In this article we will present only two of them, those who have reached the height of their success during the past two years: a) footballers with Kosovar origins who play for the Swiss national team, and b) the singer of Kosovar origin, Rita Ora.
Footballers with Kosovar origins who play for the Swiss national team
There are many football players in the Swiss national team who have Albanian origins. During the world championship in Brazil in 2014 [24], out of 23 players, 5 were of Albanian and 3 of Kosovar origin. This does not include other players with Albanian origins who played during qualifying and friendly matches. This fact drew the international media to the Kosovar footballers, especially since there were other Albanian players in other teams, like Adnan Januzaj, who plays for Belgium, or Shkodran Mustafi with Germany, which became the world champion. For the needs of this article we will take into consideration only one of these players, who was also one of the stars of the world championship, Xherdan Shaqiri.
Xherdan Shaqiri is a star of the Swiss national football team. He was born in Kosovo in 1991, and when he was one year old his parents immigrated to Switzerland. During his career he has achieved many successes. With the team Basel of Switzerland he won the Swiss championship three times, in the seasons 2009-10, 2010-11 and 2011-12, while with Bayern Munich he won 8 trophies, among which we will mention two Bundesliga titles, 2012-13 and 2013-14, and above all the UEFA Champions League 2012-13. He is now playing for an Italian club. What is of interest to us is the fact that Shaqiri, through his success, has managed to get his homeland and his nation mentioned. He has drawn three flags on his sporting shoes, the flags of Kosovo, Albania and Switzerland, and this did not go unnoticed. Also remembered is his celebration after the victory in the UEFA Champions League, when he celebrated with two flags, Swiss and Kosovar. There are also many interviews and reports in various media which tell his story, and they all mention Kosovo. During his last interview on the American network CNN [25], Xherdan Shaqiri, besides his successes, spoke also of the possibility that in the future he might play for the Kosovar national team. In this long article there is emphasis on his great successes, among which is the World Championship in Brazil together with two other Kosovars in the Swiss representation, Valon Behrami and Granit Xhaka, as well as the petition that he, together with these two players, signed asking FIFA to officially recognize the Kosovo representation and allow it to take part in international competitions. The appearances of Xherdan Shaqiri and other Kosovar footballers in the international media, as well as the countless articles about them, present an excellent example of public diplomacy through sporting activities while greatly contributing to improving the image of Kosovo and Kosovars in the world.
Rita Ora
There is no doubt that the Kosovar who is most famous and most appreciated in the world for her art is Rita Ora. She quickly reached the top of world show business, and if we concentrate on just the last two years she has achieved successes never before reached by a Kosovar Albanian. A wonderful tale of her journey is told in the latest article published about her by the Mirror of London, the city to which Rita Ora moved as a refugee together with her family. The article, entitled "Rita Ora's journey from refugee to the Voice to Fifty Shades of Grey" [26], speaks of her journey from a refugee, when she was 1 year old, to a world superstar. Some quotes from the article: "...To be Rita Ora is exhausting. During the past 12 months alone Rita has traveled 130 thousand miles around the globe, being busy with TV shows, photo sessions, recordings and even red carpet appearances... ...Saturday night marks the final night of 'The Voice' in Britain, and ever since September, when Rita sat in the judge's seat, which had previously been the seat of Kylie Minogue, the 24 year old has been the undisputed star of the show. The arrival of Rita in 'The Voice UK' marked a powerful return of the audiences, taking over even the most watched programs in Britain. Last week's edition had 6.3 million viewers, 400 thousand more than last year's. All of this The Mirror attributes to what it calls 'the Rita effect'..." [27] Further on in this article on the life of this talented artist with Albanian roots, the British paper Mirror mentions the fact that after the launch of her debut album "No 1", she toured Asia to launch her clothing line and performed in Moscow, Istanbul, LA and New York. She has also sung for the American President Barack Obama and during the Oscars and Grammy ceremonies, where she was also nominated. "Among the offers for prestigious brands, TV roles etc. which Rita receives every week, she has the privilege to decide which to take and which to refuse," notes the Mirror. "The hardships of Kosovo are 2 decades and 1,600 miles away," writes the Mirror, "but metaphorically speaking this is the galactic space that has to be traveled to reach Rita Ora," ends the article of this daily. This and countless other articles that speak of Rita Ora always mention her Albanian origins and her birthplace, Kosovo. There is no doubt that Rita Ora is the best and most valued image that has ever come out of Kosovo to the world.
Discussion
As this article argues and demonstrates, Kosovo, the youngest country in Europe, which is still not a member of the UN and many other international political and non-political bodies, 15 years after the end of the 1999 war and 7 years after the proclamation of independence in 2008, still does not have a positive image on the international scene. Government institutions are aware of the bad image, and they have undertaken some modest attempts to improve it through media campaigns and other forms of public diplomacy. But despite these attempts, despite the media and cultural marketing, the images which follow political actions and which appear in the international media are mainly negative. In contrast to these images stand the examples of Kosovars living in Western European countries, as is the case with the sportsmen and artists who have become famous not only in the countries in which they live but all over the world. The artist Rita Ora is a typical and the most representative example, but there are many others too. Comparing the government's attempts at public diplomacy with the public diplomacy being done by these personalities without even making an effort, the attempts of the government seem modest and inefficient. At the same time, if there were a coordinated effort with these personalities, their potential would be many times greater and, as the effect of the cooperation, the image of Kosovo in the world would be much improved. One way to achieve this would be organizing media events which would gather all of these personalities, or declaring them honorary ambassadors of Kosovo in the countries in which they live. But the initiative for something like this would have to come from the government institutions, which, with all their attempts, can never create a Rita Ora or a Xherdan Shaqiri!
Conclusion
The public diplomacy of the Kosovar institutions, with all of its desire and commitment, will have difficulties in achieving the desired success in improving the country's image in the world. Among other things, the events that happen within the country, and which continue to attract the attention of the world media, are mainly the negative ones, which keep showing Kosovo as a place with fragile, unstable institutions and as a country which has just come out of a war, although more than 15 years have passed since it ended. A great potential which could help in improving the image of Kosovo in the world is the Diaspora in the countries of Western Europe and the USA. Up to now, individual cases of successful Kosovars have managed to achieve more on this issue than the entire Kosovar public diplomacy. Therefore, cooperation and coordination of activities with them should have a much greater impact and would influence the image of Kosovo and Kosovars in the world in a positive manner. | 2019-01-02T17:32:12.391Z | 2015-07-05T00:00:00.000 | {
"year": 2015,
"sha1": "378ad9d3b47e125abc116a91d3b36e8ecb3f68ca",
"oa_license": "CCBYNC",
"oa_url": "https://www.richtmann.org/journal/index.php/ajis/article/download/7167/6869",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "378ad9d3b47e125abc116a91d3b36e8ecb3f68ca",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
259132613 | pes2o/s2orc | v3-fos-license | Combined novel homozygous variants in both SGPL1 and STAT1 presenting with severe combined immune deficiency: case report and literature review
Background Sphingosine phosphate lyase insufficiency syndrome (SPLIS) is associated with biallelic variants in SGPL1 and comprises a multisystemic disease characterized by steroid-resistant nephrotic syndrome, primary adrenal insufficiency, neurological problems, skin abnormalities and immunodeficiency in described cases. Signal transducer and activator of transcription 1 (STAT1) plays an important role in orchestrating an appropriate immune response through the JAK-STAT pathway. Biallelic STAT1 loss of function (LOF) variants lead to STAT1 deficiency, a severe immunodeficiency phenotype with increased frequency of infections and poor outcome if untreated. Case presentation We report novel homozygous SGPL1 and STAT1 variants in a newborn of Gambian ethnicity with clinical features of SPLIS and severe combined immunodeficiency. The patient presented early in life with nephrotic syndrome, severe respiratory infection requiring ventilation, ichthyosis, and hearing loss, with T-cell lymphopenia. The combination of these two conditions led to severe combined immunodeficiency with inability to clear respiratory tract infections of viral, fungal, and bacterial nature, as well as severe nephrotic syndrome. The child sadly died at 6 weeks of age despite targeted treatments. Conclusion We report the finding of two novel, homozygous variants in SGPL1 and STAT1 in a patient with a severe clinical phenotype and fatal outcome early in life. This case highlights the importance of completing the primary immunodeficiency genetic panel in full to avoid missing a second diagnosis in other patients presenting with a similarly severe clinical phenotype early in life. For SPLIS no curative treatment is available and more research is needed to investigate different treatment modalities. Hematopoietic stem cell transplantation (HSCT) shows promising results in patients with autosomal recessive STAT1 deficiency. For this patient's family, identification of the dual diagnosis has important implications for future family planning. In addition, future siblings with the familial STAT1 variant can be offered curative treatment with HSCT.
Keywords: case report, ichthyosis, lymphopenia, steroid-resistant nephrotic syndrome (SRNS), SGPL1 gene mutation, STAT1
Introduction
Sphingosine phosphate lyase insufficiency syndrome (SPLIS) is a multisystemic condition associated with biallelic pathogenic variants in SGPL1 (OMIM no. 603729) (1). This recently described syndrome (2017) comprises a broad phenotype, featuring in most patients steroid-resistant nephrotic syndrome with endocrine, dermatological, and neurological system involvement (2)(3)(4). In addition, lymphopenia has been previously described (5), but its significance in the clinical presentation of patients still needs to be further elucidated (6).
Sphingosine-1-phosphate lyase (SPL) is an intracellular enzyme involved in the final step of sphingolipid degradation (7). Specifically, SPL catalyses the cleavage of sphingosine-1-phosphate (S1P), resulting in the formation of other essential biomolecules (5) needed to mediate biological activities such as cell migration, survival, and proliferation (7). Furthermore, S1P signaling regulates T cell egress from the thymus and peripheral lymphoid organs (8), which could explain the lymphopenia in some patients.
Signal transducer and activator of transcription 1 (STAT1) is an important member of the STAT family, playing a role in cell growth, differentiation, proliferation, metabolism, and apoptosis through the JAK-STAT pathway (9). Biallelic STAT1 loss of function (LOF) variants result in a severe phenotype of immune deficiency with increased susceptibility of patients to bacterial, viral, and mycobacterial disease (OMIM no. 600555) (10).
Herein, we describe a newborn with novel homozygous variants in both SGPL1 and STAT1 presenting with severe combined immune deficiency and additional clinical features.
Case description
Clinical presentation
A female baby was born at 39 + 6 weeks gestation to healthy, Gambian parents. Parents were not known to be related but originated from the same tribe. Parents had three other healthy children. There was a previous history of one spontaneous first trimester miscarriage.
During the prenatal period, the patient was noted on ultrasound to have a right-sided pleural effusion that was treated with pleural-amniotic intrauterine shunt insertion at 36 weeks gestation. The shunt spontaneously came out on the second day of life. The patient was born by spontaneous vaginal delivery and intubated at 8 minutes of life due to irregular breathing with increased oxygen requirements. She was admitted to the Neonatal Intensive Care Unit for 8 days due to respiratory distress and pleural effusion. Sepsis was suspected due to positive maternal colonization with Group B Streptococcus. Intravenous antibiotics were commenced, but subsequently discontinued when blood culture results came back negative.
She failed her neonatal hearing screen bilaterally. She was also noted to have dry and scaly skin (no ectropion or eclabium), prompting the local dermatology team to raise the possibility of congenital ichthyosis. The patient was discharged on day 8 of life and re-admitted on day 13 with cough, coryza, reduced feeding, increased work of breathing and tachypnoea without fever. Continuous positive airway pressure was started, but intubation was required due to respiratory deterioration. Empiric antibiotic and antiviral therapy (Piperacillin-Tazobactam, Amikacin and Acyclovir) was started.
ECG showed ongoing sinus bradycardia; echocardiogram was normal. A full sepsis screen was carried out. Bronchoalveolar lavage (BAL) and nasopharyngeal aspirate were positive for enterovirus, rhinovirus and Serratia marcescens (resistant to Piperacillin-Tazobactam). Blood and urine cultures showed no growth. Cerebrospinal fluid (CSF) examination was normal, with a normal cell count and no organisms on CSF culture. CT chest showed a hypoplastic right lung and a mild right-sided pleural effusion (Figure 1).
The patient was edematous with significant and persistent hypoalbuminemia despite albumin infusion. In the absence of diarrhea and with a normal fecal alpha-1-antitrypsin, protein-losing enteropathy was excluded. Low albumin of <20 g/L (reference range 34-42 g/L) with a raised urine albumin ratio confirmed nephrotic syndrome. A calcified left renal vein thrombus was detected on renal ultrasound, thus low molecular weight heparin was started. At 21 days of life, the patient's clinical condition deteriorated, requiring high frequency oscillatory ventilation.
Full blood counts on admission showed marked lymphopenia: a total lymphocyte count of 750 cells/μl, CD3 310 cells/μl, CD19 130 cells/μl, CD4 280 cells/μl, CD8 20 cells/μl and CD16 260 cells/μl. She had low naïve CD4 cells of 110 cells/μl, with low recent thymic emigrants (9.3%) and reduced T cell receptor excision circles (TRECs) of 196 ×10^6/L (reference range 0-1160 ×10^6/L). Results of newborn screening for SCID were returned on day 13 of life and showed zero TRECs on the Guthrie card. She had low IgG of 2.65 G/L (3.90-13.00 G/L), absent IgA of less than 0.07 G/L (0.02-0.15 G/L), normal IgM of 0.25 G/L (0.08-0.4 G/L), and normal expression of MHC class I protein (98%). The constellation of ichthyosis, hearing loss, nephrotic syndrome, and lymphopenia raised the possibility of a causative SGPL1 variant. Next generation sequencing with a primary immune deficiency (PID) gene panel, including SGPL1, was initiated.
At 24 days of life, the clinical condition of the patient continued to deteriorate with ongoing respiratory acidosis and raised C-reactive protein (195 mg/L). A positive BAL for Candida parapsilosis was obtained, and Caspofungin was started.
Next generation sequencing showed a homozygous missense variant in SGPL1: c.1027G>C p.(Gly343Arg). This variant was previously unreported but was predicted in silico to have a deleterious effect. In addition, the patient was found to be homozygous for a STAT1 variant: c.945-12G>A. This variant was also previously unreported. Functional assays in fibroblast cell lines of the patient showed low STAT1 phosphorylation in comparison with a healthy control (Figure 2). The parents were found to be heterozygous for both the SGPL1 and STAT1 variants.
Subsequently, the patient went into shock, inotropes were added, and bicarbonate correction was given due to worsening acidosis. The patient's clinical situation did not improve despite multiple therapeutic interventions, and sadly the patient died at 45 days of life (Figure 3).
Timeline
Data shown in Figure 3.
Genetic and functional assays
Genetic testing
Analysis of a virtual panel of 255 genes associated with PID was undertaken on whole exome data (TWIST Human Core Exome and Illumina sequencing) generated from DNA extracted from venous blood. Variant calling, annotation and filtering were carried out by a custom in-house pipeline, and variant interpretation was conducted in line with the ACMG/AMP guidelines (11) and the ACGS best practice guidelines for variant classification in rare disease (12). Two homozygous variants of interest were identified in the SGPL1 and STAT1 genes. Confirmation and parental testing were carried out by targeted Sanger sequencing.
Variant interpretation
SGPL1 had been raised as a gene of interest by both the patient's clinical genetics consultant and immunology consultant due to the patient's pattern of clinical features. A homozygous missense variant c.1027G>C p. (Gly343Arg) (NM_003901.3) in SGPL1 was identified and predicted to be deleterious by in silico software. The variant was absent from the gnomAD population database (13). The variant was discussed at a multidisciplinary meeting where it was agreed that the specificity of the patient's clinical features was sufficient to report the variant as likely to be pathogenic with the following ACMG criteria: PM2 moderate, PP4 moderate, PP3 supporting and PM3 supporting.
An additional homozygous intronic variant c.945-12G>A (NM_007315.3) was identified in the STAT1 gene. The variant was predicted to have a deleterious effect on splicing by in silico tools and was absent from the gnomAD population database. Subsequent functional analysis of patient fibroblasts showed abnormal STAT1 phosphorylation. The variant was classified as likely to be pathogenic with the following ACMG criteria: PM2 moderate, PP4 moderate, PP3 supporting and PM3 supporting.
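To make the classification logic explicit, the R sketch below encodes the subset of the published ACMG/AMP combining rules under which the evidence listed above (two criteria applied at moderate strength plus two at supporting strength) reaches "likely pathogenic". This is an illustrative simplification of the published rules, not the laboratory's actual software, and it encodes only the likely-pathogenic combinations.

# Subset of the ACMG/AMP "likely pathogenic" combining rules
is_likely_pathogenic <- function(strong = 0, moderate = 0, supporting = 0) {
  (strong == 1 && moderate >= 1) ||
  (strong == 1 && supporting >= 2) ||
  (moderate >= 3) ||
  (moderate == 2 && supporting >= 2) ||
  (moderate == 1 && supporting >= 4)
}
# PM2 and PP4 applied at moderate strength; PP3 and PM3 at supporting strength
is_likely_pathogenic(moderate = 2, supporting = 2)  # TRUE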
Functional assays
To detect STAT1 phosphorylation, patient fibroblasts were either left unstimulated or stimulated with 10^5 units/ml of IFNα (Stratech Scientific Limited). Cells were fixed, washed and permeabilized before adding 5 μl of anti-STAT1 phosphorylated tyrosine antibody (BD Bioscience), as previously described (14). Ten thousand fibroblasts were acquired and analyzed (FACsLyrics, BD Biosciences). The percentage change in phosphorylated cells was calculated by subtracting the percentage of unstimulated cells from that of the stimulated cells.
Discussion
We report two novel likely pathogenic variants in SGPL1 and STAT1 that, in combination, led to increased propensity for opportunistic infections in a patient with additional features of hearing loss, nephrotic syndrome and ichthyosis.
In 2017, several groups reported biallelic SGPL1 variants in association with SPLIS, a multi-systemic disease including primary adrenal insufficiency, ichthyosis, and steroid-resistant nephrotic syndrome (2,15,16). A literature search identified 59 reported cases of SPLIS to date, including our patient (Table 1; Supplementary Table 1), with a median age of presentation of 0.7 months (range: prenatal to 15 years) and a wide spectrum of clinical symptoms. Children presenting early in life had a more severe clinical presentation, with primary adrenal insufficiency or steroid-resistant nephrotic syndrome being seen in the early postnatal period. The overall mortality rate is high, approaching 47% (17).
There is no clear genotype/phenotype correlation in terms of morbidity and mortality. Various variants have been described, with three recurrent variants accounting for one third of reported cases so far.
There is a wide spectrum of presentation with multiple system involvement. The main affected systems in reported cases are endocrine (80%), renal (80%) and neurological (53%). Primary adrenal insufficiency is the most common endocrine manifestation (66%). Steroid-resistant nephrotic syndrome accounted for the most frequent renal manifestation (51%). Neurological dysfunction affecting both the central and peripheral nervous system has been described (53%), including hearing impairment, seizures, developmental delay, and peripheral axonal neuropathy. Ichthyosis is the most frequently reported skin manifestation.
Immunological abnormalities reported include lymphopenia (39%) and hypogammaglobulinemia (8%), with a history of infections reported in 20% (Table 1). Not all reported cases underwent an immunological workup. Specific patterns of infections (viral/fungal/bacterial) are not described in detail in most cases, and limited data are available regarding their contribution to mortality. Mortality is mainly attributed to respiratory failure secondary to steroid-resistant nephrotic syndrome, without a specific emphasis placed on whether immunodeficiency and the presence of infections could have contributed to an increased risk of mortality. Further studies are still needed to define the cause and role of the disrupted immune system in this condition, and to establish if there is a genotype-phenotype correlation.
Our patient presented early in life with respiratory infections and nephrotic syndrome, representing the more severe end of the spectrum of cases described. Lymphopenia and hypogammaglobulinemia could both be caused by primary immunodeficiency due to a lack of T cells leaving the thymus, mimicking a severe combined immunodeficiency (SCID) phenotype. In addition, secondary immunodeficiency due to protein losses in the context of severe nephrotic syndrome, as well as protein and cellular losses due to pleural effusions, were important contributing factors to the severe lymphopenia present in our patient. Both primary and secondary immunodeficiency lead to severe T-cell lymphopenia and are likely to be picked up through newborn screening for SCID. Interestingly, of the other patients with SPLIS presenting with severe clinical phenotypes early in life, four cases were picked up during newborn screening for SCID due to low TRECs at birth (6,18).
Our patient suffered from opportunistic infections including Serratia marcescens, enterovirus and pulmonary Candida infection, and was unable to clear these infections despite targeted antimicrobial treatments. This prompted the medical team to analyze the full PID gene panel despite an early diagnosis of SPLIS.
In addition to the homozygous SGPL1 variant, a homozygous STAT1 variant was found in our patient, raising the possibility of a dual diagnosis with overlapping presentations.
STAT1 is one of the most important members of the STAT family, playing a critical role in regulating cell growth, differentiation, proliferation, metabolism, and apoptosis through the JAK-STAT pathway (19). STAT1 is critical for the cellular response to IFNA/IFNB (type I interferon) and IFNG (type II interferon). STAT1 LOF variants can be associated with both autosomal dominant and autosomal recessive immunodeficiency. Heterozygous STAT1 LOF variants selectively affect the IFNG pathway and cause impairment of mycobacterial but not viral immunity (20). Biallelic LOF variants were first described in 2003 in patients showing features of combined immunodeficiency with increased predisposition to both mycobacterial and viral infections (21,22). Patients with STAT1 deficiency due to biallelic LOF variants have an impaired response to both IFNA/IFNB and IFNG.
Autosomal recessive STAT1 deficiency can be partial or complete (10). The phenotype is similar; however, patients with a partial deficiency present with a milder clinical course (23,24). The largest series to date of complete STAT1 deficiency includes 24 patients (from 10 families) with 17 different variants and a high mortality rate of 65%, mainly due to significant infections (19). In line with the previously described cases, our patient with complete STAT1 deficiency suffered from life-threatening opportunistic viral (enterovirus), bacterial (Serratia) and fungal (Candida) infections. She did not receive the BCG vaccine.
At present, there is no curative therapy for SPLIS, but multiple therapeutic approaches have been implemented, such as renal transplantation, hormone replacement, and exogenous administration of pyridoxine (a co-factor for S1P lyase). Other potential treatments are under evaluation, in particular enzyme replacement, gene therapy and CRISPR gene editing (1). Patients with complete STAT1 deficiency have a poor prognosis (10,25). HSCT has been considered as a therapeutic approach for these patients, and promising results have been published in the literature (25,26).
In conclusion, we report a patient with homozygous pathogenic variants in both SGPL1 and STAT1, leading to dual diagnosis of SPLIS and complete STAT1 deficiency. Both conditions can cause severe combined immunodeficiency and increased susceptibility to infections with poor outcome, as was sadly the case in our patient.
The finding of two homozygous variants highlights the importance of completing broad PID gene panel analysis. This is particularly the case in patients with complex clinical phenotypes, atypical findings and/or possible or known consanguinity.
The dual diagnosis has important genetic counselling implications. There is a separate recurrence risk of 25% for each condition in future pregnancies. Options for future pregnancies, including prenatal and preimplantation diagnosis, can be discussed with the family. Carrier testing for both conditions can also be offered to other family members to inform their reproductive choices. Finally, for future children, complete STAT1 deficiency is potentially curable with HSCT.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
Ethics statement
Written informed consent was obtained from the minor(s)' legal guardian/next of kin for the publication of any potentially identifiable images or data included in this article.
Funding
This work was partially supported by the Medium-term research fellowship of the European Society of Immunodeficiency (ESID). | 2023-06-12T13:08:13.563Z | 2023-06-12T00:00:00.000 | {
"year": 2023,
"sha1": "c1c3d991aead5b2c4fa8fc22f6dd0e52d711aae5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "c1c3d991aead5b2c4fa8fc22f6dd0e52d711aae5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12006816 | pes2o/s2orc | v3-fos-license | RNA-Seq Analysis of the Response of the Halophyte, Mesembryanthemum crystallinum (Ice Plant) to High Salinity
Understanding the molecular mechanisms that convey salt tolerance in plants is a crucial issue for increasing crop yield. The ice plant (Mesembryanthemum crystallinum) is a halophyte that is capable of growing under high salt conditions. For example, the roots of ice plant seedlings continue to grow in 140 mM NaCl, a salt concentration that completely inhibits Arabidopsis thaliana root growth. Identifying the molecular mechanisms responsible for this high level of salt tolerance in a halophyte has the potential of revealing tolerance mechanisms that have been evolutionarily successful. In the present study, deep sequencing (RNAseq) was used to examine gene expression in ice plant roots treated with various concentrations of NaCl. Sequencing resulted in the identification of 53,516 contigs, 10,818 of which were orthologs of Arabidopsis genes. In addition to the expression analysis, a web-based ice plant database was constructed that allows broad public access to the data. The results obtained from the analysis of the RNAseq data were confirmed by RT-qPCR. Novel patterns of gene expression in response to high salinity within 24 hours were identified in the ice plant when the RNAseq data were compared to gene expression data obtained from Arabidopsis plants exposed to high salt. Although ABA-responsive genes and a sodium transporter gene (HKT1) are up-regulated and down-regulated, respectively, in both Arabidopsis and the ice plant, peroxidase genes exhibit opposite responses. The results of this study provide an important first step towards analyzing environmental tolerance mechanisms in a non-model organism and provide a useful dataset for predicting novel gene functions.
Introduction
High salinity is a critical problem in crop production that results in reduced plant growth and a significant reduction in productivity. The amount of arable land impacted by high salinity has increased, due to climate change, irrigation practices, desertification, flood, and other causes. The Food and Agriculture Organization of the United Nations (FAO) estimated that 45 million ha out of 230 million ha of irrigated land is affected by salinity (FAO: http://www.fao.org/home/en/). Studies using Arabidopsis as a model plant have identified a number of genes involved in salt tolerance. In particular, several transcription factors have been identified as key regulators of salt tolerance in Arabidopsis, such as DREB2A [1]. Additionally, DREB2A orthologs in other plant species, such as rice, soybean, poplar, buffalograss, and sugarcane, also appear to be involved in salt tolerance [2], [3], [4], [5], [6]. Collectively, these studies have demonstrated that the DREB2A gene regulatory network is an important molecular mechanism for salt tolerance in the Plant Kingdom. Additional data from Arabidopsis have also revealed cross-talk of the DREB2A pathway with other pathways, such as the ABA-mediated signaling, osmotic response, and some ionic response pathways that are induced by exposure to high salt [1].
It is commonly accepted that better root growth supports better whole-plant growth. Since root growth is strongly inhibited under high salt conditions, understanding how roots respond to high levels of salt is essential to understanding salt tolerance. Numerous studies have been conducted at the molecular level on the response of Arabidopsis roots to high salt conditions. Dinneny et al. [7] reported that cell-type specific salt response machinery is essential for determining the appropriate transcriptional response to salt stress. Morphological changes in the root, such as swollen cortical cells and a delay in root hair development, are among the cell-type specific responses to high salt. These changes occur within 24 hours in the roots of Arabidopsis thaliana, as shown by live-imaging analysis [8]. In addition to the information derived from the studies of cell-type specificity, the analysis of salt tolerance among naturally occurring genetic variants (accessions) of Arabidopsis has also provided important molecular information. Katori et al. [9], in a study of Arabidopsis accessions, identified several QTLs that were associated with salt tolerance. Importantly, a genome-wide association study (GWAS) indicated that the ability to accumulate NaCl in the leaves of Arabidopsis is dependent on genetic variation in the Na transporter, AtHKT1 [10]. The authors indicated that this genetic variation is most likely related to adaptation to coastal or highly saline soil environments [10].
With the recent advances in sequencing and bioinformatic technologies, researchers have begun to move to non-model plants to study the molecular mechanisms that are responsible for salt tolerance. Thellungiella halophila has been widely used due to the similarity of its genome sequence to Arabidopsis. A recent study has also reported on the transcriptional response to high levels of salinity in semi-mangrove plants [11]. Using deep sequencing technology, Huang et al. [11] reported gene expression responses to salt that were partially shared across a variety of plants, while species-specific responses also existed. Their study demonstrates the ability to use non-model plants to address biological questions. Based on this premise, we propose that studying gene expression in halophytic plants can uncover unique aspects of salt tolerance.
Mesembryanthemum crystallinum (ice plant) is a halophyte that switches from C3 photosynthesis to Crassulacean acid metabolism (CAM) under high salinity and drought stress (reviewed in [12]). Mature ice plants can grow in soil that contains a salt concentration above 450 mM NaCl, which is higher than that found in seawater [11]. This finding was based on studying the response of shoot growth to high salinity; however, to date no studies have been conducted in ice plants to characterize the response of roots to high salt concentrations. The genome size of M. crystallinum is 250 to 300 Mbp, comprising 2n = 18 chromosomes [13], [14]. Although transformation technologies for the ice plant have not been established, ice plant seedlings are similar in size to Arabidopsis, making their use in molecular analysis relatively straightforward. Since roots directly contact the soil containing the high concentrations of salt, analyzing root growth in this halophyte and the molecular response of roots to high salinity should provide significant insight into the molecular adaptation of roots to high levels of salt. Genes identified in M. crystallinum that are associated with salt tolerance will serve as strong candidates for use in the genetic engineering of agricultural crops with increased salt tolerance.
In the present study, deep sequencing technology was used to characterize the regulatory network underlying high salinity tolerance in the ice plant. The transcriptomic dataset obtained from M. crystallinum was used to construct an ice plant mRNA database. Using this database, the transcriptional responses of ice plant and Arabidopsis were compared in order to determine if essential salt response pathways are conserved in these plant species. These data sets can be used to investigate the molecular mechanism of short-term salt tolerance in a non-model plant and should provide new insight into salt tolerance. The information obtained from this study, and the identified genes associated with salt tolerance, can be used to advance efforts to use plant biotechnology to improve agricultural productivity.
Plant growth and salt treatment
Seeds of M. crystallinum and Arabidopsis thaliana Col-0 ecotype were maintained in the dark at 4°C for 1 day, sterilized for 5 min in 25% bleach and 0.05% Triton X-100, washed 3 times with sterile water and sown onto Murashige-Skoog (MS) medium (pH 5.8) containing 1% sucrose and 1% agarose. Seeds were germinated in a vertical orientation for 5 days in a growth chamber at 22°C with a 16 h light and 8 h dark light regime (light intensity of 65 μmol photons m^-2 sec^-1).
For NaCl treatment, five-day-old seedlings were transferred onto MS media containing either 140 mM, 250 mM, or 500 mM NaCl for 24 h. The plants were imaged under a stereomicroscope (Olympus SZX12) with a DP70 CCD camera. Root length was measured using Image-J software (http://imagej.net/).
RNA extraction and deep sequencing
Whole roots from five-day-old seedlings treated with 0 mM, 140 mM, 250 mM, or 500 mM NaCl for 24 h were used for RNA extraction with an RNeasy plant mini kit (Qiagen) according to the manufacturer's instructions. A single RNA isolate that was pooled from 20 roots was used for deep sequencing analysis and three biological replicates were utilized for RT-qPCR experiments.
A TruSeq RNA Sample Preparation kit (Illumina) was used to construct cDNA libraries according to the manufacturer's instructions. Briefly, 2 μg of total RNA were used for polyA selections with RNA purification beads. The cDNA library was purified by AMPure (Beckman coulter) by using a magnetic stand. The length of the cDNAs was determined with an Agilent Technologies 2100 Bioanalyzer using the Agilent DNA 1000 chip kit and cDNA quantity was measured by qPCR using PhiX Control (Illumina) as a standard. Both the 5' and 3' ends of the cDNAs were sequenced using an Illumina Genome Analyzer IIx with a paired end module for 60 cycles (Illumina). The resulting sequence data were deposited in the DDBJ Sequence Read Archive (DRA) at the DNA Data Bank of Japan (DDBJ; http://www.ddbj.nig.ac.jp/) under the accession number, DRP002316.
De novo assembly and annotation
A total of 84 million paired reads from four libraries were filtered using cutadapt [15]. Low quality reads, which contained more than 20 nucleotides with a quality value of less than 15, were further filtered. The remaining 70 million reads were used in the de novo assembly with Trinity [16] software (release 2013_08_14) with the following options: "--seqType fq --output working_dir --CPU 4 --JM 100G --left left.fastq --right right.fastq". A total of 53,516 contigs were obtained (the assembled sequences can also be found in the DDBJ data libraries with accession numbers FX891461-FX944976). Using blastx, all contigs were queried against the A. thaliana protein database (TAIR10, http://www.arabidopsis.org/) in order to annotate them and identify the open reading frames. A total of 31,733 contigs, out of 53,516 contigs, had homology to genes in Arabidopsis and were grouped into 13,855 Arabidopsis genes. A reciprocal blast search, namely a tblastn search using the Arabidopsis proteome as queries against the ice plant contigs, was also performed, and hits with an e-value greater than 1e-3 were discarded. A total of 10,818 pairs were selected as orthologous genes.
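To make the reciprocal best-hit (RBH) logic explicit, the following R sketch selects the top-scoring hit per query from each of the two blast searches and keeps only the pairs that point back at each other. This is a minimal illustration rather than the pipeline actually used: the file names are hypothetical placeholders, and both searches are assumed to have been run with tabular output (-outfmt 6).

# Reciprocal best-hit ortholog calling from two tabular blast outputs
cols <- c("query", "subject", "pident", "length", "mismatch", "gapopen",
          "qstart", "qend", "sstart", "send", "evalue", "bitscore")

read_best_hits <- function(path) {
  hits <- read.table(path, sep = "\t", col.names = cols, stringsAsFactors = FALSE)
  hits <- hits[hits$evalue <= 1e-3, ]                # e-value cutoff, as above
  hits <- hits[order(hits$query, -hits$bitscore), ]  # best hit first per query
  hits[!duplicated(hits$query), c("query", "subject")]
}

fwd_hits <- read_best_hits("contigs_vs_tair10.blastx.tab")   # contig -> AGI locus
rev_hits <- read_best_hits("tair10_vs_contigs.tblastn.tab")  # AGI locus -> contig

# Keep only mutually best pairs: the contig's best hit must be the protein
# whose own best hit is that same contig.
rbh <- merge(fwd_hits, rev_hits,
             by.x = c("query", "subject"), by.y = c("subject", "query"))
names(rbh) <- c("contig", "agi_locus")

Each row of rbh then corresponds to one contig-gene pair treated as an ortholog.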
Data analysis
The first read of each read pair was used to analyze gene expression. Low quality reads, which contained more than 20 nucleotides with a quality value of less than 15, were discarded prior to mapping. The filtered reads were mapped to the 53,516 assembled contigs using Bowtie [17] software, and the number of reads mapping to each contig was counted. Using the 10,818 orthologous genes, we identified differentially expressed genes with the R package DESeq [18]. We used the following cut-off values to determine differentially expressed genes between the control (0 mM NaCl) and treated samples: FDR<0.05 and |Fold Change (FC)|>2.
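The following R sketch outlines this step under stated assumptions: the object `counts` is assumed to be an integer matrix of reads per orthologous contig (rows) by the four libraries (columns), and, because each library is a single pooled sample without biological replicates, DESeq's blind dispersion estimation is used. Only the 0 mM versus 140 mM comparison is shown, and the condition names are hypothetical labels.

library(DESeq)  # the original DESeq package, as cited above

# `counts`: integer matrix of reads per orthologous contig x four libraries
conditions <- factor(c("NaCl0", "NaCl140", "NaCl250", "NaCl500"))
cds <- newCountDataSet(counts, conditions)
cds <- estimateSizeFactors(cds)              # library-size normalization
# Without biological replicates, dispersions must be estimated "blind"
cds <- estimateDispersions(cds, method = "blind", sharingMode = "fit-only")

res <- nbinomTest(cds, "NaCl0", "NaCl140")   # control vs. 140 mM NaCl
# FDR < 0.05 and |FC| > 2, i.e. |log2 fold change| > 1
up   <- subset(res, padj < 0.05 & log2FoldChange >  1)
down <- subset(res, padj < 0.05 & log2FoldChange < -1)

The same test is then repeated for the 250 mM and 500 mM libraries against the control.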
We performed a Gene Ontology (GO) analysis for biological functions using these differentially expressed genes. Enriched GO categories were identified using the ChipEnrich software [19], which is available from http://www.arexdb.org/software.jsp. GO enrichment analysis associates each gene of a list with different biological processes and then evaluates whether the list contains more genes than expected "by chance" for a certain biological process.
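The "more genes than expected by chance" evaluation is classically a hypergeometric test. The short R sketch below illustrates the calculation for a single GO category; it is not the ChipEnrich implementation itself, and the numbers used are hypothetical.

# P-value that k or more of the n differentially expressed genes fall into a
# GO category containing K of the N tested genes, by chance alone
enrichment_p <- function(k, n, K, N) {
  phyper(k - 1, K, N - K, n, lower.tail = FALSE)
}
# e.g. 8 of 152 up-regulated genes in a category covering 120 of 10,818 genes
enrichment_p(k = 8, n = 152, K = 120, N = 10818)

In practice, the resulting p-values are corrected for testing many GO categories at once.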
Database construction
The database was built on a home-made cluster computer named Kiku 1st. Linux (http://www.centos.org/) and PostgreSQL (http://www.postgresql.org/) were installed as the operating system and the relational database management system, respectively. The web interface was developed with PHP (http://www.php.net/) and ZendFramework libraries (http://framework.zend.com/), which were run on an Apache (http://httpd.apache.org/) web server.
RT-qPCR
First strand cDNA was synthesized using the PrimeScript RT reagent Kit with gDNA Eraser (TAKARA). Reverse transcription-quantitative PCR (RT-qPCR) was performed using THUNDERBIRD SYBR qPCR Mix (TOYOBO) on an ABI 7500 Real-Time PCR system (Applied Biosystems). RT-qPCR reactions were performed in a total volume of 25 μl, with 1 μl of first-strand cDNA and 1 μl of each primer. The cycler conditions were: 1 min at 95°C, followed by 40 cycles of 15 sec at 95°C and 35 sec at 60°C. The primers that were used in this study are listed in S4 Table. RT-qPCR efficiency and the CT values for individual reactions were determined by the analysis of raw fluorescence data using the free web-based algorithm PCR Miner [20] (http://www.miner.ewindup.info). Efficiency-corrected transcript abundance values of three biological replicates were used for determining the relative expression values for all samples. Normalization of mRNA levels was performed against the level of polyUBQ10 mRNA as previously described [21]. Statistical significance was evaluated using a Student's t test with the Excel plugin "StatPlus". Primer specificity was confirmed by melting curve analysis after 40 amplification cycles, increasing the temperature from 60°C to 95°C.
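A minimal R sketch of the efficiency-corrected quantification, assuming per-reaction amplification efficiencies (E, where E = 1 means perfect doubling) and Ct values as estimated by PCR Miner; the numeric values below are hypothetical.

# Initial template abundance (arbitrary units) from Ct and efficiency E
abundance <- function(ct, eff) (1 + eff)^(-ct)

# Expression of a target gene normalized to the polyUBQ10 reference
rel_expression <- function(ct_target, e_target, ct_ubq, e_ubq) {
  abundance(ct_target, e_target) / abundance(ct_ubq, e_ubq)
}

treated <- rel_expression(ct_target = 21.4, e_target = 0.93,
                          ct_ubq = 18.9, e_ubq = 0.95)
control <- rel_expression(ct_target = 24.9, e_target = 0.93,
                          ct_ubq = 19.1, e_ubq = 0.95)
treated / control  # fold change relative to the 0 mM NaCl sample (set to 1)

In the analysis, such ratios are averaged over the three biological replicates before comparison with the RNAseq fold changes.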
Ice plant roots are tolerant to salt concentrations that inhibit Arabidopsis root growth
Five-day-old seedlings of the ice plant and Arabidopsis, Col-0 accession, were treated with 0 and 140 mM NaCl for 24 h, after which root length was measured (Fig. 1A-H). Root growth of the Arabidopsis Col-0 accession was completely inhibited by the 140 mM NaCl treatment for 24 h. In contrast, the 140 mM NaCl treatment did not inhibit ice plant root growth (Fig. 1I). This result indicated that the ice plant has a greater tolerance to NaCl stress than Arabidopsis. The concentration of NaCl was then increased to determine whether or not the ice plant is tolerant to higher concentrations of NaCl. The results indicated that 250 and 500 mM NaCl both strongly inhibited root growth in M. crystallinum (Fig. 1J). As a point of reference, 500 mM NaCl is higher than the concentration of NaCl that is typically found in seawater (450 mM, [11]). In addition to the inhibition of primary root growth, the high salt concentrations also inhibited root hair growth in M. crystallinum (Fig. 1K to N). Inhibition of root hair growth has also been observed in Arabidopsis roots under high salt stress [7]. These results indicate that, although the ice plant is tolerant to higher salt concentrations than Arabidopsis, similar morphological changes in roots are observed in both species when they are subjected to high salt conditions.
RNAseq analysis and de novo assembly
We were very interested in characterizing the transcriptional events involved in the development of salt tolerance in M. crystallinum. Therefore, the transcriptional changes in young ice plant roots subjected to various salt concentrations were investigated using high-throughput sequencing technology. Total RNA was isolated from whole roots of five-day-old ice plant seedlings treated with 0 mM, 140 mM, 250 mM, or 500 mM NaCl for 24 h. The isolated total RNAs were converted to cDNA libraries, and both ends of the cDNAs were sequenced for 60 cycles using a paired-end module. Approximately 84 million paired reads, 5 Gbp in total, were sequenced from the four libraries and all reads were assembled using the Trinity software [16]. This resulted in 53,516 contigs, containing 67 Mbp of sequence. The average, median, and maximum lengths of the assembled contigs were 1,179 bp, 803 bp and 16,785 bp, respectively, and the N50 and N90 lengths were 1,919 bp and 518 bp. To annotate the contigs, the consensus sequences of all the contigs were used as queries against the Arabidopsis protein database (TAIR10, http://www.arabidopsis.org/) using blastx. Out of a total of 53,516 contigs, 31,733 contigs had 13,855 homologous genes in Arabidopsis. A reciprocal blast search was also performed using the Arabidopsis proteome as queries against the ice plant contigs, and we obtained 10,818 mutually top-hit pairs as orthologous genes. These orthologous genes were used as a reference for further analysis (Fig. 2).
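For reference, the N50 (N90) statistic quoted above is the contig length at which the longest contigs, taken together, first cover 50% (90%) of the total assembled bases. A minimal R sketch of the calculation, using hypothetical contig lengths:

# N50/N90: sort lengths descending and find where the cumulative sum first
# reaches the given fraction of the total assembled bases
nX <- function(lengths, frac) {
  s <- sort(lengths, decreasing = TRUE)
  s[which(cumsum(s) >= frac * sum(s))[1]]
}
lens <- c(16785, 5200, 1919, 1200, 803, 518, 300)  # hypothetical lengths (bp)
c(N50 = nX(lens, 0.5), N90 = nX(lens, 0.9))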
The first read of each read pair was mapped to the reference set of contigs using Bowtie [17] software, with about 94% of the reads being mapped. The numbers of mapped reads obtained from the cDNA libraries of roots treated with 0 mM, 140 mM, 250 mM, and 500 mM salt were 8.6, 9.9, 12.2, and 7.0 million reads, respectively. The number of reads mapping to each contig was counted and used to obtain the gene expression data. The results, including the paired-end reads, the assembled sequences, and the expression data (S1 Table), are available on the database website (http://dandelion.liveholonics.com/pothos/Mcr/).
Annotation of genes and comparison of ice plant gene expression data to gene expression data from microarray datasets of Arabidopsis treated with NaCl
To discover which genes in the ice plant were responsive to salt stress, a comparison of the dataset obtained from the ice plant with Arabidopsis datasets was performed. The ice plant datasets were normalized, and a False Discovery Rate (FDR) and fold changes (FC) were calculated using the DESeq package for R [18]. The NaCl-treated datasets were compared to the 0 mM NaCl-treated dataset, the latter of which was considered the control. A cut-off of FDR<0.05 and |FC|>2 was used. Using these criteria, 44, 152, and 193 genes were found to be significantly up-regulated in roots of the ice plant in response to 140 mM, 250 mM, and 500 mM NaCl, respectively. The microarray datasets for Arabidopsis retrieved from the Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/gds/; GSM184925.CEL, GSM184926.CEL, GSM184933.CEL, and GSM184934.CEL) for comparison with the ice plant datasets were of five-day-old Arabidopsis roots treated with 140 mM NaCl for 16 h. The Arabidopsis microarray datasets were normalized using gcRMA [22], and FDR and FC were calculated using the SAMr algorithm (S2 Table, [23]). Using the same criteria that were employed on the datasets obtained from the ice plant, 644 genes were found to be significantly up-regulated in Arabidopsis roots. Among the up-regulated genes, only 4 genes were common to all datasets (Fig. 3A). On the other hand, 46, 42, 50, and 366 genes were significantly down-regulated in the 140 mM, 250 mM, and 500 mM NaCl-treated roots of the ice plant, and the 140 mM NaCl-treated roots of Arabidopsis, respectively. Interestingly, no genes were found to be present in all the down-regulated datasets (Fig. 3C).
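A minimal R sketch of this overlap comparison, assuming the up-regulated lists from each dataset have been mapped to shared Arabidopsis (AGI) gene identifiers via the ortholog table; the short vectors below are hypothetical placeholders for the real lists of 44, 152, 193, and 644 genes.

# Up-regulated gene lists keyed by AGI locus identifiers (placeholders)
up <- list(
  ice140 = c("AT1G01470", "AT5G52310", "AT2G42540"),
  ice250 = c("AT1G01470", "AT5G52310", "AT3G50970"),
  ice500 = c("AT1G01470", "AT5G52310"),
  ara140 = c("AT1G01470", "AT4G15910")
)
common_all <- Reduce(intersect, up)  # genes shared by all four lists
length(common_all)                   # counts the overlap shown in Fig. 3A

The same intersection applied to the four down-regulated lists returns the empty set reported above.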
We subsequently determined whether or not common gene ontology (GO) categories of biological functions could be identified among the salt-responsive genes of M. crystallinum and Arabidopsis. A GO analysis was performed using the chip-enrichment program [19]. Only two enriched GO categories, "response to heat" and "response to salt", were identified in the up-regulated gene list obtained from ice plants treated with 140 mM NaCl (Fig. 3B). In contrast, 6 and 10 GO categories were significantly enriched in the gene lists obtained from the 250 mM and 500 mM NaCl-treated roots of the ice plant, respectively (Fig. 3B). Although 31 GO categories were represented in the up-regulated genes identified in Arabidopsis treated with 140 mM NaCl, only five GO categories, "response to heat", "response to cold", "response to water deprivation", "response to high light intensity" and "response to abscisic acid stimulus", overlapped with the GO categories that were identified for the up-regulated genes in M. crystallinum (Fig. 3B, bold text). Since only 44 genes were up-regulated in the ice plant subjected to the 140 mM NaCl treatment, it was concluded that this level of salt does not have a large impact on the ice plant at the transcriptional level. The observation that the rate of root elongation in ice plants exposed to 140 mM NaCl was not significantly different from the rate in ice plants not subjected to salt stress (Fig. 1J) supports this contention. Regarding down-regulated genes, only the GO category "endomembrane system" was enriched in both the 500 mM NaCl-treated ice plant roots and the 140 mM NaCl-treated Arabidopsis roots (Fig. 3D, bold text). This may indicate that salt stress induces changes in the plant cell membrane system to protect cells from osmotic stress.
DREB2A is a key transcriptional regulator of the salt response in Arabidopsis and other plant species, and constitutive overexpression of DREB2A (DREB2A CA OX) resulted in a significant increase in salt tolerance [1]. To determine whether M. crystallinum possesses transcriptomic regulation similar to that observed in DREB2A CA OX plants, the ice plant RNAseq datasets were compared to expression data obtained from the microarray analysis of DREB2A CA OX plants [1]. The up-regulation of four genes was found to be common to the DREB2A CA OX and 140 mM NaCl-treated ice plant datasets. Additionally, those four genes were also up-regulated in the ice plant material treated with 250 mM and 500 mM NaCl (S1 Fig.). Lastly, 15 genes were up-regulated in both the 250 mM NaCl-treated ice plant roots and the DREB2A CA OX plants, and 21 genes were commonly up-regulated in both the 500 mM NaCl-treated ice plants and the DREB2A CA OX plants (S1 Fig. and S3 Table). Since only a few genes were commonly regulated in both the NaCl-treated ice plants and the DREB2A CA OX plants, this suggests that the NaCl response of M. crystallinum involves regulatory mechanisms different from the DREB2A-mediated response in Arabidopsis.
RT-qPCR confirmation of RNAseq results and differences in gene expression in the ice plant and Arabidopsis
RT-qPCR was used to confirm the RNAseq data, including the identification of genes and expression data. Twenty genes with significant changes in expression in at least one NaCl concentration were selected from the RNAseq data for confirmation by RT-qPCR (Fig. 4 and S2 Fig.). PCR primers were designed using the sequence data used to construct the contigs (S4 Table). Three of the twenty candidate genes exhibited no amplification. Sixteen of the remaining seventeen genes tested exhibited the same expression profile in both the RNAseq and RT-qPCR results (Fig. 4). The expression levels were calculated relative to that of poly-UBQ10 and normalized to the value in the 0 mM NaCl data, arbitrarily set as 1. The expression level of one of the selected genes, Mcr002321.000, differed between the RNAseq and RT-qPCR data: it was much higher in the RNAseq data than the level indicated by RT-qPCR. Despite the difference in magnitude between the two methods, Mcr002321.000 transcripts were up-regulated by all salt treatments in both the RNAseq and RT-qPCR results. These results indicate that the expression dataset obtained for M. crystallinum using RNAseq was reliable for analyzing gene expression patterns.
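The normalization just described (target gene relative to poly-UBQ10, with the 0 mM NaCl sample set to 1) matches the standard 2^-ΔΔCt relative-quantification scheme; treating it as such is our assumption, since the paper does not name the method. A minimal sketch:

```python
def relative_expression(ct_target: float, ct_reference: float,
                        ct_target_ctrl: float, ct_reference_ctrl: float) -> float:
    """2^-ΔΔCt relative quantification: expression of a target gene
    relative to a reference gene (here poly-UBQ10), normalized so the
    untreated (0 mM NaCl) sample equals 1."""
    delta_ct_treated = ct_target - ct_reference
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl
    return 2 ** -(delta_ct_treated - delta_ct_control)

# Example (invented Ct values): a gene whose ΔCt drops by 3 cycles
# relative to the control is ~8-fold up-regulated.
print(relative_expression(22.0, 18.0, 25.0, 18.0))  # 8.0
```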
Among the genes selected for confirmation by RT-qPCR, five were up-regulated by salt treatment in both the ice plant and Arabidopsis. Specifically, Mcr016919.013 (FC 22.8), which is an ortholog of a LEA family protein, and Mcr017216.000 (FC 3.2), which is an ortholog of an LTP family protein, were up-regulated in DREB2A CA OX Arabidopsis plants overexpressing DREB2A [1]. Mcr016047.000, which is an ortholog of RD22, was up-regulated by salt treatments up to 250 mM NaCl in the ice plant and was also up-regulated in NaCl-treated Arabidopsis. Similar to FMO1 in Arabidopsis, Mcr004980.000, which is an ortholog of FMO1 [24], was strongly repressed by NaCl treatment in M. crystallinum.
Three genes (Mcr003727.000, Mcr015149.014, and Mcr016501.000) showed the opposite transcriptional response to NaCl treatment in the ice plant relative to Arabidopsis (S2 Fig.). Mcr016501.000, which is an ortholog of AtEXPA7, was repressed by NaCl treatment in the ice plant but was up-regulated by the 140 mM NaCl treatment in Arabidopsis roots (S2 Fig.). Mcr003727.000, which is an ortholog of a peroxidase and was repressed by the 250 mM and 500 mM NaCl treatments, and Mcr015149.014, which is an ortholog of a cationic peroxidase, were repressed in the ice plant but were up-regulated in Arabidopsis (S2 Fig.). Reactive oxygen species (ROS) metabolism is known to be involved in the NaCl response in Arabidopsis [25], and the GO category 'peroxidase activity' is enriched in Arabidopsis roots in a cell type-specific manner [7]. This also indicates that M. crystallinum has different mechanisms for responding to NaCl than Arabidopsis.
Discussion
Ice plant is a halophyte [12] and can survive in high-salinity soils. The high levels of salt tolerance found in some wild plant species represent an excellent resource for studying the adaptive mechanisms that form the basis of salt tolerance, and such plants may provide a valuable source of genes that can be used to improve salt tolerance in agronomic crops. In the present study, the transcriptional response of the ice plant, M. crystallinum, treated with different concentrations of NaCl was investigated using RNAseq. In the past ten years, high-throughput sequencing technologies have made whole genome sequencing of non-model organisms possible. As demonstrated in the present study, large numbers of short reads of transcripts of non-model organisms can be assembled into larger contigs composed of genes that can be identified, annotated, and quantified.
The genome size of Arabidopsis and rice is estimated to be 125 Mbp and 389 Mbp, respectively, with 28,517 and 37,869 encoded genes, respectively ([26], TAIR (http://www.arabidopsis.org/); [27], IRGSP1 (http://rapdb.dna.affrc.go.jp/)). The genome size of ice plant has been reported to be 250-300 Mbp [13,14], with an estimate of 30,000 to 35,000 genes. In the current study, 53,516 contigs were assembled, and 10,818 of them were found to have orthologs in Arabidopsis. It has been suggested that 50-100x coverage of the genome is required in order to assemble and analyze the genome of an organism by next generation sequencing [28]. Based on this estimate, approximately 25 Gbp of sequence data would be needed to conduct a comprehensive analysis of the ice plant genome, and around 0.8 Mbp of sequencing data would be required to identify a single gene. We obtained approximately 5 Gbp of sequence data and identified about 11,000 genes with significant homology to Arabidopsis genes. This indicates that the amount of sequencing needed per gene is only about 0.4 Mbp. Improved prediction of gene structure based on genomic sequences requires the ability of bioinformatic software to assemble contigs from short EST-like transcript sequences [29]. Since whole-genome approaches require a coverage of 50-100x, it is easy to see that RNAseq is an economical and efficient approach for investigating the transcriptome and genome of non-model organisms for which a reference genome does not exist. The N50 of the ice plant contigs obtained in this study was 1,919 bp, which was larger than the N50 of 887 bp obtained for a semi-mangrove plant, Millettia pinnata, that was also analyzed by RNAseq [11]. The N50 of Arabidopsis and rice is 1,809 and 1,942 bp, respectively, which is very similar to what was obtained for ice plant, indicating that many of the contigs from ice plant would be expected to contain the sequence of nearly full-length transcripts.
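A quick back-of-the-envelope check of these sequencing estimates; the midpoint choices for genome size, coverage, and gene count below are our assumptions, picked only to reproduce the figures quoted in the text.

```python
# Back-of-the-envelope check of the sequencing estimates above.
genome_size_bp = 275e6      # midpoint of the 250-300 Mbp estimate
coverage = 90               # within the suggested 50-100x range
genes_expected = 32_500     # midpoint of the 30,000-35,000 estimate

genomic_data_needed = genome_size_bp * coverage               # ~25 Gbp
bp_per_gene_genomic = genomic_data_needed / genes_expected    # ~0.8 Mbp/gene

rnaseq_data = 5e9           # ~5 Gbp obtained in this study
genes_identified = 11_000
bp_per_gene_rnaseq = rnaseq_data / genes_identified           # ~0.45 Mbp/gene

print(f"{genomic_data_needed / 1e9:.0f} Gbp needed; "
      f"{bp_per_gene_genomic / 1e6:.2f} vs "
      f"{bp_per_gene_rnaseq / 1e6:.2f} Mbp per gene")
```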
Even with RNAseq technologies, there are difficulties in identifying genes that have a low level of expression. We identified 251 ice plant genes with homologues in Arabidopsis among the 644 genes that were significantly up-regulated by NaCl treatment in the Arabidopsis microarray datasets. The average expression level of these 251 genes, as determined by their signal intensity in the microarray, was approximately 322, whereas the average expression level of the remaining 393 genes, for which homologous genes could not be identified in M. crystallinum, was 260. It is possible that all 393 of the genes that did not have homologues in the ice plant represent genes that are unique to Arabidopsis; alternatively, given their lower average expression, these results might be due to a failure to assemble the lowly expressed genes. A comparison of the expression data in the RNAseq datasets with the results obtained by RT-qPCR indicated that the results obtained by RNAseq were highly reliable and could be used to effectively characterize the genes that were affected by salt stress in the ice plant.
All results obtained in this study, including the short reads, assembled sequences, and expression data, have been deposited in a publicly available database, along with some useful bioinformatic tools for analyzing the datasets. This database was established to foster collaboration between researchers and to support present and future work on M. crystallinum.
Almost one-third of the genes identified in the ice plant have orthologs in Arabidopsis. This number may not be sufficient to conduct a complete analysis of the gene regulatory network that is induced by salt in the ice plant. The GO analysis of the Arabidopsis dataset revealed a large number of GO categories that were up-regulated or down-regulated in response to the NaCl treatment. Almost all of the GO categories that were up-regulated in Arabidopsis were not identified in M. crystallinum. These results suggest that the system that allows ice plant to be salt tolerant is perhaps unique. Interestingly, 'peroxidase activity' in ice plant was down-regulated by 250 and 500 mM NaCl, but not in Arabidopsis. This GO category plays an important role in root growth, and the up-regulation of peroxidase activity in Arabidopsis has been reported to promote root growth [30]. The down-regulation of 'peroxidase activity'-related gene expression in ice plant is consistent with the inhibition of root growth that was observed under high salt concentrations. Moreover, this category was up-regulated by salt treatment in the Arabidopsis dataset, and the RT-qPCR results indicated that at least two peroxidase genes (At1g30870 and At1g05260) were regulated in an opposite manner in Arabidopsis vs. ice plant. Mcr016912.000, which is an ortholog of the Arabidopsis RCI3 peroxidase gene (At1g05260), decreased in expression in ice plant in response to the NaCl treatment. Overexpressing RCI3 in transgenic lines of Arabidopsis resulted in growth inhibition in response to salt stress [31]. A comparison of the transcriptome data of DREB2A CA OX plants with the transcriptome dataset of ice plant indicated that the number of genes significantly affected in both species was quite low. One gene in ice plant, a cationic peroxidase that is an ortholog of At1g30870 in Arabidopsis, was strongly down-regulated in ice plant but not in DREB2A CA OX plants (S2 Fig.).
These data also indicate that ice plant likely uses a different mechanism than Arabidopsis in responding to salt stress. In addition to DREB2A, there are multiple transcription factors involved in salt tolerance (see reviews in [32], [33], [34]). As a next step in the study of salt tolerance mechanisms, our datasets can be used to compare signals regulated by transcription factors other than DREB2A orthologs. It is plausible that the salt tolerance genes identified in the Arabidopsis studies are already expressed at a high level in ice plant even when the plant is not exposed to salinity. For this reason, ice plant exhibited tolerance to the 140 mM NaCl treatment, and its roots were able to continue to grow.
On the other hand, several genes analyzed in the current study exhibited the same expression response to NaCl in both Arabidopsis and the ice plant. However, only five GO categories, "response to heat", "response to cold", "response to water deprivation", "response to high light intensity" and "response to abscisic acid stimulus", were up-regulated in both Arabidopsis and ice plant. Salt treatment has been reported to increase endogenous ABA levels in the roots of ice plant [35]. This finding, along with our transcriptome data, indicates that ABA plays an important role as a signal molecule in the response of both the ice plant and Arabidopsis to salt stress. A QTL analysis was conducted by Katori et al. [9], who examined 350 accessions of A. thaliana to identify loci associated with salt tolerance. One accession (Bu-5) was used for transcriptome analysis, and it was found that Δ1-pyrroline-5-carboxylate synthetase 1 (P5CS1; At2g39800) was up-regulated in this accession and in other accessions exhibiting salt tolerance. P5CS is an enzyme that regulates a rate-limiting step in proline biosynthesis [36]. A previous report also demonstrated that proline accumulates in ice plant roots in response to salt treatment [37]. In addition, we also found that a P5CS1 ortholog was up-regulated in ice plant (Fig. 4, S2 Fig.).
Additionally, a genome-wide association study (GWAS) of Arabidopsis accessions identified one locus that possessed a gene encoding a sodium transporter protein, HKT1 [10]. Allelic variation in this gene was reported to be a major factor responsible for natural variation in the ability to accumulate Na in leaves and in salt tolerance in general [10]. Agarie et al. [38] reported that the mechanism responsible for salt tolerance in the ice plant is its ability to transport salt from roots to the shoots, where it accumulates in bladder cells on the surface of leaves. In our study, the AtHKT1;1 ortholog in ice plant was significantly down-regulated in response to NaCl treatment (S1 Table). The similar result for AtHKT1;1 expression observed in Arabidopsis and the ice plant in response to salt treatment suggests that the ability to transport and isolate excess amounts of Na also plays an important role in salt tolerance in ice plant.
In conclusion, a comprehensive transcriptome analysis of the response of the ice plant, M. crystallinum, to salt was conducted, and the resulting dataset was compared with Arabidopsis gene expression data obtained from previous microarray studies. Using these data, we provided an overview of gene expression in the two species in response to salt stress and of how expression was either similar or different. M. crystallinum is not a commonly used model plant species, and a sequenced reference genome is not available. Using our transcriptomic datasets, however, we were able to observe new patterns of gene expression associated with salt tolerance in the ice plant and to identify the sequences of the genes involved. Transgenic approaches can now be used to conduct functional studies of these ice-plant-specific genes in model plants and economically important crop species. Furthermore, metabolomic and proteomic data can be combined with our transcriptomic data to develop a comprehensive understanding of salt tolerance in M. crystallinum.
Supporting Information S1 Table. The list of the primers used to conduct an RT-qPCR analysis of gene expression in ice plant root exposed to salt stress. | 2017-04-21T04:33:09.277Z | 2015-02-23T00:00:00.000 | {
"year": 2015,
"sha1": "f22290ae11267499fd2a6d121641f9b90dbecb0d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0118339&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f22290ae11267499fd2a6d121641f9b90dbecb0d",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
248899272 | pes2o/s2orc | v3-fos-license | Effect of Diet and Exercise-Induced Weight Loss among Metabolically Healthy and Metabolically Unhealthy Obese Children and Adolescents
Objective: To study the effect of diet- and exercise-based lifestyle intervention on weight loss (WL) and cardiovascular risk among metabolically healthy obese (MHO) and metabolically unhealthy obese (MUO) children and adolescents. Methods: The sample included 282 obese individuals (54% males, age (±SD) 12.9 (±2.3) years) who completed a 3- to 4-week WL camp program between 2017 and 2019. MUO was defined according to the 2018 consensus-based definition of pediatric MHO. Results: The intervention produced significant benefits in body weight, body mass index, body fat ratio, waist circumference, systolic blood pressure (SBP), diastolic blood pressure (DBP), resting heart rate (RHR), triglycerides (TG), total cholesterol, and low-density lipoprotein–cholesterol levels in both the MHO and MUO groups (for all comparisons, p < 0.01). However, the beneficial high-density lipoprotein–cholesterol (HDL-C) level decreased markedly in both groups after the intervention (both p < 0.01). In addition, percent changes in SBP (p < 0.001), DBP (p < 0.001), RHR (p = 0.025), fasting blood glucose (p = 0.011), and TG (p < 0.001) were more pronounced in the MUO group than in the MHO group. Conclusion: Metabolic health is a mutable and transient state during childhood. Although both groups gained comparable WL benefits from diet- and exercise-based lifestyle intervention, the MUO group may benefit more than the MHO group. Strategies aiming at lowering blood pressure and preventing the decrease of the HDL-C level should be considered for the precise treatment of childhood obesity in clinical practice, with the goal of improving the metabolically healthy state.
Introduction
It is well established that the global epidemic of obesity, with the accompanying rise in the prevalence of endocrine and metabolic disorders, will lead to a marked increase in cardiovascular disease (CVD) [1,2]. In China, overweight and obesity have increased substantially, and the latest national prevalence estimates for 2015-2019 were 11.1% for overweight and 7.9% for obesity in children and adolescents aged 6-17 years [3]. As a vital point in the life course, adolescence is characterized by rapid and transformative physical, cognitive, and emotional growth, making adolescents a group vulnerable to unhealthy influences, such as unhealthy diets, sedentary lifestyles, and other recognized risk factors [4].
Family-based lifestyle interventions, including dietary modifications and increased physical activity [5], are sufficient to produce notable health benefits in both anthropometry and cardiometabolism, and have been considered the cornerstone of weight management in children and adolescents [6]. However, dieting-induced metabolic adaptations in the homeostatic system that controls body weight [7,8], together with unsustainable behavioral habits and cognitive factors, can hinder long-term weight loss [9]. At present, there are still great difficulties in the long-term treatment of obesity in children and adolescents owing to the rising prevalence and limited healthcare resources. Thus, it is necessary to carry out refined management of obesity based on different perspectives.
In recent years, growing interest has been directed at a distinct subgroup of obese individuals with "metabolically healthy obesity" (MHO), characterized by normal blood pressure (BP), blood lipids, blood glucose, and insulin sensitivity despite excessive body fatness [1]. Distinguishing obesity based on metabolically healthy status is useful for identifying individuals or subgroups with high cardiovascular and metabolic risks and for optimizing prevention and treatment strategies for obesity [10,11].
In adults, numerous studies have confirmed comparable health benefits in response to diet- and exercise-based intervention between MHO and "metabolically unhealthy obesity" (MUO) groups [12][13][14][15][16][17][18], while greater weight loss may produce more cardiovascular health benefits [16,18]. However, the consensus definition of MHO in children was introduced only recently, in 2018 [19], and few studies have focused on the effect of diet- and exercise-based intervention on children with different metabolically healthy statuses [1]. In fact, metabolically healthy status in childhood is vital because many metabolism-related disorders begin in early childhood and over time significantly increase the CVD risk in young adults [20]. Therefore, this study evaluated the influence of a traditional diet- and exercise-based intervention on weight loss (WL) and cardiometabolic health state among MHO and MUO children and adolescents. The results can contribute to the precise management and treatment of childhood obesity according to metabolically healthy status.
Study Population and Database
A total of 282 obese children and adolescents (54% males, age (±SD) 12.9 (±2.3) years) who volunteered to participate in the WL Summer Camp program at the Shanghai Dian Feng Weight Loss Center between 2017 and 2019 were evaluated. The center is a national standard-setting unit for weight control designed to educate overweight and obese patients to establish healthy lifestyle habits based on a "5D Weight Loss Education System" (Mindset, Goal, Approach, Motivation, and Action). The center operates all weight control services, including diagnostic testing, physical examinations, personalized diet and exercise programs, blood analysis, online guidance, educational group sessions, and follow-up services. Patients were not recommended for the WL Summer Camp if they presented with concomitant renal, hepatic, or cardiac disease, and/or were being treated with bariatric surgery or medications that would affect body weight (BW) in the initial screening. A sample of 1063 participants was recruited from WL centers located in Shanghai and Beijing and included in this study. Subjects whose age was not recorded or who were ≤6 years or ≥19 years (n = 37) were excluded from the analyses. Individuals with missing initial or post-intervention anthropometric and metabolic parameters (n = 598), those with a body mass index (BMI) ≤ the age- and sex-specific 95th percentile (n = 38), and those who attended the camp for less than 3 weeks or more than 4 weeks (n = 105) were also excluded. Patients with extreme values of measurements (<1st percentile or >99th percentile) (n = 3) were considered outliers and excluded from the analyses. Finally, 151 males and 131 females were included in this research. The study protocol was explained in detail, and written informed consent was obtained from all participants and their parents. The study protocol was approved by the local institutional ethics committee and adhered to the tenets of the Declaration of Helsinki (Ethics approval number: 2021tjdx046; 9 March 2021). This research was conducted independently, and its findings and conclusions were not influenced by the participating centers.
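The exclusion cascade above accounts exactly for the final sample; a quick arithmetic check (counts taken from the text, labels paraphrased):

```python
# Reconstruction of the exclusion cascade described above.
recruited = 1063
excluded = {
    "age missing, <=6 y, or >=19 y": 37,
    "missing pre/post measurements": 598,
    "BMI <= 95th percentile": 38,
    "attendance <3 or >4 weeks": 105,
    "outliers (<1st or >99th percentile)": 3,
}
final_sample = recruited - sum(excluded.values())
print(final_sample)  # 282 (151 males + 131 females)
```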
Diet and Exercise Protocol
All subjects received a professional assessment and individualized diet and exercise advice for 3 to 4 weeks (average intervention days for all participants: 24.9 ± 1.9 days; MHO: 24.7 ± 2.4 days; MUO: 25.1 ± 1.5 days; p = 0.180). The recommended daily energy intake was based on the basal metabolic rate requirement obtained from body composition measurement during the intervention [21]. The caloric intake was calculated based on the Chinese food chart, and food categories recommended by the Dietary Guidelines for Chinese Residents (2016) were selected [22]. The types of breakfast foods usually included buns, soy milk, milk, porridge, and eggs. The types of foods for lunch and dinner mainly included vegetables (lettuce, cabbage, celery, broccoli, radish, cauliflower, tomato, mushroom, etc.), high-protein meats (red beef and pork, fish and shrimp meat, chicken breast, etc.), eggs (mainly hen eggs) or beans (mainly soybeans and their products), coarse cereals (mainly rice and steamed bread) or potato foods (potatoes, sweet potatoes, yams, etc.), and a piece of after-meal fruit (usually banana, apple, orange, watermelon, cantaloupe, pitaya, etc.). Three well-balanced meals were provided each day with the following calorie allocations, according to previous methods [21]: protein, 20% to 30%; carbohydrate, 50% to 60%; and fat, 20%. Breakfast accounted for 35% of the total daily energy intake, lunch for 40%, and supper for 25% [21]. The prescribed diet was formulated by dedicated nutritionists, and pivotal nutrients such as vitamins, minerals, essential amino acids, fiber, and polyunsaturated fatty acids were included.
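To make the allocation concrete, the sketch below turns a prescribed daily energy target into per-meal calories and macronutrient grams. The 1,800 kcal target is a made-up example, and the macronutrient shares use midpoints of the stated ranges, so treat the numbers as illustrative only.

```python
# Calories per gram for each macronutrient (standard Atwater factors).
KCAL_PER_G = {"protein": 4, "carbohydrate": 4, "fat": 9}

def meal_plan(daily_kcal: float) -> dict:
    """Split a daily energy target into per-meal calories (35/40/25%)
    and macronutrient grams (midpoints of the stated ranges)."""
    meals = {"breakfast": 0.35, "lunch": 0.40, "supper": 0.25}
    macros = {"protein": 0.25, "carbohydrate": 0.55, "fat": 0.20}
    return {
        "meals_kcal": {m: round(daily_kcal * p) for m, p in meals.items()},
        "macros_g": {n: round(daily_kcal * p / KCAL_PER_G[n])
                     for n, p in macros.items()},
    }

print(meal_plan(1800))
# {'meals_kcal': {'breakfast': 630, 'lunch': 720, 'supper': 450},
#  'macros_g': {'protein': 112, 'carbohydrate': 248, 'fat': 40}}
```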
All participants underwent the same incremental exercise test as in previous methods [23], on the premise that the resting 12-lead electrocardiogram was normal. Participants began running on a flat treadmill (H/P/Cosmos Pulsar, Nussdorf-Traunstein, Germany) at a speed of 4 km/h, increased by 2 km/h every 2 min, and then paused for 10 s to record the immediate electrocardiogram (ECG) at the end of each exercise load level, followed by the next exercise load until 8 km/h was reached. The test was stopped if the subject could not bear the exercise intensity or abnormalities appeared in the ECG. The exercise program (6 days/week, 2 sessions daily, 2 h/session) consisted of jogging, aerobics, basketball, swimming, badminton, etc. Each session included a warm-up, aerobic exercise at an intensity of 20-40% of heart rate reserve (HRR = 220 − age − resting heart rate (RHR)), and a cool-down stage [23,24]. Exercise intensity was monitored with a finger-clip pulse oximeter recording heart rates when feasible. In addition, Borg's rating of perceived exertion (RPE) was applied to assist in adjusting individual exercise intensity. Taking a basketball class as an example, the 2-h section is divided into 2 classes. The first class mainly includes four parts: (1) preparatory activities (15 min, about 20% HRR, mainly jogging and dynamic stretching of the joints and muscles of the whole body), (2) basic basketball skills practice (15 min, about 20-30% HRR, mainly including body posture practice, training without the ball and then practice with the ball), (3) fun games (15 min, about 30-40% HRR, mainly including fun passing, running and other mini-games), and (4) rehydration and rest (15 min). The second class also includes four parts: (1) small-field games (20 min, about 40% HRR, 4 vs. 4), (2) physical fitness exercise (20 min, about 20-30% HRR, upper and lower body explosive power, strength endurance, agility, and coordination exercises, etc.), (3) post-exercise relaxation activities (10 min, mainly static stretching), and (4) discussion and communication (10 min, mainly to summarize the performance of the activity). Professional physicians and trained coaches were employed to ensure the health eligibility and safety of all participants. In addition, subjects were encouraged to develop good lifestyle habits through health lectures, nutritional and kinesiology knowledge, early bedtimes and early rising, less screen time, etc.
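Converting the 20-40% HRR band into an actual target heart-rate range uses the heart-rate-reserve formula above; mapping the intensity band to target rates as RHR + %HRR (the Karvonen method) is our assumption about how the band was applied. A minimal sketch:

```python
def target_hr_zone(age: int, rhr: int,
                   lo: float = 0.20, hi: float = 0.40) -> tuple:
    """Target heart-rate zone from heart-rate reserve:
    HRR = 220 - age - RHR; target = RHR + intensity * HRR."""
    hrr = 220 - age - rhr
    return rhr + lo * hrr, rhr + hi * hrr

# Example with invented values for a 13-year-old with RHR 80 bpm.
print(target_hr_zone(13, 80))  # (105.4, 130.8) bpm
```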
Data Collection
Questionnaire surveys were sent out to collect demographic characteristics (age, sex), medical history, and lifestyle information. Similar to the measurements used in previous studies by our team [21,24], BW and height were measured using a digital scale (Yaohua Weighing System Co., Shanghai, China) and a wall-mounted stadiometer (TANITA, Tokyo, Japan) following the manufacturers' instructions [25], respectively. BMI was calculated as BW (kg) divided by height squared (m²). Waist circumference (WC) and body fat ratio (BFR) were measured using an impedance analyzer (TANITA, Tokyo, Japan), and systolic blood pressure (SBP), diastolic blood pressure (DBP), and resting heart rate (RHR) were measured using a sphygmomanometer (Nishimoto Sangyo Co., Tokyo, Japan) following the manufacturer's instructions [21,24]. Twelve-hour overnight fasting blood samples taken at baseline and after the intervention were centrifuged, aliquoted, and immediately frozen, and further analyzed by Adicon Medical Laboratory Center (Shanghai, China), which is certified under the China Inspection Body and Laboratory Mandatory Approval (CMA) scheme. All instruments were calibrated every day, and all assessments were conducted by trained surveyors during the research.
Definition of MHO and MUO
According to the consensus reached in 2018 [19], MUO is defined as meeting one or more of the following risk factors: (1) high-density lipoprotein-cholesterol (HDL-C) ≤ 1.03 mmol/L (or ≤40 mg/dL); (2) triglycerides (TG) > 1.7 mmol/L (or >150 mg/dL); (3) SBP or DBP > 90th percentile; (4) fasting blood glucose (FBG) > 5.6 mmol/L (or >100 mg/dL). If none of the risk factors were present, the patient was categorized as MHO. In addition, the latest industry standards for the diagnosis of obesity (age- and sex-specific 95th percentiles) and abnormal blood pressure (age-, sex-, and height-specific 90th percentiles) in Chinese children and adolescents were adopted in our study [26,27].
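The definition reduces to a simple any-of-four rule. The sketch below is an illustrative implementation, not part of the study's pipeline; the blood-pressure criterion is passed in as a precomputed boolean because it depends on external age-, sex-, and height-specific reference tables.

```python
def classify_phenotype(hdl_c: float, tg: float, fbg: float,
                       bp_above_90th_pct: bool) -> str:
    """Consensus-based MUO/MHO classification (lipid and glucose
    values in mmol/L); any single risk factor yields 'MUO'."""
    risk_factors = [
        hdl_c <= 1.03,        # low HDL-C
        tg > 1.7,             # hypertriglyceridemia
        bp_above_90th_pct,    # SBP or DBP > 90th percentile
        fbg > 5.6,            # hyperglycemia
    ]
    return "MUO" if any(risk_factors) else "MHO"

print(classify_phenotype(hdl_c=1.2, tg=1.1, fbg=5.0,
                         bp_above_90th_pct=True))  # MUO
```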
Statistical Analyses
Descriptive data are presented as mean ± standard deviation (SD) and were analyzed with SPSS Statistics 25.0. The normality of all variables was tested with the Kolmogorov-Smirnov test. Independent t tests and chi-square analyses were conducted, where applicable, to compare continuous and categorical variables at baseline, respectively. Differences in anthropometric and metabolic indicators before and after the intervention in each group were analyzed via paired t tests. For comparison of percent changes in variables between the MUO and MHO groups, analysis of covariance (ANCOVA) was conducted with metabolically healthy status as the between-subjects factor and with the baseline level of the dependent variable and other confounding variables included as covariates. Interactions of sex and age with group (MUO vs. MHO) were also considered. As no evidence of interactions was observed, the analysis was conducted using the whole sample. Significance was set at a 2-tailed p value < 0.05.
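The paper ran the ANCOVA in SPSS; the following is a hedged Python sketch of the same model fitted by ordinary least squares via statsmodels' formula interface. The DataFrame, its column names, and all values in it are invented stand-ins for the real per-subject table.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy per-subject table standing in for the real data (values invented).
df = pd.DataFrame({
    "pct_change": [-8.2, -6.1, -9.5, -5.0, -7.7, -6.8, -8.9, -5.6],
    "baseline":   [118, 112, 125, 110, 121, 115, 123, 111],
    "age":        [12, 14, 13, 11, 15, 12, 14, 13],
    "sex":        ["M", "F", "M", "F", "M", "F", "F", "M"],
    "group":      ["MUO", "MHO", "MUO", "MHO", "MUO", "MHO", "MUO", "MHO"],
})

# ANCOVA as an OLS model: group effect on percent change, adjusted for
# baseline value, age, and sex.
model = smf.ols("pct_change ~ C(group) + baseline + age + C(sex)",
                data=df).fit()
print(model.params)  # C(group)[T.MUO] is the adjusted group effect
```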
Distribution of Indicators Related to Metabolically Healthy Status of the Two Groups before and after Intervention
Among MUO children, 90 (50.0%) subjects presented only one risk factor, 59 (32.8%) presented two risk factors, 26 (14.4%) presented three risk factors, and only 5 (2.8%) had all four risk factors in the definition of the MUO phenotype (Table S1). Table 2 shows the frequency changes of the metabolically healthy indicators used to distinguish MHO from MUO before and after the intervention. Although meeting the MHO definition criteria at baseline, 36 subjects (35.3%) in the MHO group transitioned to the MUO state after the intervention. Accordingly, increased frequencies of low HDL-C (28.4%), hypertension (6.9% with higher SBP and 5.9% with higher DBP), and hyperglycemia (1.0%) were found in our results. In contrast, the most frequent metabolic risk factor in the MUO children at baseline was hypertension (65.0% with higher SBP and 48.9% with higher DBP), followed by low HDL-C (42.2%), hypertriglyceridemia (12.2%), and high FBG (1.7%). As expected, the frequencies of hypertension (20.0% with higher SBP and 17.8% with higher DBP) and hypertriglyceridemia (1.1%) decreased remarkably after the intervention in the MUO group, except for low HDL-C (56.7%), the frequency of which displayed a moderate increase. In addition, a total of 68 individuals transitioned from the MUO state to MHO (28.9%) or non-obese status (8.9%, BMI ≤ age- and sex-specific 95th percentile) after the intervention.
1 Age- and sex-specific percentiles; for blood pressure, also height-specific. MHO, metabolically healthy obesity; MUO, metabolically unhealthy obesity; WL, weight loss; BMI, body mass index; HDL-C, high-density lipoprotein cholesterol; TG, triglycerides; SBP, systolic blood pressure; DBP, diastolic blood pressure; FBG, fasting blood glucose.
Changes of Anthropometry and Blood Indicators in MHO and MUO Groups before and after Intervention
In the MHO group, a distinct decrease in most of the anthropometric indicators (BW, BMI, BFR, WC, SBP, DBP, and RHR) and blood indicators (TG, TC, HDL-C, and low-density lipoprotein-cholesterol (LDL-C)) was found after the intervention compared with before the intervention (for all comparisons, p < 0.01), except for FBG, which showed only a downward trend that did not reach statistical significance (p > 0.05) (Table 3). In the MUO group, all the above indicators improved evidently after the intervention compared with before the intervention (for all comparisons, p < 0.01). Analysis of covariance (ANCOVA) showed that the percent changes in SBP (p < 0.001), DBP (p < 0.001), RHR (p = 0.025), FBG (p = 0.011), and TG (p < 0.001) in the MUO group were more prominent than those in the MHO group in response to the intervention, while the percent changes in BW (p = 0.317), BMI (p = 0.077), BFR (p = 0.292), WC (p = 0.357), TC (p = 0.670), HDL-C (p = 0.121), and LDL-C (p = 0.730) were comparable between the two groups. In general, these results suggest that the MUO group may benefit more significantly than the MHO group in modulating CVD-related metabolic risk when undergoing diet and exercise intervention.
Data presented as group means ± SD. ** p < 0.01. "†" and "§" denote comparisons between before and after the intervention in the MHO and MUO groups, respectively. The p value refers to the comparison of percent changes in variables between the MHO and MUO groups. Baseline levels of dependent variables were adjusted for in each ANCOVA analysis. For the analysis of BW and blood pressure, age, sex, and height were also adjusted; for the analysis of BMI, age and sex were also adjusted; for the analysis of WC, age, sex, and BMI were also adjusted. BW, body weight; BMI, body mass index; BFR, body fat ratio; WC, waist circumference; SBP, systolic blood pressure; DBP, diastolic blood pressure; RHR, resting heart rate; FBG, fasting blood glucose; TG, triglycerides; TC, total cholesterol; HDL-C, high-density lipoprotein-cholesterol; LDL-C, low-density lipoprotein-cholesterol.
Discussion
To the best of our knowledge, this study is the first to examine the effects of a traditional diet- and exercise-based intervention on WL- and CVD-related risk among MHO and MUO Chinese children and adolescents. Consistent with previous reports, MHO children were significantly younger and had less excess body weight compared with MUO children [11,28], who were characterized at baseline by more frequent manifestations of hypertension, low HDL-C, and hypertriglyceridemia, in that order. The diet- and exercise-based intervention had a similar regulatory effect on reductions in BW, BMI, BFR, WC, TC, HDL-C, and LDL-C between the two groups, while the improvement in SBP, DBP, RHR, FBG, and TG was more prominent in the MUO children than in the MHO group. These results are in accordance with those reported in adult studies [14,16,17] and suggest that diet- and exercise-based intervention is quite beneficial to both MHO and MUO individuals, and that the latter may benefit more. Moreover, the observation that 37.8% of MUO children (68 of 180) transitioned to MHO (52 subjects, 28.9%) or non-obese (16 subjects, 8.9%) status after the intervention indicates a strong metabolic plasticity in childhood compared with adulthood [17]. Finally, the changes in HDL-C levels after the intervention in both groups suggest that strategies should be adopted to prevent HDL-C declines during diet- and exercise-based WL. Our results provide favorable recommendations for the precision management of childhood obesity based on metabolically healthy status.
The previously reported prevalence of the MHO phenotype in children varies from 20 to 68%, depending on the MHO definition and study population [11,29,30]. The prevalence of the MHO phenotype in our study (36%) is higher than that reported by Chen F et al. (15.3%) [31], analogous to that reported by Genovesi S et al. (39%) [11], but lower than that in another study by Reinehr T et al. (49%) among a large population of obese children [28]. Regarding the distribution of risk factors among MUO children, the findings that 50% of the MUO children presented only one risk factor and that the most frequent metabolic risk factor was high SBP are consistent with the above reports [11,28] and indicate that most of the MUO phenotype in childhood and adolescence is relatively mild, and that management of blood pressure, especially SBP, should be one of the important goals in protecting against MUO.
For morphological indicators, although there were significant differences in baseline BW, BMI, and WC between the two groups, the authors found that metabolically healthy status had no significant effect on the percent changes in these indicators after the intervention with age and sex adjusted, suggesting comparable benefits in improving these morphological indicators (including BFR) in the two groups. These results are in accordance with previous studies [14,17] and confirm the well-matched WL effects between MHO and MUO children.
Hypertension, an important indicator for distinguishing between MUO and MHO [1], is the most frequent CVD-related risk in the MUO individuals in our study. The mechanisms of hypertension in obese children are complex and may be associated with sympathetic activation, renin-angiotensin system activation, inflammation, endothelial dysfunction, and oxidative stress [32]. WL through healthy lifestyle modifications, such as diet and physical activity, is the cornerstone of the treatment of obesity-related hypertension [9,33]. Accordingly, decreases in BMI have been reported to be associated with decreases in blood pressure and blood lipids [34]. As expected, the authors found that the diet- and exercise-based lifestyle intervention improved SBP and DBP significantly in the MUO group, both in the frequency of hypertension and in the range of BP changes. Although BP was within the normal range at baseline, a moderate but significant drop after the intervention was also observed in the MHO group. The greater decrease in BP in the MUO group than in the MHO group may be due to the distinct difference in baseline BP between the two groups. These results are in line with some previous studies on adults [14,16,18] and show that lifestyle intervention is beneficial to BP control in both the MHO and MUO groups. On the other hand, the better regulation of BP may be one of the reasons why lifestyle intervention brings further health benefits for individuals in the MUO group, as a recently published study found that no change in BMI correlated with no change in blood pressure among children and adolescents [34].
The greater change in RHR in the MUO group than in the MHO group suggests a more distinct improvement in cardiovascular fitness after the intervention [35]. Although FBG was included as one of the criteria for defining MUO [19], no individual belonged to the MUO group solely because of FBG > 5.6 mmol/L in our study. The three cases of hyperglycemia in the MUO group were all accompanied by other risk factors used to define MUO (Table S1). This phenomenon is similar to the low proportion of hyperglycemia in the MUO phenotype reported in other studies [11,17]. Therefore, FBG combined with other indicators, such as insulin resistance, glucose intolerance, glycosylated hemoglobin, or insulin sensitivity [19], may be more reliable for identifying MUO. The observation that baseline FBG levels were similar between the two groups but evidently lower (still within the normal range) in the MUO group than in the MHO group after the intervention indicates greater plasticity and stability of glucose regulation in MUO individuals. In addition, the reason why the improvement in TG was more profound in the MUO group than in the MHO group is similar to that for the above-mentioned BP changes.
The similar range of change in TC, HDL-C, and LDL-C in the two groups after the intervention indicated a parallel effect that might be independent of metabolically healthy status. Notably, the HDL-C level decreased significantly after the intervention in both groups. Indeed, there is a paradoxical link between diet- and exercise-induced weight loss in children and adolescents and HDL-C levels, with increases [36], no change [37], and decreases [21,38] all having been reported for the concentration of this recognized biomarker of cardiovascular health. A biologically plausible explanation for the reduction in the HDL-C level may be the metabolism related to fat intake, as fatty acids are substrates for HDL-C components, especially the smaller, denser particles exhibiting greater protective potential [38,39]. Moreover, lower levels of HDL-C might indicate fewer particles to mediate its multiple functions. Aicher BO et al. [40] reported that unreduced apolipoprotein A-I levels and enhanced reverse cholesterol transport via the ABCA1 transporter might facilitate the cholesterol efflux capacity of HDL-C. Therefore, the decrease in HDL-C in this setting may not be associated with increased CVD risk. In addition, it is worth noting that the decline in the HDL-C level was also the main reason for the 36 MHO individuals shifting to the MUO state, suggesting that metabolically healthy status is highly variable and that therapeutic goals aimed at protecting against the decrease of HDL-C should be considered for both MHO and MUO individuals. A Mediterranean diet, especially when enriched with oilseeds and virgin olive oil [39,41], and the addition of aerobic and resistance training during a WL program will significantly enhance parameters of the cardioprotective functions of HDL-C [38,42,43].
This study has several limitations. First, the baseline data are not matched, as the MHO group tends to be younger and less obese than the MUO group due to the study's retrospective design. The large age range of subjects may be the main reason for this difference. Indeed, some anthropometric and metabolic variables differ greatly depending on age and sex among children and adolescents [44,45]. Thus, future attention should be paid to age- and sex-based anthropometric and metabolic changes in these groups during weight loss to obtain more convincing results. Second, the authors focused on a specific population of MHO/MUO children and adolescents who actively participated in a fully enclosed WL summer camp that was not free and lasted for at least 3 weeks, so the subjects of this study might not be representative of all obese children and adolescents. Third, dietary habits may be determinants of the metabolic differences between MHO and MUO populations; therefore, the neglect of changes in dietary habits is one of the important shortcomings of this study. Future researchers should pay more attention to the impact of dietary habit changes on weight loss and metabolic changes in MHO/MUO children and adolescents. Additionally, the scattered abnormal changes in BP and FBG in the MHO group after the intervention were likely attributable to measurement or operation errors, which should be addressed in future studies; keeping subjects still before blood pressure measurement may improve measurement accuracy. Finally, because metabolically healthy status in obese children and adolescents is susceptible to change and highly plastic, further long-term clinical randomized controlled trials are required to obtain more convincing time-to-event results.
Conclusions
In summary, our study shows that both MHO and MUO children and adolescents can benefit equivalently from diet- and exercise-based interventions in the improvement of BW, BMI, BFR, WC, TC, and LDL-C levels, while SBP, DBP, RHR, FBG, and TG levels improved more obviously in the MUO group than in the MHO group. Thus, our findings support the importance of maintaining metabolic health across all BMI groups among Chinese children and adolescents. Importantly, our results indicate that metabolically healthy status is transient and liable to change during childhood and adolescence, a period with vital implications for maintaining a metabolically healthy state in adulthood. Early targeted interventions, such as strategies aiming at lowering BP and preventing the decrease in the HDL-C level, should be considered for the precise treatment of obesity in clinical practice, with the goal of improving the metabolically healthy state.
Author Contributions: Conceptualization and study design, manuscript preparation, Q.Y. and Q.T.; methodology, review, and editing, T.C.; test supervision, data curation and preliminary analysis, K.W., J.Z. and L.Q. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Tongji University (protocol code 2021tjdx046; 9 March 2021).
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
Data sharing is not applicable to this article. | 2022-05-20T15:17:23.394Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "0e954a97a53ae6d6a282aab01066d6e1d75ea55c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/10/6120/pdf?version=1652855166",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d7871c2b43c3c04133e841b39d06c845c179d728",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
112725014 | pes2o/s2orc | v3-fos-license | Innovation and Entrepreneurship: The Case of Afe Babalola University Ado Ekiti
Innovation has been defined variously by authors. An all-encompassing definition describes innovation as the successful introduction of a new thing or method which is put into economic use. The entrepreneur is an innovator who is at the center of an integrated model of economic development. This paper examined the goals and sources of innovation and entrepreneurship and applied them to the motives of Aare Afe Babalola in embarking on the founding of a university, a social project that is highly capital intensive, without a profit motive. Afe Babalola suffered deprivation and hardship while growing up. Yet he was not deterred. He achieved greatness by dint of hard work and self-development. In spite of his great childhood challenges, he decided to give a substantial part of his life investment back to humanity. Dissatisfied with the decay in university education in Nigeria, he founded the university with the main objective of reforming education in Nigeria by example. The university, which started on January 4, 2010, has been acknowledged as the best and the fastest growing private university in Africa. The paper made recommendations for the sustainability of the institution. Keywords: innovation, entrepreneur, invention, creativity, succession
INTRODUCTION
Innovation has been defined variously by authors. It has been defined as a change in the thought process for doing something, or the useful application of new inventions or discoveries. An all-encompassing definition is the one given by Lueck and Katz (2003), which describes innovation as the successful introduction of a new thing or method; an embodiment, combination or synthesis of knowledge in original, relevant, valued new products, processes or services. Creativity and innovation are often used interchangeably by some authors. This is misleading. Creativity is the basis of innovation, while innovation is the end product, the successful implementation of creative ideas. Similarly, some confuse invention and innovation. In business, invention is expressed as the conversion of cash into ideas, while innovation is the conversion of ideas into cash. Innovation is the end product of invention. While invention carries no risk to the organization, innovation typically carries with it a lot of risk. An invention becomes an innovation only when it is put into economic use. Innovation usually improves performance and growth in business or social organizations through efficiency, productivity, competitive advantage, market share, etc.
The entrepreneur is an innovator (Schumpeter, 1934). Schumpeter places the entrepreneur at the center of an integrated model of economic development incorporating a theory of the crisis of the capitalist system. The model began with a circular flow, an unchanging economic process which merely reproduces itself at constant rates in a closed domain, with an absence of innovators or entrepreneurs. The model, which assumed perfect competition and full employment, without capital accumulation and with no technical change, helped to bring out more clearly the impact of the emergence of entrepreneurs. Schumpeter (1934) argued that an entrepreneur who is motivated by money profits introduces into this model an innovation, which consists of a new production function or of raising the marginal productivity of the various factors of production. The orthodox assumptions that business behaviour is ideally rational, prompt, similar for all firms, and appropriate within the precincts of tried experience and familiar motive break down when the business community encounters new possibilities of business action which are as yet untried and about which the most complete command of routine teaches nothing.
GOALS OF INNOVATION
The entrepreneur sees profits as the premium for innovation. Innovation, however, sets up only a temporary monopoly gain, which is soon wiped out by imitation. For profits to continue, it is necessary for an innovator to stay one step ahead of rivals with new innovations, because once a new production function has been successfully set up, imitation becomes easier for competitors. However, absolute imitation does not often present itself, as techniques developed outside the firm must be adapted to its own circumstances. This is especially true for a firm that operates in a less developed country but borrows technology from an advanced economy with different relative factor prices. This author does not share the general clamour in the developing countries for transfer of technology, as it is simplistic and misleading. With few exceptions, even the simplest processes cannot be lifted bodily from an industrial country and established unchanged in a non-industrial country, and this is even more true for advanced industrial processes. In almost every instance, an innovation imported from an industrialised country will have to be adapted to the new environment in order to be effective in the non-industrial country, where the facilities by which it was developed in the industrial country do not exist. And there is no automatic advance from one stage to the next. At every point, new problems must be solved whose solutions cannot be obtained anywhere else. Hagen (1980) admonished that the course of industrialization in any country must, to some degree, be created within that country. Other goals of innovation include the following:
• The introduction of a new product which may be completely new to the consumer or modified to extend the product range
• The introduction of a new product to meet an unmet need
• The introduction of a new production function that reduces input costs, with a consequent drop in product costs, or a product that reduces environmental damage or improves compliance with government or professional regulations
• The opening of a new market
• The conquest of a source of supply of raw materials
• The breaking of a monopoly position
• The carrying out of a new organization of an industry
SOURCES OF INNOVATION
Innovation may emerge from various sources such as:
• From the entrepreneur or his agents, who may wish to achieve one or more of the above listed goals. This may be through research and development (R&D), which is often time consuming and expensive, or by the agent of the entrepreneur in the course of the agent's performance of his official duties
• From the end user who is not sufficiently satisfied with the existing product performance
• From a private professional who is constantly involved in research and development.
THE CASE OF AFE BABALOLA UNIVERSITY
The University, which started operation in January 2010, was founded by Aare Afe Babalola with the main objective of reforming education in Nigeria by example. That in itself is an innovation in the operation of the university system in Nigeria. Babalola believes private individuals should rise to the challenge of providing qualitative and affordable tertiary institutions, as is done in other nations of the developed world, where the quality and performance of the graduates produced are at par with the competitive world field. It is a university with established codes of conduct, rules and regulations, and disciplinary measures, which are pursued rigorously and religiously.
The Goals of the University are:
• To be a world-class educational centre of excellence in academics, character, sports and vocational development
• To be a result-oriented institution for producing highly skilled and socially relevant graduates capable of applying scientific knowledge to the resolution of social problems.
These goals are guided by the need to:
• Produce professionals who are sound and agile
• Ensure that, through the university and its academic and professional programmes, its graduates emerge as people with professional skills who become leaders, achievers, self-reliant, kind, generous, considerate and sportsmanlike
• Make students believe in Babalola's golden rule that, no matter one's background, nothing is impossible, and with hard work, one can make it to the top.
This vision is in line with the policy of the Nigerian government, which encourages public-private partnership in educational development.
Afe Babalola was born about 1931 (Babalola, 2008). His only formal education was at the primary school between 1938 and 1945, where he obtained the primary six school leaving certificate. He acquired the degree of BSc. Economics from the University of London in 1959 as an external candidate, and thereafter studied law as an external candidate of the same university. Since it was mandatory for a law student to be a member of an Inn of Court, he had to go to Lincoln's Inn, London to complete his bar examinations and become a registered member of the Bar of England and Wales. While in London, he visited the Secretary to the Senate of the London University, where Babalola was described as "the wonder man who specialized in private study" (Babalola, 2008:49). He obtained the degree of LLB honours of the London University as an external student in 1963.
WHAT ARE THE FACTORS THAT MOTIVATED AFE BABALOLA TO FOUND A UNIVERSITY
The following factors have been argued to aid innovation. They include:
Recognized need: The axiom that recognized need encourages innovation has been found not to be always true. History is replete with examples of innovations which occurred independently of any practical need or which, more frequently, failed to obtain a sponsor or market. For example, Cole (1959) contended that the Stanley Steamer automobile in the United States in the early part of the twentieth century failed not because it was inferior to other automobiles with the internal combustion engine, but because of the Stanley brothers' failure to mass produce the automobile soon after it was developed.
However, the appointment of Afe Babalola as the Pro-Chancellor and Chairman of Council of the University of Lagos brought him into the mainstream of university administration. That exposed him to the myriad problems confronting university education in Nigeria. He therefore determined to make a difference; hence the decision to establish the Afe Babalola University.
Competent people with relevant technology:
Innovation has been found to occur where innovational persons exist. Babalola was at various times a pupil teacher, a secondary school teacher, Vice Principal, university teacher, economist, auditor, administrator, farmer and educationist. Afe Babalola has been involved in the development of tertiary institutions in Nigeria since 1980 (Babalola, 2008). He is a philanthropist, an innovator and an entrepreneur.
Demand and supply:
In a critique of the thesis that exports to the industrial countries were the "engine of development" in newly developing countries in the 19th century, Kravis (1970) showed that the differences between superior, middle and inferior export performance by these countries depended much less on differences in world demand for their products. Some energetic and innovational countries gained an increased share of the world market whilst others lost part of theirs. He argued that if a vigorous innovating spirit is present among the people of a country at a time when world demand for one of the country's major products is rising or is in prospect, the opportunity presented is likely to channel innovational talent into improvement in the methods of making the product. If a vigorous innovating spirit is not present, the country is likely to rest on its oars without innovating and merely enjoy the prosperity that has come its way. This author found that there is high demand for university education in Nigeria, as less than thirty percent of applicants for admission get placement in the available universities. The Afe Babalola University, which started with two hundred and forty students in three colleges in its first academic session, admitted double that figure in four colleges in the first batch of admissions for its second session. The university has been described as the fastest growing private university in Africa (Anon, 2010).
Survival and achievement:
The motive to innovate has been found to be especially strong in people who have no other chance of achieving social distinction. Such people are motivated by the will to conquer: they are propelled by the impulse to fight for recognition and to succeed for its own sake, not for the fruits of success but for success itself. The financial result is a secondary consideration or, at all events, valued as an index of success and as a mark of victory. Finally, there is the joy of creating, of getting things done, or simply of exercising one's energy and ingenuity (Schumpeter, 1934). Afe Babalola recounted one incident from his childhood that has remained indelible in his mind. For fear of corporal punishment for being late to school on a rainy day, he set out for school in the rain. Rather than empathizing with him, two of his uncles who were watching made jest of him in the following dialogue (Babalola, 2008:19-20). The first uncle said, "Look at this boy. He is shaking and shivering and yet he wants to go to school in the rain, to Okesha, two and a half miles away." His second uncle responded, "Don't mind him and his father. The father has only one son. Instead of making use of him to carry his hoe and cutlass, he decided to send him to school." Continuing, the first uncle added rhetorically, "What is he going to do with his education?" "Nothing. He will come back to the farm," retorted the other. As a young boy, Babalola did not say a word but never forgot what they said. There and then he vowed within himself to be successful in his studies and never to return to the farm as a professional farmer.
Social setting: if a native group, too deeply rooted in their culture, is regarded as alien in a society, they may decide to be innovative and distinct by way of dressing, manners and traditions, even if deviant in some respects. If such a group forges ahead economically in fields that have traditionally been distasteful, a problem is created for the rest of the society: the derogators may become economically inferior if the hitherto inferior person achieves economic success. Then a familiar principle of sociology comes into play, the principle that it is relative status, not absolute position, that moves people (Epstein, 1962). In spite of his brilliance, Babalola's application for a federal government scholarship to read MSc Economics after his BSc Economics was impliedly refused: instead of an award for MSc Economics, he was awarded a downgraded scholarship to read a Diploma in Estate Management at a Nigerian College of Technology, after a degree of BSc Economics (Babalola, 2008). This was possible because he came from a lowly placed background and had no person in high places to plead his cause. He was therefore determined to achieve success in future.
Derogation: Hagen (1962) reported, from the historical development of seven countries, that the sense within any social group that leading groups of the same blood and general culture in its society looked down on it tended to create within the group the need to excel, a concern that children must be capable so that they can overcome or discount the derogation, and a corresponding joy in their little achievements that would arouse the need for achievement. Thus derogation, it was argued, is a source of the talents that lead to innovation. Babalola was the son of a farmer who, though hard working and very brilliant, did not get recognized because farming was of low value. No wonder Babalola read law and pursued it to the pinnacle of his career as a Senior Advocate of Nigeria (SAN). He has mentored many lawyers, among them many Senior Advocates of Nigeria, several judges and Attorneys-General.
RECOMMENDATIONS
Many privately owned enterprises in Nigeria have collapsed soon after the demise of their owners simply because of poor management structure, lack of well laid out succession plans and the overinvolvement of family members. These issues must be adequately addressed for the enterprise to outlive the entrepreneur.
Management structure:
The pattern of governance of ABUAD has been structured to remove all unnecessary administrative bottlenecks (Anon, 2009). It is organized in tandem with the standard university administration put in place by the National Universities Commission. These organs must be empowered to function appropriately. In order to achieve efficiency and effectiveness of administration, deliberate efforts must be made to ensure that the principle of unity of command is not violated by any member of the administration.
Well laid out succession plan:
The following succession plans must be put in place: human capacity, capital, infrastructural development and maintenance, and research and development.
Human capacity: One Nigerian chief was reported to have said that "Success without succession is no success." This author shares this view. A well laid out succession plan must be articulated and put in place. Many enterprises have been found to fail because the entrepreneur assumed that their children, wives or relations would take up the management of the business when they became inactive or ceased to exist. Regrettably, because the successors had not been sufficiently developed for such responsibilities, they failed and the businesses failed with them.
Capital: The founder has established an endowment fund which is expected to provide sponsorship for about five hundred brilliant and indigent students of the university in perpetuity (ABUAD, 2010:4). The endowment programme should be reviewed and re-launched annually to sustain awareness and keep pace with steadily rising inflation. Although the university is not a profit-making project, it has established a ventures arm which is self-sustaining and profit oriented. It is important that the ventures outfit be managed by qualified, competent and experienced professionals for effectiveness. Controls must be installed within the outfit to make it mandatory for all its activities to be guided by planned and approved annual budgets, quarterly financial reports and annual audited accounts.
Infrastructural Development and Maintenance:
Many infrastructural projects in Nigeria have been reported to fail soon after commissioning because maintainability was not built into them (Lawal, 2000). It is recommended that appropriate maintenance plans be put in place in the university. Such plans must be adequately provided for in the annual budgets of the University.
Research and Development:
The academic planning and the Research and Development activities must be strengthened to ensure regular review of the academic programmes of the university and to keep the University at least a step ahead of its competitors.
The over-involvement of family members: While it is necessary to assist extended family members, one very important problem in the survival of a business enterprise is the active involvement of members of the extended family. The extended family firm is ubiquitous in non-industrial societies. Extended-family ownership and management of a firm has been attacked as inimical to entrepreneurship. The main criticism is that obligations to relatives force the extended family firm to dissipate its capital in loans or gifts to family members whenever these are requested, and in the employment of incompetent persons (Nafziger, 1969). In assisting members of the extended family, merit must not be sacrificed for mediocrity.
CONCLUSION
Afe Babalola is an innovator and entrepreneur who has achieved greatness. He suffered deprivation while growing up, yet he was not deterred. He achieved greatness by dint of hard work and self-development. In spite of this, he decided to give a substantial part of his life's investment back to humanity, believing that what is perceived as impossible can and should always be made possible. This project, ABUAD, has been described by the Chairman of the Screening Committee of Private Universities of the National Universities Commission (SCOPU, 2009) as "...a reference point for us in SCOPU. They helped us to raise the quality bar for private universities. Those coming after Afe Babalola University will have a higher hurdle to scale." A recent past president of Nigeria and a university proprietor described ABUAD as "...a unique sample of private university with finesse, purpose, commitment and self sacrifice by the proprietor. More like this will change the panorama of tertiary institution and education in Nigeria. It is a model to emulate." (Obasanjo, 2010).
All tiers of government should provide infrastructure and a stable political and economic environment that will encourage more investors in education and other social services. Government should also give a five-year tax-free moratorium period to investors in education. | 2019-04-14T13:06:23.831Z | 2011-06-01T00:00:00.000 | {
"year": 2011,
"sha1": "6105116ba9082d0aeb1b14775ba61306859475a5",
"oa_license": null,
"oa_url": "https://doi.org/10.5251/ajsms.2011.2.2.202.207",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e24168db346a8f20068cdbf6d43a01cfc329a847",
"s2fieldsofstudy": [
"Business",
"Education",
"Economics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
218670857 | pes2o/s2orc | v3-fos-license | Manipulating the fluorescence lifetime at the sub-cellular scale via photo-switchable barcoding
Fluorescent barcoding is a pivotal technique for the investigation of the microscale world, from information storage to the monitoring of dynamic biochemical processes. Using fluorescence lifetime as the readout modality offers more reproducible and quantitative outputs compared to conventional fluorescent barcoding, being independent of sample concentration and measurement methods. However, the use of fluorescence lifetime in this area has been limited by the lack of strategies that provide spatiotemporal manipulation of the coding process. In this study, we design a two-component photo-switchable nanogel that exhibits variable fluorescence lifetime upon photoisomerization-induced energy transfer processes through light irradiation. This remotely manipulated fluorescence lifetime property could be visually mapped using fluorescence lifetime imaging microscopy (FLIM), allowing selective storage and display of information at the microscale. Most importantly, the reversibility of this system further provides a strategy for minimizing the background influence in fluorescence lifetime imaging of live cells and sub-cellular organelles. Here, the authors design a photo-switchable nanogel exhibiting variable fluorescence lifetime, and demonstrate visual mapping by using fluorescence lifetime imaging microscopy on a sub-cellular scale.
With increasing attention directed to exploring microscale events, more effective tools are needed to fully understand processes at the micro-level [1]. Microbarcoding is a versatile technique that provides multiplex and high-throughput information storage for micro- and nanoscale applications across the fields of biological, medicinal, and material sciences [2,3]. Particularly, optical multiplexing or fluorescent barcoding has recently attracted increasing interest, largely owing to its high sensitivity, fast signaling, and minimally invasive, nondestructive nature [4-8]. However, current fluorescent barcoding techniques mainly rely on the use of spectral multiplexing and fluorescence intensity (FI) encoding, which are typically susceptible to spectral overlap of the encoding elements. Moreover, obtaining a quantitative readout is a major challenge as a consequence of the variability of sample concentration and external microenvironment.
One of the forefront techniques for providing reproducible output in the microbarcoding area is the use of fluorescence lifetime, an intrinsic photophysical property that is independent of the local fluorophore concentration and the technique used for measurement [9,10]. With state-of-the-art microscopic imaging techniques, fluorescence lifetime could be used as a straightforward technology to minimize the limitations of traditional fluorescent barcoding, providing a reproducible and quantitative readout over time [10-12]. Current research into lifetime barcoding has predominantly exploited inorganic fluorescent materials containing lanthanide and transition metal ions, in which the fluorescence lifetime can be modulated by altering their structural configuration and composition during synthesis [13-16]. Alternatively, fluorescence lifetime can be tuned by modifying the efficiency of energy transfer between different molecules [17,18]. Hildebrandt and co-workers have revealed detailed insight into the energy transfer between lanthanide complexes and semiconductor quantum dots, leading to a photoluminescence lifetime platform for a variety of applications including bioanalysis, imaging, and information storage [19-22]. Despite these advances, the search for responsive materials in which the fluorescence lifetime can be flexibly adjusted in real time is still an ongoing challenge [23]. Alternatively, polymeric nanoparticles can offer great advantages in this area, owing to their ease of functionalization and capability to quickly respond to external stimuli [24].
Substituted maleimides, a range of versatile small-molecule fluorophores, are distinguished by their small size and bright fluorescence emission; most importantly, their fluorescent properties can be tuned by carefully modifying the substituents and are dependent upon the fluorophore microenvironment [25-27]. As previously reported by our group, the encapsulation of dithiomaleimides (DTM) in a hydrophobic environment, such as the core of an amphiphilic polymeric assembly, significantly increases the fluorescence lifetime by eliminating both self-quenching caused by DTM aggregation and collisional quenching from the surrounding solvent [28,29]. In comparison to inorganic fluorophores, small-molecule fluorophores like substituted maleimides are promising as a consequence of lower toxicity and easily tuneable fluorescent properties, therefore representing a potential tool for barcoding applications [30-33].
Light, as a non-invasive stimulus, is one of the prominent tools used to achieve external manipulation without the need for physical contact, hence being ideal for information imprinting, real-time labeling, and selective tracking [34-38]. In this study, we design a light-switchable lifetime barcoding system based on two-component nanogels involving substituted maleimides and a spiropyran (SP) switch. By inducing reversible photoisomerization processes, nanogels with similar structures but multiplex lifetimes could be realized via Förster Resonance Energy Transfer (FRET) from the DTM fluorophore to the ring-opened form of the SP photochrome, allowing for dynamic fluorescence lifetime coding in a controllable, non-invasive fashion. Moreover, the multiple states of these light-switchable lifetime barcodes benefit from a self-correction technique which could be used to increase the sensitivity of fluorescence lifetime imaging microscopy and provide a high level of information storage at the microscale. As a proof-of-concept, the switchable nanogel is functionalized with a mitochondria-targeting group and visualized at both the cellular and subcellular scales in living cells, showing the applicability of this system in monitoring the cellular microenvironment.
Results
Design and preparation of photo-switchable nanogels. Fluorescence lifetime, as a direct measurement of the time-resolved fluorescence decay, is closely related to the structure of the molecule and its interaction with the microenvironment [39]. In this study, the tailoring of fluorescence lifetime relies on modulating FRET, which has proven to be a practical tool for controlling fluorescence properties through long-range dipole-dipole coupling [18]. DTM was chosen as the fluorescence donor owing to its strong emission and sufficiently long fluorescence lifetime in a hydrophobic environment [28,29,40]. Compared with commonly used small-molecule fluorophores, the uniquely long fluorescence lifetime of DTM dyes in hydrophobic environments allows both (i) a relatively wide lifetime range for more diverse fluorescence lifetimes with minimum overlap and (ii) high resolution in a biologically relevant context, representing a distinct advantage over commonly used organic dyes, whose short lifetimes usually suffer interference from cellular autofluorescence [41]. A photochromic SP derivative was chosen as the fluorescence acceptor for its ability to undergo isomerization processes in response to a light stimulus [42]. By carefully varying the spectral overlap between the donor and the acceptor, the efficiency of the FRET process, and consequently the fluorescence lifetime, can be readily modulated.
Herein, methacrylate DTM and SP monomers were synthesized (Supplementary Figs. 1, 2) [43,44] and copolymerized via one-pot micro-emulsion polymerization with methyl methacrylate (MMA) as the hydrophobic matrix and ethylene glycol dimethacrylate (EGDMA) as the crosslinker (Fig. 1a) [45]. The experimental details are provided in the Methods section. In order to understand the effect of energy transfer on fluorescence lifetime properties, we constructed a series of crosslinked nanogels with different ratios of DTMMA and SPMA formed in situ during the polymerization (N1-5, Fig. 1c, Supplementary Table 1). The high conversion (>99%) of the polymerization was determined by monitoring the consumption of the acrylate groups in 1H NMR spectroscopy analysis of the nanogel (Supplementary Fig. 3). Thus, the amount of the two functional monomers in the resultant nanogels can be quantitatively assessed based on the starting ratio of the two monomers. Transmission electron microscopy (TEM) visualization confirmed the spherical morphology of the nanogels (Fig. 1b and Supplementary Fig. 4), while the size was measured by dynamic light scattering (DLS) and ranged from 22 to 32 nm with a low polydispersity (Fig. 1c and Supplementary Fig. 5). The nanogel solutions were found to be stable for at least two months when stored in the dark (Supplementary Fig. 6a, b) and exhibited no obvious size change after 120 s of light irradiation (Supplementary Fig. 6c-f). The thermostability of the nanogel solution (N1 as an example) was also analysed by increasing the temperature from 25 to 70 °C in steps of 5 °C (Supplementary Fig. 6g, h). The hydrodynamic diameters of the nanogel fluctuated slightly within the range of 25-35 nm without any visible sign of aggregation or disassembly.
Evaluation of the energy transfer in the photoisomerization process. DTM, protected in the crosslinked polymeric environment, has a green emission in the range of 450-600 nm with a high fluorescence quantum yield (Φf = 51%, 5-(6)-carboxyfluorescein as the reference) (Supplementary Figs. 7, 8). In comparison, the absorption of the ring-closed form of SP is negligible between 500 and 600 nm, while the ring-opened form of the photochromic dye absorbs in this wavelength range, offering the ideal scenario for an efficient FRET (Fig. 2a). The energy transfer ability between the donor and acceptor was quantitatively determined as the Förster radius (R0), the critical Förster distance for 50% FRET efficiency [46]. The critical transfer distance R0 of DTM and the ring-opened form of SP was estimated to be 18 Å (Supplementary Equations 9, 10, Supplementary Table 2).
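Since the Förster radius fixes the distance dependence of the energy transfer, the reported R0 of 18 Å can be turned into an efficiency-versus-distance curve. The following minimal Python sketch evaluates the textbook relation E = R0^6/(R0^6 + r^6); it is an illustration of the standard formula, not code from the study.

# Illustrative sketch: distance dependence of FRET efficiency for the
# reported Forster radius R0 = 18 Angstrom (E = 0.5 exactly at r = R0).
def fret_efficiency(r_angstrom, r0_angstrom=18.0):
    return r0_angstrom**6 / (r0_angstrom**6 + r_angstrom**6)

for r in (9, 18, 27, 36):
    print(f"r = {r:2d} A -> E = {fret_efficiency(r):.2f}")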
To gain further insight into the energy transfer interactions between SP and DTM, time-dependent density functional theory (TD-DFT) calculations were carried out. The values of first singlet excited states for DTM and the ring-opened form of SP were calculated as 2.7 and 2.4 eV, respectively, while the first singlet excited state for the ring-closed form of SP was calculated as 3.1 eV, confirming that the FRET only occurs when SP is present as the ring-opened isomer (Supplementary Tables 3-7 and Fig. 2b).
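As a quick consistency check on these values, the calculated S1 energies can be converted to approximate transition wavelengths via λ(nm) ≈ 1239.84/E(eV); the short sketch below is illustrative only and not part of the reported workflow.

# Converting the quoted TD-DFT S1 energies into approximate wavelengths,
# using hc ~ 1239.84 eV*nm.
for name, e_ev in [("DTM", 2.7), ("SP ring-opened", 2.4), ("SP ring-closed", 3.1)]:
    print(f"{name}: {1239.84 / e_ev:.0f} nm")

The resulting values of roughly 460, 520, and 400 nm are consistent with the green DTM emission window and with visible absorption by the ring-opened isomer only.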
Photophysical behavior of nanogels using light as a stimulus. The 2D excitation-emission spectrum of the nanogel containing only DTM (N1) revealed that the fluorescence emission of DTM was maintained in the nanogel environment, with green emission (450-600 nm) when excited at ca. 410 nm (Supplementary Fig. 7). On the contrary, upon irradiation of the nanogels containing both DTM and SP (N2-4), the green emission in the DTM channel (λem. = 520 nm) gradually decreased and a new peak around 610 nm emerged, indicating that a FRET donor/acceptor pair was formed between the DTM and ring-opened SP (Fig. 2c). As anticipated, DTM emission could be fully recovered via irradiation with visible light, during which the FRET process is blocked by switching the SP back to the ring-closed form. The kinetics of FI against UV irradiation time in N4, selected as an example, clearly revealed that the emission is closely related to the UV irradiation time (Fig. 2e). This can be ascribed to the increased amount of ring-opened SP during the light irradiation process. The photoconversion of SP reached a plateau after 31 s of irradiation, resulting in a final state in which the FRET is no longer influenced by UV (Supplementary Fig. 10). This can be fully recovered within 40 s of visible light irradiation, as confirmed by monitoring the intensity change at 520 nm (DTM) and 610 nm (SP) (Fig. 2d and Supplementary Fig. 10). Moreover, the fluorescence emission was reversibly switched through alternating UV and visible irradiations (Fig. 2e). A slight decrease in intensity in the red channel and an increase in the green channel were observed after irradiating the sample 10 times, presumably as a consequence of an irreversible photo-oxidation side reaction in the SP molecules [42]. Different quenching effects on DTM emission in nanogels N2-4 were observed after UV irradiation (Fig. 2f) owing to the different efficiencies of the FRET process. A higher ratio of SP to DTM in the nanogels led to an enhanced quenching effect in the DTM channel as a result of the expanded spectral overlap, and this provided a method by which the energy transfer efficiency and photoswitch contrast could be selectively controlled.
The fluorescence lifetimes of the nanogels were then characterized (Supplementary Table 8). Nearly identical fluorescence decays were observed for N1 when changing the concentration (1-4 mg mL−1) or reversibly exposing the nanogel to UV and vis light (6 cycles), leading to a stable and sufficient lifetime platform (Supplementary Fig. 12).
Once this was established, the time-resolved fluorescence decays of the nanogels containing both DTM and SP (N2-4) were evaluated in situ after irradiating for 120 s to guarantee the photoconversion process. Two-dimensional time-resolved fluorescence decay spectra were first compared by monitoring the lifetime decay at different emission wavelengths (Supplementary Fig. 13). A decrease in the fluorescence lifetime of the N4 solution in the DTM channel was observed, with a new fluorescence decay of ring-opened SP appearing from 600 to 650 nm, as a consequence of light-stimulated FRET. As depicted in Fig. 3a, b and Supplementary Table 8, the average lifetimes (τAv,I) in N1-4 can be tuned from 15 to 28 ns after UV irradiation, relating linearly to the ratio of the two monomers. The FRET efficiency was calculated from the acquired lifetime results as 6% for N2 to 27% for N4 (Supplementary Table 9). Moreover, multiple lifetimes could be achieved in the same nanogel by tuning the UV irradiation time before reaching a plateau (Fig. 3c, Supplementary Fig. 14). However, with the aim of providing reproducible and quantitative outputs for barcoding applications, this study only focuses on manipulating the final states of the nanogel by changing the ratio of the FRET donor and acceptor. The overall decay of lifetime in N4 was fully reversible for 4 cycles of UV and vis light without any measurable alteration (Fig. 3d). This reversible behavior was further visualized through fluorescence lifetime imaging microscopy (FLIM), in which the different lifetimes can be observed (Fig. 3e, Supplementary Figs. 15-18). Finally, we investigated the ability to encode information in our nanogels that could subsequently be decoded using UV light. Spherical PVA films were loaded with two nanogel solutions, N1 as the control and N4 as representative of the photoswitchable system. While the lifetime decays of the two regions-of-interest (ROIs) were comparable before irradiation, a change in lifetime after UV exposure was detected for the ROI corresponding to N4, while an unchanged lifetime was observed for N1 (Supplementary Fig. 19). This demonstrates that information can not only be stored in our nanogel systems but can also be selectively decoded using light as a stimulus. Moreover, although the information can be visualized using both fluorescence microscopy and FLIM, a more quantitative result could be achieved with FLIM, which allows the lifetime to be accurately extracted by selecting ROIs.
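For reference, lifetime-based FRET efficiencies of this kind follow from the standard relation E = 1 − τDA/τD, where τD is the donor-only lifetime and τDA the donor lifetime in the presence of the acceptor. A minimal sketch, with placeholder lifetimes chosen only to illustrate the arithmetic (the fitted values are in Supplementary Tables 8 and 9):

# Hedged sketch: FRET efficiency from average lifetimes, E = 1 - tau_DA / tau_D.
# The numbers below are hypothetical stand-ins, not the paper's fitted data.
def fret_efficiency_from_lifetimes(tau_da_ns, tau_d_ns):
    return 1.0 - tau_da_ns / tau_d_ns

tau_donor_only = 28.0                        # ns, assumed donor-only reference (N1)
for name, tau_da in [("N2", 26.3), ("N3", 23.5), ("N4", 20.4)]:
    print(name, f"E = {fret_efficiency_from_lifetimes(tau_da, tau_donor_only):.0%}")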
Reversible fluorescence lifetime barcoding in living cells.
Having demonstrated the capability of dynamic lifetime barcoding in selective information encryption, we next investigated the ability of the reversible nanogel to be used as a fluorescence lifetime barcoding tool. In order to increase the stability of our nanogel system, the CLD was increased to 50% to achieve a denser core (N6). The cytotoxicity of the nanogels was first investigated in the A549 cell line (human lung carcinoma cells), where satisfactory cytocompatibility was observed when incubated with nanogels with or without fluorescent units up to 2 mg mL−1 (Supplementary Fig. 20). A549 cells were then incubated with nanogel N6 and the internalization of the material in these cells was evaluated. FLIM microscopy was employed to obtain quantitative photophysical values such as fluorescence lifetime, fluorescence count rate, and the total number of photons. From the FLIM images, nanogel N6 showed homogeneous distribution in the cell cytoplasm, while maintaining its photoswitchable properties (Supplementary Fig. 21). We then sought to investigate the intracellular encoding and decoding process in more detailed cell structures, such as the mitochondria. In order to target mitochondria, triphenylphosphonium (TPP) was conjugated to the nanogel via an azide-alkyne cycloaddition to obtain TPP-N6, after functionalizing the nanogel with an azide reactive group (Fig. 4a, Supplementary Figs. 22-26) [50]. The functionalized nanogel was incubated in live A549 cells along with commercial MitoTracker Red for comparison. From confocal fluorescence analysis, co-localization was observed between the DTM channel of TPP-N6 and the red channel of the commercial MitoTracker (Supplementary Fig. 27), with a Pearson correlation coefficient (PCC) of 0.85 for TPP-N6 compared with 0.57 for the nanogel without the TPP modification, indicating the successful localization of these materials inside the mitochondria (Fig. 4b and Supplementary Fig. 28). Importantly, the incorporation of the tracker group did not influence the reversibility of the nanogel system.
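For context, a Pearson correlation coefficient such as the 0.85 quoted above is computed pixel-wise between the two channel images; the authors used the ImageJ Coloc2 plugin (see Methods), but the underlying calculation can be sketched in a few lines of Python (the variable names and synthetic images are hypothetical):

import numpy as np

# Pixel-wise Pearson correlation between two colocalization channels.
def pearson_colocalization(ch1, ch2):
    a = ch1.astype(float).ravel() - ch1.mean()
    b = ch2.astype(float).ravel() - ch2.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Synthetic images standing in for the DTM and MitoTracker channels:
rng = np.random.default_rng(0)
green = rng.random((512, 512))
red = 0.8 * green + 0.2 * rng.random((512, 512))    # partially correlated
print(f"PCC = {pearson_colocalization(green, red):.2f}")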
Compared with traditional lifetime barcoding materials that require very long lifetimes (µs-ms) to minimize the effect of autofluorescence in the cellular environment, the reversible nature of the lifetime in this nanogel system provides a strategy to amplify the signal-to-noise ratio by subtracting the two FLIM images of the reversible states (Fig. 4c). As an example, the FLIM image of TPP-N6 after UV irradiation (UV2, Fig. 4d) was subtracted from the FLIM image before UV irradiation (Vis1, Fig. 4d), resulting in an amplified image in which the difference in lifetime was doubled (subtracted FLIM, Fig. 4e). In conclusion, the switchable nanogel system is not only able to track subcellular organelles with low toxicity but also provides a strategy to amplify the signal and diminish the background autofluorescence in FLIM without the need for extra-long lifetimes.
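The subtraction step itself is simple image arithmetic: a per-pixel difference of the two lifetime maps, in which non-switching background pixels cancel while switchable nanogel pixels retain a non-zero difference. A minimal sketch of the idea (array names and values are hypothetical):

import numpy as np

# Per-pixel lifetime difference map (ns): background autofluorescence,
# which does not switch, subtracts to ~0; nanogel pixels remain positive.
def subtracted_flim(tau_vis, tau_uv):
    return tau_vis - tau_uv

tau_vis = np.array([[2.0, 28.0], [2.0, 2.0]])    # before UV (Vis state)
tau_uv = np.array([[2.0, 15.0], [2.0, 2.0]])     # after UV (FRET switched on)
print(subtracted_flim(tau_vis, tau_uv))           # only the nanogel pixel survives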
Discussion
Herein, we report a strategy to remotely control the fluorescence lifetime in polymeric nanogels via a photoisomerization-induced FRET process. Dynamic light-control over the fluorescence lifetime was successfully achieved via an efficient FRET process between the substituted maleimide DTM and an SP photoswitch, resulting in a series of nanogels with similar surface chemistry but broad dynamic lifetimes suitable for multiplex system coding and counting. These nanogel systems were further employed for lifetime barcoding using FLIM, where the multiple states of the fluorescence were tracked and selectively visualized using a controllable, reversible, and non-invasive method. By simultaneously extracting lifetimes as the readout, the nanogel systems are capable of being selectively decoded with quantitative results, allowing information storage at the microscale. As a proof-of-concept, a mitochondrial tracker was introduced onto the nanogel via a click reaction, enabling live-cell lifetime barcoding at the subcellular scale and increasing the sensitivity of the imaging without the need for extra-long lifetimes. We believe that the potential of the strategy presented herein will pave the way towards the application of soft materials in fluorescence lifetime barcoding. The multistate fluorescence lifetime of the polymeric nanogel has the potential to overcome the limitation of background influence without using metal ions. The spatially defined nature and multiplex output of these nanogels will generate a high level of interest across a broad spectrum of areas and could lead to practical applications in bioanalytical science and bioengineering, including high-throughput gene detection, clinical diagnosis, and drug screening.
Methods
Materials. All chemicals and reagents were purchased from either Sigma Aldrich, Fisher Chemicals, Acros Chemicals or Alfa Aesar. Solvents were purchased from Fisher Scientific and used as received. Dry solvents were used directly as obtained from a solvent tower purifying system. The commercially available monomers MMA and ethylene glycol dimethacrylate (EGDMA) were purified using a column of basic alumina prior to use. Experimental procedures for the preparation of the azide-functionalized monomer (3-azidopropyl methacrylate) were reported previously [51]. (But-3-yn-1-yl)triphenylphosphonium bromide was synthesized following previous literature [50].
Preparation of nanogel via micro-emulsion polymerization. Nanogels were synthesized by the micro-emulsion polymerization reported by our group [45]. In general, sodium dodecyl sulfate (0.1 g) was dissolved in water (50 mL) under N2 bubbling at room temperature. Subsequently, methyl methacrylate (MMA, 0.5 g), ethylene glycol dimethacrylate (EGDMA, 2.5 mg, CLD = 1%), DTMMA and SPMA were mixed before being added to the above solution under N2 protection. The reaction was kept stirring at 800 rpm and a solution of potassium persulfate (10 mg in 1 mL H2O) was added. The reaction was further stirred at 70 °C for 14 h. The resulting nanogels were filtered using a 0.45 μm nylon syringe filter and dialyzed (MWCO 3.5 kDa) against water prior to analysis. The final concentration was determined after freeze-drying.
Hydrodynamic diameters (Dh) and size distributions (PD) of the acquired nanogels were determined by DLS using a Malvern Zetasizer Nano ZS with a 4 mW He-Ne 633 nm laser module operating at 25 °C. Measurements were carried out at an angle of 173° and results were analyzed using Malvern DTS v7.03 software. All determinations were repeated four times with 15 measurements recorded for each run. Dh values were calculated using the Stokes-Einstein equation, where particles are assumed to be spherical.
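For reference, the Stokes-Einstein conversion mentioned above is Dh = kBT/(3πηD), where D is the measured diffusion coefficient and η the solvent viscosity. A minimal sketch with an assumed viscosity of water at 25 °C (the instrument software performs this conversion internally):

import math

K_B = 1.380649e-23                                   # Boltzmann constant, J/K

def hydrodynamic_diameter_nm(d_m2_per_s, temp_k=298.15, eta_pa_s=0.89e-3):
    # eta defaults to water at 25 C (assumed); returns Dh in nm.
    return K_B * temp_k / (3 * math.pi * eta_pa_s * d_m2_per_s) * 1e9

# A diffusion coefficient of ~1.6e-11 m^2/s corresponds to Dh ~ 30 nm:
print(f"{hydrodynamic_diameter_nm(1.6e-11):.1f} nm")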
Preparation of functionalized nanogels via click reaction. The azide-functionalized nanogel was prepared using a similar method with minor changes. Briefly, sodium dodecyl sulfate (0.1 g) was added to water (50 mL) under N2 at room temperature. Subsequently, MMA, EGDMA (0.125 g, CLD = 50%), DTMMA (1 mg), SPMA (8.4 mg) and the azide monomer (3-azidopropyl methacrylate) were mixed before being added to the above solution under N2. The reaction was kept stirring at 800 rpm and a solution of potassium persulfate (10 mg in 1 mL H2O) was added. The solution was further stirred at 70 °C for 14 h. The resulting nanogels were filtered using a 0.45 μm nylon syringe filter and dialyzed (MWCO 3.5 kDa) against water prior to analysis. The final concentration was determined after freeze-drying.
Computational method. To obtain the most stable conformations of DTM and ring-closed SP, a Monte Carlo conformational search was carried out using the OPLS force field (for each system 1000 conformational search steps were performed). The 20 low-energy structures were selected and re-optimized using the B3LYP and CAM-B3LYP functionals and the 6-311G(d,p) basis set. It is worth noting that both DFT methods coincided on the same lowest-energy structures. Additional optimization processes were also performed using the M06-2X and PBE1PBE functionals and the 6-311G(d,p) basis set. The dispersion effects (with the exception of the M06-2X functional) and the solvent were included in all the optimization processes. Grimme's D3 dispersion with the Becke-Johnson damping factor was used to evaluate the dispersion effects. The solvent was considered using the polarization continuum model (PCM) and the dielectric constant of cyclohexane (ε = 2.0165). The ring-opened SP geometry was obtained from modification of the lowest-energy ring-closed SP conformation. The harmonic vibrational frequencies were also calculated to verify that all the stationary points are minima of their potential energy surface.
These structures (DTM and ring-closed and ring-opened SP) were used for the TD-DFT calculations (B3LYP, CAM-B3LYP, M06-2X, and PBE1PBE) to describe the absorption and emission (geometry optimization of the first singlet excited state) processes. The Macromodel and Maestro software packages were used to carry out the conformational search. All the remaining calculations (geometry optimizations, frequencies, and TD-DFT) were performed using the Gaussian 16 program package (the references of the computational methods are reported in the Supplementary Information).
Light irradiation. A UV lamp (365 nm, 6 W) and light-emitting diodes (white LED lamps, 2 W) were used as light sources for UV and visible light irradiation, respectively.
Fluorescence steady-state and lifetime measurement and imaging. Steady-state fluorescence spectroscopy: All steady-state spectra were obtained with an Agilent Cary Eclipse fluorescence spectrophotometer equipped with a photomultiplier tube (PMT) detector at a scan rate of 600 nm per minute. The emission kinetics were measured on an Edinburgh Instruments FS5 spectrofluorometer equipped with a Xenon lamp. The samples were measured in deionized water and the acquired data were analyzed in Origin 2019 (OriginLab).
Fluorescence lifetime spectroscopy: Time-correlated single photon counting (TCSPC) was employed to obtain all fluorescence lifetime spectra. This was achieved with an Edinburgh Instruments FS5 spectrofluorometer equipped with a 375 ± 10 nm ps pulsed diode laser source (PicoQuant), using 10 mm path length quartz cuvettes with four transparent polished faces (Starna Cells). The emission wavelength was selected with a monochromator at 510 ± 4 nm. The signal level was kept below 5% of the light source repetition rate. Instrument response functions (IRF) were determined from the scattering signal of a solution of Ludox HS-40 colloidal silica (10% particles in water w/w). The analysis was performed with Fluoracle software (Edinburgh Instruments).
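As an illustration of this kind of decay analysis, the sketch below fits a bi-exponential model to synthetic, noise-free data and reports the intensity-weighted average lifetime τAv,I = Σaᵢτᵢ²/Σaᵢτᵢ; it omits IRF reconvolution and is not the Fluoracle routine itself:

import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 100, 500)                     # ns, hypothetical time window
counts = biexp(t, 0.7, 25.0, 0.3, 5.0)           # synthetic decay (no noise)
(a1, tau1, a2, tau2), _ = curve_fit(biexp, t, counts, p0=[1, 20, 1, 3])
tau_avg = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(f"tau_Av,I = {tau_avg:.1f} ns")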
Fluorescence lifetime imaging microscopy (FLIM): FLIM was performed on an LSM upgrade kit (PicoQuant) mounted on an FV3000 (Olympus) confocal microscope with an IX-81 inverted base (Olympus); 20× and 60× oil lenses (Olympus) were used for imaging. The FV3000 system was driven with the FV31S-SW Viewer software platform (Olympus) with scan rates of 1 μs pixel−1 at 515 by 512 pixels. FLIM images and spectra were detected by single-photon avalanche diodes using a 520/60 bandpass filter (AHF analysentechnik) with a 405 nm (PicoQuant) pulsed diode laser driven at 2.5 MHz. The FWHM for the 405 nm laser head was 59 ps and the maximum power was 0.3 mW (attenuated by variable neutral density filters to prevent count pile-up and maintain counting rates below 1% bin occupancy). Acquired fluorescence lifetime images were analyzed using the fast-FLIM method implemented in SymPhoTime software (PicoQuant) and ImageJ. All IRF-deconvolved exponential fits were performed with the number of exponents selected for completeness of fit as determined by bootstrap chi-squared analysis in SymPhoTime software, typically three.
Cell culture and fluorescence imaging. Cell viability assay: A549 cells were purchased from Public Health England. Cells were cultured in F12K medium with the addition of 10% FBS and 100 U mL−1 pen/strep at 37 °C and 5% CO2. Cells were seeded on 12-well plates at 2000 cells cm−2 and left to adhere and proliferate for 72 h. The medium was then replaced with the nanogel samples (N6, or nanogel without SPMA and DTMMA as control) in a concentration range from 0 to 2 mg mL−1. Briefly, a solution of nanogels in water (100 mg mL−1) was sterile-filtered through a 0.45 μm filter. This solution was then diluted with cell culture medium (with the addition of 10% FBS and 100 U mL−1 pen/strep) to a final concentration of 10 mg mL−1. This stock solution was then used to prepare the dilutions directly on the well plates containing cells. After 24 h, the solution was removed and cells were washed with PBS (1 mL × 3) and incubated with 10% PrestoBlue viability assay following the supplier's instructions. The FI was detected in a FluoStar Omega microplate reader (BMG Labtech) (λex. = 530 nm, λem. = 590 nm). Cell data are reported as viability % in comparison to the control sample. Experiments were performed in triplicate.
Live-cell imaging and colocalization: A549 cells were cultured in F12K medium with the addition of 10% FBS and 100 U mL−1 pen/strep at 37 °C and 5% CO2. Cells were seeded in glass-bottom micro dishes (Thermo Fisher Scientific) at 5000 cells cm−2 and incubated for 24 h at 37 °C in 5% CO2. After that, cells were pre-treated with commercial MitoTracker Deep Red (20 μg mL−1) for 30 min prior to incubation with the different nanogels at 1 mg mL−1 for one to two hours. After washing with cell medium three times, the resulting cells were transferred to an Olympus FV3000 confocal microscope equipped with an incubator to keep live cells at 37 °C in a 5% CO2 atmosphere during image acquisition. Live cells were imaged with a 60× oil-immersion objective at scan rates of 1 μs pixel−1 at 515 by 512 pixels, both on the Olympus FV3000 microscope for fluorescence images and on the PicoQuant LSM Upgrade Kit for fluorescence lifetime images. The original images were processed using CellSens software (Olympus), SymPhoTime 64 (PicoQuant) and ImageJ with the Coloc2 plugin.
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. | 2020-05-18T14:00:38.323Z | 2020-05-18T00:00:00.000 | {
"year": 2020,
"sha1": "29047a988513c95f37a878fd04e94a8b081457ca",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-020-16297-3.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "29047a988513c95f37a878fd04e94a8b081457ca",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
56399703 | pes2o/s2orc | v3-fos-license | Adipose Derived Tissue Engineered Heart Valve
Introduction: A major challenge associated with heart valve tissue engineering is the in vitro creation of mature tissue structures compliant with native valve functionality. Various cell types have been investigated for heart valve tissue engineering. In addition to prenatal, umbilical cord- and vascular-derived cells, mesenchymal stem cells (MSCs) have gained large interest for tissue engineering purposes because of their broad differentiation potential. However, bone marrow derived MSCs require a highly invasive harvesting procedure and decline in both cell number and differentiation potential proportionally with the donor's age. In contrast, adipose derived stem cells (ADSCs) represent an interesting alternative. The ease of repeated access to subcutaneous adipose tissue as well as the less invasive donation procedures provide clear advantages. Therefore, this study investigated the suitability of ADSCs as an alternative cell source for tissue engineered heart valves (TEHVs). Methods: Human ADSCs were seeded on TEHV scaffolds (n=11) made of nonwoven polyglycolic acid coated with poly-4-hydroxybutyrate. TEHVs were cultivated in diastolic-pulse-duplicator bioreactor systems and subsequently seeded with a superficial layer of ADSC-derived endothelial cells. Quantitative assessment of the extracellular matrix composition of the TEHV leaflets was performed with biochemical analyses for sulphated glycosaminoglycans, hydroxyproline and DNA content. Microstructural evaluation was performed on representative samples of the TEHV leaflets by (immuno-)histochemistry and scanning electron microscopy. The mechanical properties of the ADSC-derived TEHV leaflets were characterized by biaxial tensile tests. Results: ADSC-derived TEHV leaflets showed a homogeneous vital cell distribution throughout the whole leaflet structure that consisted of large amounts of glycosaminoglycans and collagen and was endothelialized. Furthermore, the mechanically stable matrix of the ADSC-derived TEHVs showed a stiffness range in the right order of magnitude for heart valve applications. Conclusion: Human ADSCs represent a promising alternative autologous mesenchymal cell source for TEHVs that is of large clinical relevance due to their easy accessibility, efficient proliferation and excellent tissue formation capacities.
Introduction
Heart and other cardiovascular-associated diseases rank among the foremost causes of death worldwide. More people currently suffer from cardiovascular diseases (CVD) than from any other disease. According to the World Health Organization (WHO), approximately 17.3 million people died from CVD in 2008. The number of affected individuals is expected to progressively increase in the decades to come. Cardiac dysfunction can be due to congenital or acquired heart diseases, which often impair the heart valves. Approximately 280,000 heart valve replacements are required annually worldwide [1]. Considering the aging of the world population, this number is anticipated to triple to over 850,000 within five decades [2].
Despite continuous progress in treatment, an ideal therapy has not yet been found, making this a field of high medical relevance. Current best clinical practice consists of surgical prosthetic heart valve replacement [3-5]. These valves can be either mechanical, biological allogenic [6,7] or xenogenic [8] prostheses. Unfortunately, these replacements suffer from clinical limitations such as the need for lifelong anti-coagulation treatment, the risk of immunogenic reactions, or adverse reactions such as extensive calcification and degradation, giving rise to possible reoperations [9,10]. Moreover, these prostheses lack the ability of both growth and remodeling.
Heart valve tissue engineering represents a promising solution to overcome these limitations of current heart valve substitutes. A major focus in heart valve tissue engineering is the in vitro creation of mature tissue structures compliant with native valve functionality. Various cell types with a high capacity for remodeling the Extracellular Matrix (ECM) have been investigated for their suitability for heart valve tissue engineering, including prenatal cells [11-13], umbilical cord [14-16] and vascular derived cells [17-19]. Furthermore, the observed multipotency of adult Mesenchymal Stem Cells (MSCs), including their ability to differentiate into cells found in the adult heart, such as cardiomyocytes and endothelial cells, has sparked interest in developing them for future cell-based therapies such as heart valve engineering. MSCs are commonly isolated from the bone marrow and represent a great cell source for tissue engineering of living heart valves, mainly due to their wide availability [14,16,20,21]. However, adult bone marrow derived MSCs are suboptimal for clinical use due to the required highly invasive donation procedure and the decline in both their proliferation and differentiation potential with increasing senescence. In turn, adipose derived stem cells (ADSCs) represent a promising alternative cell source of mesenchymal origin [22] with comparable differentiation potential. Additionally, the ease of repeated access to subcutaneous adipose tissue and simple isolation procedures define their superiority as an alternative clinical cell source. Since human ADSCs have not previously been exploited for the generation of Tissue Engineered Heart Valves (TEHV), this study investigated their suitability as a cell source for, and their performance on, TEHV scaffolds.
Extraction of fat tissue and isolation of adipose derived stem cells
Extraction of fat tissue: Human ADSCs were isolated using the adipose tissue of patients undergoing plastic surgery (n=20). Samples were retrieved following procedures approved by the local ethics committee (KEK-ZH-2010-0476/0).
To harvest the fat tissue, liposuction was performed using the tumescent technique. This technique is used to loosen the fat cells from the connective tissue and especially to reduce the risk of bleeding. Tumescent fluid was injected into the surgical area (basic tumescent solution: 1000 ml 0.9% NaCl, 1 ml Kenacort A 10, 12.5 ml NaHCO3, 1 ml of 1:1000 epinephrine, 50 ml lidocaine 2%) and left there for at least 30 minutes. Next, blunt Mercedes cannulas (3 mm in diameter) were inserted into the fatty tissue and an excess of adipose tissue was aspirated into a glass container with a vacuum of −80 kPa. The tumescent solution was allowed to settle in the collection container, allowing for the removal of the fat tissue supernatant without centrifugation.
Isolation of adipose derived stem cells:
ADSCs were isolated and prepared as previously described by Digirolamo [23]. Briefly, the extracted adipose tissue was collected into a 50 ml Falcon tube and digested at 37 °C on a shaker (180 rpm) for 45 minutes with 1 mg/ml collagenase Type A (Roche). The digested sample was then filtered through a 40 μm nylon cell strainer. The mononuclear cell fraction was isolated by density gradient separation (Ficoll-Paque™ Plus, Amersham Pharmacia Biotech) using standardized protocols [24-26]. The separated mononuclear cell fraction was cultured in growth medium (Dulbecco's Modified Eagle Medium, DMEM, Sigma) supplemented with 2 mM L-glutamine, 50 U/mL penicillin, 50 µg/mL streptomycin and 10% (v/v) of a selected batch of heat-inactivated Fetal Calf Serum (FCS, Gibco) at 37 °C in a humidified atmosphere (5% CO2). After 24 hours, the non-adherent cells were discarded and the adherent cells were washed gently with medium and cultured for ~14 days. Growth medium was replaced twice per week.
Characterization of expanded ADSC
Expression profile of ADSC: ADSC phenotype was determined based on the presence of the mesenchymal stem cell antigens CD44 (Santa Cruz Biotechnologies) and CD73, CD90, CD166 (all Biolegend), as well as on the absence of the hematopoietic stem cell markers CD34 (Immuno Tools) and CD45 (Biolegend). Isolated cells were fixed with 4% paraformaldehyde and then incubated with the primary antibodies as specified above. Primary antibodies were detected with Cy-2-conjugated affinity-purified goat-anti-mouse antibodies (Jackson Immunoresearch Laboratories Inc.). The high-affinity filamentous actin (F-actin) probe Alexa 546 phalloidin (Invitrogen, Life Technologies) as well as the nuclear counterstain DAPI (4′,6-diamidino-2-phenylindole, Sigma) were used to counterstain the cells. Negative controls were included by omitting the primary antibodies. Analysis was carried out using an inverted fluorescence microscope equipped with a CCD camera (Leica Microsystems AG). Image processing was performed using the Leica Application Suite processing software (Leica Microsystems AG). For quantitative characterization of the ADSC surface antigen expression profile, flow cytometric analysis was performed using the primary and secondary monoclonal antibodies described above on a FACSCalibur (BD Biosciences) with appropriate scatter gating. Per sample, 10⁴ events were acquired.
Tissue engineering of heart valves
Fabrication of heart valve scaffolds: The trileaflet heart valve scaffolds (n=11) were produced from nonwoven polyglycolic acid (PGA) meshes with a thickness of 1.0 mm and a density of 70 mg/cm³ (Cellon). The scaffold was integrated into a self-expandable nitinol stent with an outer diameter of 30 mm (pfm AG) by sewing the scaffold molds onto the inner surface of the stent struts. The scaffold-stent construct was then coated by dipping it into the biologically derived and rapidly degradable biopolymer poly-4-hydroxybutyrate (P4HB, 1.75%, Tepha Inc.) in tetrahydrofuran (THF, Fluka). After evaporation of the THF, physical bonding of adjacent fibers and a continuous coating were achieved, and the produced heart valve construct was sterilized with ethylene oxide. Residual solvent was allowed to evaporate fully to reduce any toxic reaction to the cells before the scaffold was washed twice in PBS. Prior to cell seeding, the scaffolds were incubated overnight in tissue engineering (TE) medium: the DMEM growth medium listed above additionally supplemented with 0.1% FCS, 1% GlutaMax, 1% penicillin-streptomycin and L-ascorbic acid 2-phosphate (0.25 mg/mL; Sigma-Aldrich).
Seeding of heart valves:
For seeding onto the PGA/P4HB scaffold, ADSCs were diluted to a final concentration of 1.5 × 10⁶ cells/cm² in fibrin glue. To ensure homogeneous distribution throughout the scaffold, the cells were first re-suspended in the thrombin component (10 IU, Sigma), then quickly and thoroughly mixed with the fibrinogen component (10 mg protein, Sigma) and finally applied onto the scaffold construct.
Cultivation of the heart valves: After seeding, the heart valves were placed into a diastolic pulse duplicator system, previously described in detail [27]. This strain-based conditioning approach uses dynamic strains for cultivation of the TEHV, with an additional continuous perfusion loop (4 mL/min) to ensure a closed system. The leaflets were exposed to dynamic strains by applying increasing transvalvular pressure differences. After 5 days of culture with perfusion only and no transvalvular pressure difference, the system was started at 3 mmHg, increased up to 15 mmHg over the 4 following days, and remained at that pressure until the end of the culture (day 28). TE medium was replaced every 4 days.
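The conditioning schedule described above can be summarized as a simple pressure set-point function of culture day; the sketch below assumes a linear ramp between days 5 and 9, which is an interpretation of the text rather than the controller's actual profile:

# Hedged sketch of the transvalvular-pressure schedule (mmHg vs. culture day).
def target_pressure_mmhg(day):
    if day < 5:
        return 0.0                                    # perfusion only
    if day < 9:
        return 3.0 + (15.0 - 3.0) * (day - 5) / 4     # assumed linear ramp
    return 15.0                                       # held until day 28

print([target_pressure_mmhg(d) for d in (0, 5, 7, 9, 28)])   # [0.0, 3.0, 9.0, 15.0, 15.0]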
Endothelialization of the TEHVs: After 4 weeks of culturing, TEHVs (n=3) were endothelialized with 0.2 × 10⁶ ADSC-derived endothelial-like cells per cm² of scaffold. Subsequently, the TEHVs were kept under dynamic culture conditions for another 48 hours in endothelial differentiation medium to ensure adequate cell attachment.
Thereafter, heart valves were harvested from the bioreactor system and analyzed accordingly.
Analysis of tissue engineered heart valves
Histological and immunohistochemical staining: For qualitative evaluation, representative samples of the cultivated TEHVs were fixed with 4% formalin, embedded in paraffin and cut into 5 µm sections. To assess the tissue composition, the slides were stained with Haematoxylin-Eosin, Masson-Trichrome, Elastica-van-Gieson and von Kossa.
To determine the potential alteration of the seeded cells after cultivation, the slides were immunohistochemically analyzed using specific antibodies for vimentin, alpha smooth muscle actin (α-SMA, Dako) and the endothelial marker CD31 (Biolegend). Primary antibodies were detected by use of diaminobenzidine (DAB, Histochemistry Kit, Molecular Probes).
Scanning electron microscopy:
To evaluate the ultra-structural morphology of cells growing in the PGA/P4HB matrix, representative TEHV tissue samples were analyzed by Scanning Electron Microscopy (SEM). Tissue samples were fixed in 2% (v/v) glutaraldehyde with 0.1% cacodylate (pH 7.3). After preparation, samples were sputtered with gold and investigated using a Zeiss Supra 50 VP microscope (Zeiss).
Extracellular matrix production:
To characterize the major tissue structures responsible for native valve function, Extracellular Matrix (ECM) production was assessed by biochemical assays. To deduce the cell number on the scaffold, the total Deoxyribonucleic Acid (DNA) content was analyzed as an indirect indicator. DNA amounts were measured using the Hoechst dye method [28], and the DNA content was inferred from a standard curve prepared from calf thymus DNA (Sigma). The amounts of collagenous and proteoglycan structures were determined by analyzing hydroxyproline (HYP) and sulphated glycosaminoglycan (GAG) content, respectively. HYP content was determined from lyophilized samples with a modified version of the protocol described by Huszar et al. [29], with trans-4-hydroxy-L-proline (Sigma) as standard. Sulphated GAG content was determined colorimetrically using chondroitin-6-sulfate from shark cartilage (Sigma) as standard [30]. The DNA, GAG and HYP contents were normalized per mg of dry tissue weight and standardized to native tissue.
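The normalization itself is straightforward arithmetic; a minimal sketch, in which the native-valve reference value is a hypothetical placeholder back-calculated from the percentages reported in the Results:

# Content per mg dry weight expressed as a percentage of a native reference.
def percent_of_native(sample_ug_per_mg, native_ug_per_mg):
    return 100.0 * sample_ug_per_mg / native_ug_per_mg

# e.g. 28.9 ug HYP per mg dry tissue against an assumed native ~44.5 ug/mg:
print(f"{percent_of_native(28.9, 44.5):.0f}% of native")     # ~65%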
Biomechanical analysis:
The mechanical properties of the ADSC TEHV samples were analyzed with a biaxial tensile tester (BioTester, 5 N load cell; CellScale, Waterloo, Canada) in combination with LabJoy software (V8.01, CellScale). Two square samples of 36 mm² each were symmetrically cut from one ADSC TEHV leaflet, taking into account the radial and circumferential orientation. Sample thickness was measured prior to testing. Samples were mounted onto the biaxial tensile tester, resulting in an effective test surface area of 12.25 mm². A custom-programmed protocol stretched the sample equibiaxially in both the radial and circumferential direction up to 30% strain at a strain rate of 1.66% per second. After stretching, the sample recovered directly back to 0% strain at a strain rate of 1.66% per second, followed by a rest cycle of 54 seconds. Prior to measuring the final stresses, the sample was preconditioned for 5 cycles. A high-order polynomial curve was fitted through each individual data set in both the radial and circumferential direction. The stiffness of the tissue was represented by the tangent modulus, calculated as the slope of the tangent to the fitted polynomial curve at 30% strain.
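The tangent-modulus calculation can be reproduced with a polynomial fit and its analytical derivative; the sketch below uses a synthetic stress-strain curve and an assumed polynomial order, since the exact order used is not stated:

import numpy as np

# Tangent modulus: slope of a polynomial fitted to stress-strain data,
# evaluated at the target strain (here 30%).
def tangent_modulus(strain, stress_mpa, at_strain=0.30, order=5):
    coeffs = np.polyfit(strain, stress_mpa, order)
    return np.polyval(np.polyder(coeffs), at_strain)

eps = np.linspace(0, 0.30, 50)                  # strain (dimensionless)
sigma = 0.2 * eps + 8.0 * eps**3                # MPa, hypothetical nonlinear curve
print(f"E_t(30%) = {tangent_modulus(eps, sigma):.2f} MPa")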
Statistical analysis
Biochemical measurements and quantitative biomechanical data are expressed as mean ± standard deviation; differences were considered statistically significant at p<0.05.
Morphology and phenotype of adipose derived stem cells
In this study, a cell population of mesenchymal origin obtained from human adipose fat tissue was examined with regard to its multilineage potential. Human adipose tissue was obtained by liposuction (n=20) from female patients of 49 ± 8 years of age. Approximately 0.5 × 10⁶ mononuclear cells/g tissue were isolated, independent of source type, and subsequently cultured under standard conditions (10% DMEM). In culture, the cells attached to the bottom of the culture flask and assumed a fibroblast-like morphology. This morphology was maintained through repeated subcultures under expansion conditions; no other cell morphology was observed. During the subsequent 4 passages, an average doubling time of 1.5 days was observed (data not shown). Neither source type, age nor body mass index (BMI) influenced the population doubling rate.
The multilineage capacity of ADSC was then assessed using adipogenic, osteogenic, chondrogenic as well as endothelial-like differentiation assays with lineage-specific induction factors (Table 1). Calcium deposits and lipid vacuoles were detected 3 weeks after induction using Alizarin Red S and Oil Red-O staining, revealing that ADSC had differentiated toward the osteogenic and adipogenic lineages, respectively (Figure 2A and 2B).
ADSC initially grown in a monolayer were also cultivated in 3D spheres for 3 weeks, a condition known to facilitate chondrogenesis [31]. Toluidine blue staining confirmed the production of cartilaginous matrix and hence the chondrogenic phenotype (Figure 2C). Differentiation of ADSC into functional endothelial-like cells was investigated by performing a tube formation assay. The tube formation observed after 10 hours was comparable to that of freshly isolated endothelial cells from human umbilical cord (Figure 2D).
Analysis of tissue engineered heart valve leaflets
Macroscopic appearance: TEHV based on PGA/P4HB were seeded with human adult ADSC and cultivated using diastolic pulse duplicator systems to evaluate cellular behavior during 3D tissue formation [27]. This system mimics the diastolic phase in a closed-leaflet culture by applying pressure, which acts on the surface of the leaflets and results in tissue straining. Macroscopically, the ADSC-TEHV presented as intact valves with smooth and shiny tissue formation and homogeneous thickness after 4 weeks of cultivation (Figure 3A). However, dissection of the merged TE leaflets after culture revealed slight retraction (Figure 3B).
Histological and immunohistochemical staining and scanning electron microscopy: Microstructural features of representative tissue samples of the ADSC-derived TE leaflets were analyzed both by histological and immunohistochemical staining procedures (Figure 4) as well as by scanning electron microscopy (Figure 5). Haematoxylin-Eosin staining demonstrated appropriate tissue formation with well-developed outer layers and little cellularity in the inner part of the TEHVs (Figure 4A). Furthermore, effective tissue formation with collagen fibers was shown by Masson Goldner staining (Figure 4C), and a relatively high amount of α-smooth muscle actin (α-SMA) was detected within the neo-tissue (Figure 4D). The intermediate filament protein vimentin, predominantly found in cells of mesenchymal origin, was expressed throughout the entire TEHV tissue (Figure 4E). However, elastic fibers were undetectable by Elastica van Gieson staining after the in vitro conditioning in the bioreactor system (Figure 4B). Ultra-structural analysis using SEM showed a surface of the TEHV leaflets densely covered with extracellular matrix elements and spindle-shaped cells when seeded with ADSC (Figure 5A). Moreover, after additional coating with ADSC-derived endothelial-like cells, a cobblestone pattern on the tissue surface was observed (Figure 5B). This superficial endothelial cell lining was also confirmed by the positive detection of the endothelial marker CD31 on the surfaces of the TE leaflets (Figure 5C).
ECM components:
Next, the neo-tissue ECM composition of the ADSC-derived TEHV leaflets was biochemically analyzed using HYP, GAG and DNA assays. TEHV leaflets showed an average of 28.9 µg HYP per mg dry tissue, which corresponds to about 65% of native heart valve tissue, implying that the collagen content is significantly lower in tissue-engineered than in native heart valves (p<0.05). The GAG and DNA contents amounted to 19.7 µg and 3.1 µg per mg dry tissue, respectively. These values correspond to 50% GAG (p<0.05) and 88% DNA (p>0.05) when compared to native valve tissue (Figure 6).
Biomechanical behavior: Equibiaxial tensile tests were performed up to a strain of 30% to investigate the biomechanical behavior of the ADSC TEHVs. The stress-strain curves and calculated tangent moduli in both the radial and circumferential directions are shown in Figure 7. The tangent moduli are 2.66 ± 0.43 MPa and 2.45 ± 0.51 MPa in the radial and circumferential directions, respectively. No significant difference in tissue stiffness was observed between the radial and circumferential directions. Therefore, isotropy of the material may be assumed.
Discussion
As a novel and clinically interesting cell source in regenerative medicine, adipose tissue seems to be a rich source of multipotent adipose tissue-derived mesenchymal stem cells. In order to evaluate ADSC as an alternative cell source for the production of TEHV, adult ADSC were isolated from liposuction material obtained during plastic surgery and characterized. The cells exhibited the characteristics of mesenchymal stem cells regarding their differentiation capacity into the osteogenic, adipogenic, chondrogenic and endothelial-like lineages. Furthermore, the stem cell-specific combination of surface markers was detectable by immunohistochemical staining as well as by flow cytometry. According to the current literature, the anatomical site of the adipose tissue does not affect the total number of viable cells that can be obtained from subcutaneous fatty tissue [32,33]. Compared with mesenchymal stem cells from bone marrow biopsies, ADSC also show no statistically significant correlation between stem cell quality, proliferation capacity and the patient's age [34]. Additionally, ADSC exhibit an equal differentiation potential into cells and tissues of mesenchymal origin [35]. However, the proportion of ADSC (hip/thigh) is much higher than the frequency of MSC in bone marrow, which is as low as 0.001%-0.01% [36]. Taken together, fat tissue is a promising and clinically highly relevant source for isolating mesenchymal stem cells in large quantities with comparable cell quality.
Accordingly, clinical applications for cell therapy and tissue engineering using ADSC are highly promising, and ADSC have already been used successfully in a variety of clinical trials, especially in tissue reconstruction. The first clinical trials were initiated at the beginning of the 21st century. There, the feasibility and safety of autologous ADSC transplantation were tested, e.g. in peripheral nerve repair [37], treatment of Crohn's disease fistulas [38], osteogenesis imperfecta [39], bladder diseases, urethral sphincter dysfunction associated with birth trauma and hormonal deficiency, as well as the regeneration of bladder tissues [40]. Further studies substantiated the therapeutic potential of ADSC by transplanting them in settings of chronic heart failure [41] or Acute Myocardial Infarction (AMI). After cell transplantation into the myocardial scar tissue in rabbit and porcine models, ADSC formed cardiac islands and vessel-like structures, induced angiogenesis and improved cardiac function, with no report of potentially severe arrhythmias [42,43]. The APOLLO trial, a "first-in-man", prospective, double-blind, randomized and placebo-controlled trial, demonstrated the safety and feasibility of ADSC transplantation in patients with acute myocardial infarction [43].
In the field of cardiovascular surgery, Taylor et al. in 2011 were the first to report that ADSC respond to mechanical stimulation/stress in a manner similar to valve interstitial cells, e.g. by secreting collagen. It was demonstrated that stretching increased the incorporation of hydroxyproline in ADSC, which was followed by the enhanced production of collagen and elastin crosslinks. This observation represents a fundamental mechanism which is essential to maintain the load capacity of leaflets and therewith allows valve functionality [44]. In the present study, after four weeks of bioreactor cultivation, TEHV based on ADSC contained on average 29 µg collagen (HYP) per mg dry tissue, which is 65% of the value of native valves. Previous studies that used, e.g., amniotic fluid-derived stem cells for TEHV production only reached up to 3 µg HYP (2.5% of the value of native tissue) [13]. Collagen is the most abundant protein in cardiovascular tissue and is essential to provide tensile strength in an organized scaffold. The ECM component elastin in the TEHV is also of particular importance for the biomechanical behavior of the valves [45]. Without elastin, the mechanical behavior of the native valve cusp is altered, primarily through reduced extensibility and increased stiffness in the radial direction [46]. In the present study, an elastin network was not detectable by histological Elastica van Gieson staining after in vitro conditioning in the bioreactor system. However, this in vitro observation is likely of minor impact, since in vivo studies have shown elastin production as early as 20 weeks after implantation [17].
From the biomechanical analyses it cannot be concluded whether these valves are functional under pulmonary conditions. However, the tissue stiffness is in the right order of magnitude [47,48], which suggests that the TEHVs may perform appropriately in vivo.
The necessity of endothelialization of tissue engineered heart valves is still highly controversial. Endothelialization of grafts can improve their long-term patency and prevent thrombogenesis [49]. However, in vitro endothelialization of grafts involves multiple additional procedures, for instance the donation of patient-specific tissue for the isolation of endothelial cells. In vivo studies showed that implantation of non-endothelialized TEHV resulted in almost confluent endothelialization already after 4-8 weeks [16,48,50,51]. Nevertheless, using ADSC for tissue engineering would enable endothelialization, after differentiation of the cells in the presence of vascular endothelial growth factor, without requiring any surgical harvesting of additional patient-specific tissues. In the present study, the endothelial surface layer on the TEHV wall as well as on the valvular leaflets was detected by immunohistochemical staining with the endothelial marker CD31 as well as by ultra-structural analysis (SEM).
After separation of the TE leaflets (a prerequisite before implantation), retraction of the valve leaflets occurred, which might affect the functionality of the valves. This phenomenon, which was also observed in TEHV based on other cell types [52,53], might be the result of the complete relaxation of the stent after harvest or of the relatively high amount of α-SMA. Elimination of cellular components (decellularization) of the TEHV might strongly reduce this problem without altering the collagen structure or tissue strength [47]. As a result, the decellularised TEHV (dTEHV) would not show any retraction of leaflets after separation. Moreover, decellularization would enable off-the-shelf storage of the TEHV and simplify the logistics of TEHV implantations. Prior to implantation, dTEHV could be loaded with less contractile autologous cells that provide a reservoir of soluble factors, exerting paracrine effects on ingrowing cells and thereby stimulating remodelling.
Conclusion
The aim of the present study was to evaluate the feasibility of autologous Adipose Derived Stem Cells (ADSC), obtained through plastic surgery fat harvesting, in combination with the fast-degrading scaffold material PGA/P4HB for the in vitro generation of tissue engineered heart valves.
The simple and minimally invasive surgical procedure, the easy and repeatable access to the subcutaneous adipose tissue, and the uncomplicated enzyme-based isolation procedures make this tissue a most attractive source of MSC for clinical application. ADSC therefore represent an alternative source of autologous adult stem cells that can be obtained repeatedly in large quantities under local anesthesia with a minimum of donor-site morbidity.
TEHV based on human adult ADSC and cultivated in strain bioreactors showed good, viable cell distribution throughout the whole tissue structure. Furthermore, a mechanically stable, isotropic matrix with collagen production was formed, yielding a tissue stiffness of the right order of magnitude for heart valve applications [47]. These preliminary results indicate that ADSC represent a promising cell type for the production of tissue engineered heart valves, in particular due to their capacity to synthesize and remodel extracellular matrix, and to respond to biophysical and biochemical stimuli.
However, despite the promising outcomes, the biological mechanisms that underlie the therapeutic success of stem cell transplantations are still unknown [54]. The successful generation of functional TEHV based on ADSC clearly demands further experiments, including work on animal models, before successful clinical application. Our data will be of key relevance for promoting the efficiency of stem cell therapy. | 2019-04-08T13:08:32.198Z | 2015-09-24T00:00:00.000 | {
"year": 2015,
"sha1": "a916be33ad0f58027c0d777cc6eb03055727242c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2157-7552.1000156",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d1f2fb417445194fb24bcea9e26db9c3b7318160",
"s2fieldsofstudy": [
"Biology",
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
228840183 | pes2o/s2orc | v3-fos-license | Sheltering megalithic Temples in Malta – evaluating the process through data collection and modelling
Since their excavation, a number of the sites listed as part of "The Megalithic Temples of Malta" inscription on the UNESCO World Heritage list have been afflicted by material and structural problems, including collapses. Therefore, three of these sites, the Ħaġar Qim, Mnajdra and Tarxien Temples, were protected by open-sided shelters to address some of the principal causes of deterioration (e.g. direct rainfall, surface weathering, thermal stress). Environmental monitoring, condition assessments and biological surveys of the three sites took place before and after sheltering and are still in progress. To understand how the shelters are affecting these structures, a research programme has started aimed at analysing, through Computational Fluid Dynamics (CFD), the environmental data collected over a period of more than ten years. The aim of using CFD on the Temples is to provide detailed information on how different environmental conditions can affect the sites. For the CFD, macro- and mesoscale approaches will be used. The macroscale model represents the regional environment, including all the terrain features around the Temples. Mesoscale modelling represents the Temple structures in a more detailed way. The final goal is to find confident correlations between the CFD results and representative areas selected within the Temples showing particular deterioration patterns. All this information will be integrated with the results of in situ analyses to identify the causes of material deterioration and possibly mitigate against them.
Background and context
The megalithic Temples of Malta are free-standing stone structures which started being constructed around 3,500 B.C. and continued being used until 2,500 B.C. [1]. Six of these sites have been included on the UNESCO World Heritage List as 'The Megalithic Temples of Malta' inscription (16th session of the World Heritage Committee, 1992).
The structures are composed of an outer and inner limestone megalithic wall running alongside each other, with an infill made of smaller and larger stones, soil and other compacting material between these two. Evidence both within the remains excavated, and in the artefacts found within (including contemporaneous stone "models" [1]) show that these structures were roofed. This was done through the corbelling method, evidence of which still survives in these structures [2] (figure 1). The typical layout of these complexes can be seen in figure 2, which shows the Mnajdra complex composed of three of these apsed structures. The Ħaġar Qim complex has two such structures; Tarxien, the largest of these complexes, has four of these apsed structures [3].
The Ħaġar Qim and Mnajdra complexes were covered by protective shelters in 2008 and 2009 respectively, after much deliberation, with the aim of slowing the deterioration processes affecting the sites [4]. Subsequently, the Tarxien Temples were also protected by means of a protective shelter in 2015. These protective shelters have already shown conservation-related benefits, with the elimination of the periodic collapses of megaliths such as those which had occurred after heavy rain at Ħaġar Qim and Mnajdra [5-8], and of the periodic flooding of the Tarxien complex.
Objectives
The primary objective of this work is to outline a methodology, using Computational Fluid Dynamics (CFD), to guide research into conservation-related problems of archaeological and historical structures such as Malta's megalithic Temples. A data-driven multiscale approach is being used for the simulations, where input data for boundary conditions are based on environmental monitoring carried out on site. The output from the modelling will provide possible correlations between the simulated environmental variables and problematic areas inside the Temples showing particular damage patterns, also highlighted by previous investigations. These areas are being studied in detail through non-invasive portable instruments (e.g. XRD/XRF, FTIR, Raman); the results will eventually be fully integrated with the CFD information to try to elucidate the cause/effect relationship of identified deterioration forms and patterns. This part of the research will be the subject of a future paper.
Weathering processes
The main types of deterioration seen in the three Temple complexes are powdering, flaking, fissuring and alveolar weathering of the Globigerina Limestone megaliths [8], and the occasional presence of superficial layers of calcite recrystallisation on the original stone for the Coralline Limestone, as observed under the scanning electron microscope by Mandrioli et al. [9]. At times, severe deterioration of the megaliths, together with loss of infill and subsequent destabilisation due to heavy rains, led in the recent past to major collapses in these three Temple sites. The main causative factors, and the related weathering processes, identified mainly before the installation of the shelters, are highlighted in figure 3. Nowadays, thanks to the action of the shelters, the main problems related to the direct and indirect action of rainfall and solar radiation are for the most part attenuated or even eliminated. Nevertheless, salt cycles are to be carefully monitored and investigated, as well as relative humidity and temperature fluctuations, in addition to wind action on the megalithic surfaces in the new microclimate/s created by the shelter. Also being studied are surface conditions, both in the stone and on the ground, inside and outside the Temples.
Landscape, topography and terrain
The Temples of Ħaġar Qim and Mnajdra are located on the southwest coast of Malta (figure 4). The landscape around them is largely garigue, interspersed with terraced fields, many of which have been abandoned. The predominant wind is North Westerly (NW) [17]. These two Temples are located near the coastline formed predominantly of steep cliffs created by the Maghlaq Fault. The Mnajdra complex is built in a hollow on the Lower Coralline Limestone slopes around 85 meters above sea level and around 200 meters from the sea, while Ħaġar Qim is built on the crest of a Lower Globigerina Limestone ridge around 600 meters from the sea, at an elevation of around 130 meters above sea level [18]. The region between the coastline and Temples is again largely garigue, partly covered by a relict agricultural landscape. The complex configuration of the coastline, and the steep cliffs that characterise it, cause wind flow from the direction of the sea to become highly asymmetric and turbulent. Small changes in the main wind direction or magnitude may have a significant influence near and within the Temples.
Fluid simulations in heritage science
The majority of papers that describe fluid dynamic simulations in cultural heritage are related to air movement and the resulting ventilation of spaces (90% of the papers reviewed in [19]), most of which indoors (70%). There is a varying level of complexity in the CFD approaches applied to heritage [20,21,22]. In the simplest of cases, simulations aim at obtaining a visualisation of airflow in a given environment. Other, more elaborate, simulations provide supporting evidence for the historical interpretation of a site [23,24]. Finally, simulations intended as an integral step within a design process, a conservation project or, more generally, to support decision-making, can also be found [25,26,27]. The processes of change which are of interest to researchers of heritage environments generally take place over decades or years rather than hours or seconds. Short-term processes such as relative humidity or temperature fluctuations are usually a concern because of their long-term effects. The emphasis on long-term material change and the cumulative effects of rapid variations can be seen to be at odds with the nature of CFD, which is best suited for the simulation of short time-spans or steady-state problems.
There are three main ways of representing time in CFD. First, the simulation can represent an unchanging state that holds for a certain period, which could be infinitely long; this is known as steady-state. Secondly, pseudo-transient simulations can be considered, which represent a series of steady-state scenarios that approximate a continuous variation, for example winter and summer conditions or monthly conditions. In other words, these are time steps that are significantly longer than the time the system takes to reach steady-state conditions. Finally, there are transient simulations, which aim to resolve the equations for every time step of the evolution of the system.
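A minimal sketch of the pseudo-transient idea follows: one converged steady-state solution per representative period, treated as a sequence of quasi-static snapshots. The boundary-condition values are hypothetical, and the solver call is a placeholder, since it depends entirely on the CFD code used.

```python
# Hypothetical monthly boundary conditions derived from monitoring data:
# wind speed (m/s), wind direction (deg), air temperature (deg C).
monthly_bc = {
    "January": {"speed": 8.2, "direction": 315, "temperature": 12.0},
    "July": {"speed": 4.1, "direction": 300, "temperature": 28.5},
    # ... remaining months filled in the same way
}

def solve_steady_state(bc):
    # Placeholder: a real implementation would drive the CFD solver to a
    # converged steady-state RANS solution for these boundary conditions.
    return {"converged": True, "bc": bc}

# Pseudo-transient approximation: each month is a separate steady solve,
# valid when the flow equilibrates much faster than the forcing changes.
results = {month: solve_steady_state(bc) for month, bc in monthly_bc.items()}
```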
As stated in [19], only a quarter of the published simulations are experimentally validated. There is a need for more comparisons between simulations and real-world data collected in the simulated environment. The difficulties of this task in heritage environments are many: slow change, the difficulty of monitoring, and the uniqueness of the sites studied and their conditions. Since CFD aims at simulating the spatial distribution of a quantity, validations should also use spatially distributed data. There is a need for the development of benchmark cases that can be used for the validation of models for a diversity of conservation issues, to be used when other types of validation are not possible. Velocities indoors are usually low (under 0.1 m/s) and sometimes air flows may not be fully turbulent. Models employing modified versions of the k-ε model, such as the Renormalisation Group (RNG) model, seem to provide acceptable results [28], but there needs to be a critical reflection on the use of turbulence models in indoor heritage spaces. There are references to the adoption of SST (shear stress transport) k-ω models for application in indoor environments [29]. Further research is needed in the assessment of the levels of turbulence found indoors and the methods to model it. Despite the importance of the interaction between air and heritage materials, few published simulations include estimates of wall fluxes, such as evaporation or condensation of moisture, or dust and gas deposition. This may be a valid assumption in many instances but should be explicitly discussed. The implementation of near-wall modelling is arguably the most problematic area in turbulence modelling. Dealing with near-wall modelling means focusing on the turbulent boundary layer; such models will, additionally, require computational refinement close to surfaces, which may differentiate heritage CFD models from other indoor simulations.
Fluid simulation in Urban Environment -guidelines
There have been several previous initiatives to establish best practice guidelines in the field of flow simulation in general and for application to the built environment. As stated in [30], for general CFD applications the European Research Community on Flow, Turbulence and Combustion (ERCOFTAC) Best Practice Guidelines [31] is still the most comprehensive document. Special problems of micro-scale meteorological applications are, however, deliberately not addressed there. Best practice guidelines on CFD for wind engineering problems have been published by the Thematic Network for Quality and Trust in the Industrial Application of CFD (QNET-CFD) [32,33]. Besides these European activities, the Architectural Institute of Japan has conducted a cooperative project for CFD prediction of the pedestrian wind environment [34]. For the same application, a working group of the European COST action C14 "Impact of Wind and Storms on City Life and Built Environment" has compiled recommendations for conducting CFD simulations from a comprehensive literature review [35]. The closely related guideline of the VDI (the German Association of Engineers) concentrates on the evaluation and validation of these models for flow around buildings and obstacles [36]. The guideline is structured according to the general steps of conducting a numerical simulation [31]. The main objective of COST Action 732 [38] is the improvement and quality assurance of microscale obstacle-accommodating meteorological models and their application to the prediction of flow and transport processes in urban or industrial environments. This guideline focuses on applications of the statistically steady Reynolds-averaged Navier-Stokes (RANS) equations for situations with neutral stratification without dispersion modelling. However, users of other models, such as unsteady RANS (URANS) and Large Eddy Simulation (LES) models, should consider the same suggestions. Differences and some further, but not extensive, information for URANS and LES applications are also given. The guideline provides general advice that should be considered when performing simulations for model validation and has been tested within COST Action 732 [37].
Thus, this guideline should be adopted as the main reference for developing best practice in the simulation of flows in heritage, paying close attention to how to choose the target variables, the approximating equations describing the physics of the flow, the geometrical representation of obstacles, the computational domain, boundary conditions, initial data, the computational grid, numerical approximations, time step size, iterative convergence criteria and other related variables.
Fluid simulation on Complex Terrains
There are studies which address turbulence modelling issues related to the simulation of flow over complex terrains using a coupling between NWP (Numerical Weather Prediction) code and a classical CFD (computational fluid dynamics) code [38].
In the field of geophysical fluid dynamics, numerical and laboratory-scale modelling of atmospheric flows are the main topics investigated, covering many different applications ranging from the determination of near-surface winds for wind energy applications to high-altitude atmospheric physics [39]. The first simulations in the current research project were produced for the Mnajdra Temples, in order to have a first representative case study. As already stated, these Temples are 85 meters above sea level and around 180 meters from the nearest coastline. This makes the site quite challenging from a fluid dynamics perspective: a slight change in wind speed or direction could completely change the wind flow patterns in and over the Temple. Therefore, this site was chosen as the first site to be modelled. The site also has one of the simplest layouts of the entire group of Maltese Temples, which helps with the modelling itself.
Problem definition
In order to obtain a representative velocity vector distribution inside what is still a complex Temple geometry, a high-resolution grid must be used. Because the Temples are located near the coastline and on a cliff, obtaining detailed velocity vectors for both the sheltered and unsheltered Temple cases is very sensitive to the boundary conditions and computationally expensive. Therefore, a different simulation approach is being proposed.
CFD approach
In this study, a multiscale flow domain process is being used. Three flow domains are identified: (i) the macroscale, (ii) the mesoscale, and (iii) the microscale. In the macroscale domain, the characteristic length scale being modelled is of the order of the site dimensions, a few kilometres. Within the mesoscale domain, the geometry of the Temples is modelled in more detail; the characteristic length scales modelled in this case are of the order of the Temple's principal dimensions, a few meters. Finally, in the microscale, the length scale modelled will be of the order of the boundary layer dimensions found over the rough stone surfaces, a few millimetres.
In Stage 1 (macroscale), verification and validation of pseudo-transient simulations will be performed, representing seasonal, monthly or day-night cycles of wind speed and direction. These simulation results will provide additional information, such as a valid representation of the boundaries (wind speed, temperature, turbulent kinetic energy, dissipation rate of turbulent kinetic energy and turbulence intensity), which will be used as initial conditions for the mesoscale. In Stage 2 (mesoscale), verification and validation of pseudo-transient or transient simulations will be performed, representing averaged seasonal (or monthly, or day-night cycle) conditions. In this stage, more accurate and detailed information about local wind flow magnitude and direction inside the Temples is obtained. By integrating additional environmental data (temperature, humidity, solar radiation, etc.), more realistic representations of the microclimate inside the Temple will be achieved. These simulation results will provide more representative time-scale (seasonal/monthly/daily) data, which will be used for correlation with the experimental data of the planned in situ analyses using portable instruments. The aim of Stage 2 is to provide valid answers to what the main causes of specific deterioration issues inside the Temple are. The coupling methodology (macro and meso) will be tested, verified and validated. In Stage 3 (microscale), the aim is the study of near-wall processes (mass and energy transfer between solid surfaces and the surrounding environment).
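As a minimal sketch of this Stage 1 to Stage 2 hand-over, macroscale profiles sampled at the location of the mesoscale inlet plane can be interpolated onto the finer mesoscale grid. The profiles below are hypothetical, and the dissipation-rate closure (a standard log-layer estimate with C_mu = 0.09) is an assumption, not a prescription taken from the guidelines cited above.

```python
import numpy as np

# Hypothetical macroscale output at the mesoscale inlet location:
# heights (m above local ground), wind speed (m/s) and turbulent
# kinetic energy k (m^2/s^2).
z_macro = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
u_macro = np.array([3.1, 4.0, 4.8, 5.6, 6.7, 7.5])
k_macro = np.array([0.45, 0.42, 0.40, 0.37, 0.33, 0.30])

# The mesoscale inlet grid is much finer, especially near the ground.
z_meso = np.geomspace(0.1, 100.0, 200)

# One-way coupling: interpolate macroscale fields onto the mesoscale inlet
# (values below/above the sampled range are clamped to the end points).
u_inlet = np.interp(z_meso, z_macro, u_macro)
k_inlet = np.interp(z_meso, z_macro, k_macro)

# Assumed log-layer closure for the dissipation rate epsilon at the inlet.
C_mu, kappa = 0.09, 0.41
eps_inlet = C_mu**0.75 * k_inlet**1.5 / (kappa * z_meso)
```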
Verification and validation
Every multiscale flow domain stage (macro, meso and micro) must be verified and validated independently. The examination of the spatial convergence of a simulation is a straightforward method for determining the ordered discretization error in a CFD simulation. The method involves performing the simulation on two or more successively finer grids. The term "grid convergence study" is equivalent to the commonly used term "grid refinement study". Establishing grid convergence is a necessity in any numerical study: it is essential to verify that the equations are being solved correctly and that the solution is not sensitive to the grid resolution. The "grid convergence index" [40] is a standardized way to report grid convergence quality; it is calculated from the solutions at successive refinement steps.
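A minimal implementation of the grid convergence index in the sense of [40] is sketched below, assuming solutions on three systematically refined grids with a constant refinement ratio and the commonly used safety factor of 1.25; the sample values are hypothetical.

```python
import numpy as np

def grid_convergence_index(f_fine, f_medium, f_coarse, r, Fs=1.25):
    """GCI for three grids with constant refinement ratio r
    (coarse -> medium -> fine); f_* is the monitored quantity."""
    # Observed order of convergence from the three solutions.
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
    # Relative difference between the two finest grids.
    e21 = abs((f_medium - f_fine) / f_fine)
    return p, Fs * e21 / (r**p - 1.0)

# Hypothetical values of a monitored quantity (e.g. velocity magnitude at
# a probe point inside the Temple) on three grids with r = 2.
p, gci = grid_convergence_index(4.98, 4.91, 4.68, r=2.0)
print(f"observed order p = {p:.2f}, GCI(fine grid) = {100 * gci:.2f}%")
```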
The proposed approach must also be validated. This will be done using 2D and 3D ultrasonic anemometers on site. Inlet profile validation will also be carried out: a vertical array of 2D ultrasonic anemometers will be used outside the shelter, representing the main inlet boundary for the mesoscale model. An internal probe, a 3D ultrasonic anemometer, will be placed inside one of the apses of the Temple. Global validation will be achieved using the environmental monitoring stations installed outside the Temples, which are already acquiring data.
The aim of verification and validation is the assessment of the accuracy and reliability of computational simulations, which leads to a closer representation of the real world.
Conclusions
The aim of this paper is to propose a multiscale domain approach for CFD studies on cultural heritage (archaeological) sites by integrating different domain scales (macro, meso and micro). Being similar to approaches coupling NWP (numerical weather prediction) with CFD codes, this approach could provide a more detailed and more accurate simulation of the microclimate inside these Temples. These results can then be used to find confident correlations between the CFD and representative areas selected within the Temples showing deterioration patterns. These correlations will provide deeper insight into the effects of the shelters on the megaliths themselves and could also lead to the development of further mitigation measures against the deterioration of these sites.
Further studies must be carried out on the specific coupling mechanisms between the different domains, integrating the guidelines for the CFD simulation of flows in the urban environment. It is in fact planned that these simulations will also be carried out for the other sheltered, more complex sites of Ħaġar Qim and Tarxien. If proven to effectively represent the conditions in these very complex structures, this methodology could then possibly be used for the development of best practice guidelines in the simulation of flows in heritage structures. | 2020-11-12T09:07:20.730Z | 2020-11-11T00:00:00.000 | {
"year": 2020,
"sha1": "43a2a656e1c39c12a4f7fff7b41112e861c2f2e9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/949/1/012035",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "74dfc5bf66cdf4520bb2bba10903c79a0d498079",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Geology"
]
} |
210156927 | pes2o/s2orc | v3-fos-license | Theoretical Prediction of Magnetic Exchange Coupling Constants from Broken-Symmetry Coupled Cluster Calculations
Exchange coupling constants ($J$) are fundamental to the understanding of spin spectra of magnetic systems. Here we investigate the broken-symmetry (BS) approaches of Noodleman and Yamaguchi in conjunction with coupled cluster (CC) methods to obtain exchange couplings. $J$ values calculated from CC in this fashion converge smoothly towards the FCI result with increasing level of CC excitation. We compare this BS-CC scheme to the complementary EOM-CC approach on a selection of bridged molecular cases and give results from a few other methodologies for context.
I. INTRODUCTION
The energy level structure of spin states is fundamental to the description of magnetism in molecules and materials. For molecules with localized spins on different atoms, the low-energy spin states can often be qualitatively understood in terms of the phenomenological Heisenberg model [1-3],

$$\hat{H} = -2 \sum_{A<B} J_{AB}\, \hat{S}_A \cdot \hat{S}_B, \qquad (1)$$

where A and B index the "spin centers". The Heisenberg model is completely parametrized by the magnetic exchange coupling constant, J_AB, for each spin interaction A-B.
Estimating the exchange coupling, and its geometric dependence, is complicated by the fact that the underlying mechanism of spin interactions is a multielectron process, such as Anderson super-exchange [4]; furthermore, the low-spin electron configurations that often appear in such investigations are a formidable challenge for quantum chemical methods. The most commonly used approach involves calculations with density functional theory (DFT). Although DFT is ill-suited to describing eigenstates of the Heisenberg model, which possess multireference character arising from the largely independent spin orientations of the different centers, correctly parametrizing the model only requires us to match the energies of low-energy states, which need not be chosen as eigenstates. Consequently, it is commonly found that approaches based on broken-symmetry (BS) spin states, such as the ones proposed by Noodleman [5] and Yamaguchi [6], can give estimates of J that are qualitatively comparable to experimentally extracted values, even in cases where the exchange coupling arises due to super-exchange [7-14]. Still, it is worthwhile to explore more sophisticated approaches within electronic structure, as this potentially permits the intrinsic Heisenberg energy level structure to be predicted with quantitative accuracy.
Coupled cluster (CC) theory is often used to generate benchmark-quality descriptions of molecular properties [15]. Recently, Mayhall and Head-Gordon used spin-flip equation-of-motion (EOM) CC methods to obtain exchange couplings [16], based on using CC and EOM-CC to approximate the two eigenstates of highest and next-highest spin described by the Heisenberg model. However, as mentioned, it is not necessary to target spin eigenstates when parametrizing the Heisenberg model. Here we adopt the broken-symmetry methods of Noodleman and Yamaguchi in conjunction with coupled cluster theory to estimate the exchange parameters. We assess this broken-symmetry CC technique in a variety of magnetically coupled small molecules and bridged transition metal dimers.
A. Extracting exchange couplings
Where used, the Heisenberg model is intended to describe the low-energy spin excitations of the system, but such a description is necessarily approximate. Thus the value of the exchange coupling depends in part on the way in which it is extracted from data. Experimentally, values reported in laboratory studies are generally obtained by fitting the measured magnetic susceptibility to predictions based on the Heisenberg model.
Within theoretical approaches, we can easily illustrate the ambiguity in a system with only two spin centers, like the ones studied in this work, in which a single value of J defines the Heisenberg model completely. For example, Fig. 1 shows the spin ladder for [Fe2OCl6]2- computed from spin-averaged complete-active-space self-consistent field (CASSCF(10,10)) orbitals [17], so as to treat all spin states on an equal footing, corrected by n-electron valence second-order perturbation theory (NEVPT2) [18-21] to partially recover the correlation lost by limiting the active space. (From the caption of Fig. 1: note that for single-reference methods |S_tot| can differ significantly from integer values.) Choosing the two highest spin states (HS, HS-1), as is done in the procedure of Mayhall and Head-Gordon, gives a value of J that is 71 cm^-1 smaller in magnitude than if the states of lowest multiplicity are used, a discrepancy which is comparable to the J values themselves. A simple Heisenberg model cannot exactly capture this spectrum, and fits of the Heisenberg model yield exchange couplings that vary by up to a factor of 2 depending on the chosen weighting of the states in the fit. A least-squares fit to all states yields -85 cm^-1, which is within 10% of the former value, and even closer to that obtained when the lowest-spin and highest-spin (LS and HS, respectively) states are selected. Note that a strong S dependence of J in these fits does not necessarily mean that the Heisenberg model is a poor approximation for the molecule itself, because the quality of the theoretical approximations themselves depends on the spin state. Thus we see that, when giving a theoretical value for J, it is important to specify which states were used to compute it, which we do in our work below.
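Such a least-squares fit reduces to a linear regression of the state energies against S(S+1), since the Heisenberg model of Eq. 1 for a two-center system gives E(S) = E_0 - J S(S+1) up to a spin-independent constant. The sketch below uses hypothetical ladder energies (not the values of Fig. 1); the chosen weighting of states could be supplied through the fit's weight argument.

```python
import numpy as np

# Hypothetical spin-ladder energies E(S) (hartree, relative to S = 0)
# for total spins S = 0..5 of a two-center system; real input would be
# e.g. the CASSCF/NEVPT2 state energies.
S = np.arange(6)
E = np.array([0.0, 0.0009, 0.0026, 0.0051, 0.0085, 0.0127])

# With H = -2 J S_A.S_B, eigenstates satisfy E(S) = E0 - J * S(S+1),
# so -J is the slope of E against S(S+1).
x = S * (S + 1)
slope, E0 = np.polyfit(x, E, 1)     # state weights could be passed via w=
J = -slope

print(f"J = {J * 219474.63:.1f} cm^-1")  # hartree -> wavenumbers
```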
Finally, we stress that it is not necessary, nor always desirable, to fit the exchange parameters of the Heisenberg model to theoretical calculations of spin eigenstates. The basis of an effective model is that there exists a space of low-energy states where the matrix elements of the model Hamiltonian and the ab initio Hamiltonian agree, but one is free to choose any rotation within this space to characterize the model parameters. While fitting to eigenstates is convenient, it is undesirable if the theoretical approach incurs a large error for such states. This is the rationale behind broken symmetry approaches, which we now discuss.
B. Broken symmetry approach to J couplings

One of the earliest proposals to estimate exchange couplings from broken-symmetry wavefunctions was given by Noodleman [5]. His popular method computes magnetic exchange coupling constants using broken-symmetry unrestricted Hartree-Fock (BS-UHF) solutions for low-spin states,

$$J = \frac{E(\mathrm{BS}) - E(\mathrm{HS})}{s_{\max}^2},$$

where E(BS) is the energy of the low-spin solution, E(HS) is the high-spin energy, and s_max is the total spin of the high-spin state. This assumes that the broken-symmetry state is an equal mixture of the lowest and highest spin states, which is strictly valid only for broken-symmetry determinants with two s = 1/2 centers in the weak-overlap limit. A more general approach was suggested by Yamaguchi, originally for DFT calculations [6]. That approach, and its correspondence to that of Noodleman (which is also today applied with DFT calculations) [5], can be developed as follows. Consider two coupled spins S_A and S_B, for which the resultant spin is

$$\hat{S} = \hat{S}_A + \hat{S}_B.$$

Using the definition of J_AB, Eq. 1, the energy of a given state ψ (not necessarily an eigenstate) is

$$E(\psi) = E_0 - J_{AB}\, \langle \hat{S}^2 \rangle_\psi,$$

which can be used to determine J_AB by using energies of any two states ψ_1 and ψ_2, viz.

$$J_{AB} = \frac{E(\psi_2) - E(\psi_1)}{\langle \hat{S}^2 \rangle_{\psi_1} - \langle \hat{S}^2 \rangle_{\psi_2}}.$$
Typically, one chooses ψ_1 to be an approximation to the HS state, which is usually close to a spin eigenfunction with most methods. For the case under consideration, one can then obtain the specific form of the Yamaguchi formula by inserting the HS (T) and BS (S) energies and spins:

$$J = \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}{\langle \hat{S}^2 \rangle_{\mathrm{HS}} - \langle \hat{S}^2 \rangle_{\mathrm{BS}}}.$$

For two uncoupled spins, the broken-symmetry UHF singlet solution is roughly "half-singlet" and "half-triplet", so that ⟨S²⟩_BS ≈ 1, the equality of which recovers the Noodleman formula with s_max = 1, provided the high-spin wavefunction is a spin eigenfunction. Similarly, for the desired broken-symmetry solution in which all unpaired α spins are on one center and all unpaired β spins on the other, it can be shown that ⟨S²⟩_BS = s_max, so that the denominator of the Yamaguchi formula reduces to

$$\langle \hat{S}^2 \rangle_{\mathrm{HS}} - \langle \hat{S}^2 \rangle_{\mathrm{BS}} = s_{\max}(s_{\max}+1) - s_{\max} = s_{\max}^2,$$

which serves to show the correspondence between the Yamaguchi and Noodleman equations. The advantage of the Yamaguchi formula is that it can be applied to any wavefunction for the low-spin state, approximate or exact, while the Noodleman formula (at least in the sense of the correspondence illustrated above) applies only when the broken-symmetry wavefunction is used in its unadulterated form, i.e. at the SCF (or Kohn-Sham DFT) level of theory. The accuracy of the Yamaguchi formula then depends on how completely the low-spin state is contained in the linear span of spin eigenstates that form the model space of the Heisenberg model, and on how well the theoretical method captures the expectation value of the energy in such a state. It has been recognized that coupled-cluster (CC) calculations based on broken-symmetry reference functions are an expedient way to obtain reasonably accurate energies in many situations qualitatively described by low-spin electronic configurations [22], such as in homolytic bond-breaking and some transition states (similar strategies are followed in broken-symmetry DFT, which is often referred to as broken-symmetry unrestricted Kohn-Sham theory (BUKS)). As the expectation value of S² is easily calculated for coupled-cluster wavefunctions [23], it is thus worthwhile to explore the Yamaguchi formula to calculate magnetic exchange coupling constants using broken-symmetry CC wavefunctions, and such calculations form the core of the work reported here.
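Both estimators are trivial to encode; the sketch below writes them directly from the formulas above, with a consistency check for the ideal two-s = 1/2-center case discussed in the text (arbitrary energy units).

```python
def j_noodleman(e_bs, e_hs, s_max):
    """Noodleman estimate: J = (E_BS - E_HS) / s_max**2."""
    return (e_bs - e_hs) / s_max**2

def j_yamaguchi(e_bs, e_hs, s2_bs, s2_hs):
    """Yamaguchi estimate: J = (E_BS - E_HS) / (<S^2>_HS - <S^2>_BS)."""
    return (e_bs - e_hs) / (s2_hs - s2_bs)

# Two s = 1/2 centers: the ideal BS determinant has <S^2> ~ 1 while the
# triplet has <S^2> = 2, so the two formulas coincide (s_max = 1).
assert abs(j_noodleman(-1.0, 0.0, 1) -
           j_yamaguchi(-1.0, 0.0, s2_bs=1.0, s2_hs=2.0)) < 1e-12
```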
III. ILLUSTRATIVE CALCULATIONS
J values for a series of molecules with bridged spin centers will now be presented, comparing the BS-CC approach described above to the EOM-CC approach described previously by Mayhall and Head-Gordon. For reference, we will also give results obtained by the most commonly used approach, evaluating the Noodleman formula with DFT orbitals, and a few other methods.
A. Computational Details
All calculations were carried out in the cc-pVDZ basis [24,25] unless specified otherwise, or in plane-wave bases where denoted by PW. PBE, HF, CAS, EOM, and CCSD(T) results were generated with pyscf [17,21,26,27]. Coupled cluster results beyond CCSD(T) were generated with CFOUR [28] and the MRCC program of Kállay [29,30]. PW-DFT results were generated in VASP [31,32] as a simple check of the robustness of the procedures with respect to the computational basis.
For the Gaussian orbital calculations, orbitals were first obtained via a restricted open-shell calculation (ROKS/ROHF) for the HS state. Guess orbitals for the LS solution were derived by localizing the singly occupied space of the ROKS/ROHF solution and assigning α and β occupancies to them, which were subsequently converged to the BS-UKS/BS-UHF ground state. In addition, HS UKS orbitals were computed, taking care to break spatial symmetry when present in order to obtain the lowest energy solution.
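As a hedged illustration of this workflow, the following pyscf sketch computes HS and BS UCCSD energies for the H-He-H model of Sec. III B and evaluates the Yamaguchi formula. Two simplifications relative to the procedure in the text are assumptions of this sketch: the BS guess is built by swapping the α/β density blocks on one hydrogen atom (a simpler alternative to localizing the ROHF singly occupied orbitals), and mean-field ⟨S²⟩ values are used, whereas the study evaluates ⟨S²⟩ at the CC level.

```python
from pyscf import gto, scf, cc

# H-He-H model with R(H-He) = 1.5 Angstrom; HS = triplet (spin = 2).
mol_hs = gto.M(atom="H 0 0 0; He 0 0 1.5; H 0 0 3.0",
               basis="cc-pvdz", spin=2)
mf_hs = scf.UHF(mol_hs).run()

# Broken-symmetry guess: copy the HS density and flip the local spin on
# the first hydrogen, then converge an Ms = 0 UHF solution (a stability
# analysis may be needed to stay on the broken-symmetry solution).
mol_bs = mol_hs.copy()
mol_bs.spin = 0
mol_bs.build()
dma, dmb = mf_hs.make_rdm1()
p0, p1 = mol_hs.aoslice_by_atom()[0][2:4]      # AO range of atom 0
dma_bs, dmb_bs = dma.copy(), dmb.copy()
dma_bs[p0:p1, p0:p1] = dmb[p0:p1, p0:p1]
dmb_bs[p0:p1, p0:p1] = dma[p0:p1, p0:p1]
mf_bs = scf.UHF(mol_bs)
mf_bs.kernel(dm0=(dma_bs, dmb_bs))

# Correlated energies on top of each reference determinant.
e_hs = cc.UCCSD(mf_hs).run().e_tot
e_bs = cc.UCCSD(mf_bs).run().e_tot

# Yamaguchi formula with mean-field <S^2> values (see caveat above).
s2_hs = mf_hs.spin_square()[0]
s2_bs = mf_bs.spin_square()[0]
J = (e_bs - e_hs) / (s2_hs - s2_bs)
print(f"J = {J * 219474.63:.1f} cm^-1")        # hartree -> wavenumbers
```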
For the plane-wave calculations, projector-augmented-wave (PAW) pseudopotentials [33,34] were employed with a plane-wave cutoff energy of 500 eV and an energy threshold for self-consistency of 10^-6 eV.
Correlated wavefunction calculations were carried out starting from the Gaussian orbital mean-field solutions. UCCSD calculations were based on the corresponding (HS/LS) HF solution, keeping all core orbitals frozen. For the BS approach, the BS-UHF orbitals were used. For the EOM approach, ROHF orbitals were used, since this allowed easier convergence of the EOM amplitudes. To initialize the EOM eigenvectors into the correct space, a small EOM calculation was carried out freezing all but the singly occupied orbitals. The singles amplitudes from this calculation were then taken as an initial guess for the eigenvectors in the full-space EOM calculation. Preliminary testing showed ⟨S²⟩ values computed by CCSD and CCSD(T) to be similar. To avoid large memory requirements for the larger systems, ⟨S²⟩ values computed by CCSD were used for CCSD(T) as well.
CASCI calculations were performed using ROHF/ROKS orbitals, choosing all singly occupied orbitals as the active space. Further CASCI calculations were performed using orbitals determined from spinaveraged CASSCF calculations over the same space, weighting the HS and LS state equally (CASCI(sa)). Second-order perturbative corrections were calculated for all cases separately via NEVPT2 (denoted "+PT2" below). Because both CASCI and NEVPT2 used a spin-adapted implementation, the spins appearing in the Yamaguchi formula for these methods are equivalent to the spins of the eigenstates.
B. Comparison to the full configuration interaction limit
We first look at two cases which can be solved effectively exactly (i.e. full CI quality results are available) in Table I. Both model systems comprise two spin-1/2 centers coupled via super-exchange into a singlet and a triplet. Both structures are centrosymmetric molecules comprising two hydrogen atoms bridged by a central closed-shell atom (X = He, R(H-He) = 1.5 Å, and X = F⁻, R(H-F) = 2 Å).
Applying the Yamaguchi formula, the series of CC methods converges smoothly to the FCI limit. CCSDTQ is exact for H-He-H, and CCSDTQPH can already be seen as almost converged for [H-F-H]⁻, where FCI requires octuple excitations. In routine chemical practice, however, calculations beyond CCSD(T) are rarely feasible. It is encouraging that J values obtained with BS-CCSD and EOM-CCSD are comparable in both cases and in good agreement with the exact limit. Specifically, they are considerably closer than the traditionally used Noodleman approaches with mean-field methods. As the coupled cluster series approaches the exact limit, the corresponding ⟨S²⟩_LS values have to decay from the broken-symmetry value of the reference determinant to the spin-eigenfunction value of zero. Therefore, applying the Noodleman formula to coupled cluster energies with increasing excitation level must converge to the wrong result, since it does not take this effect into account. Since the deviation of the spin of the LS state from the broken-symmetry value is already substantial within the CCSD description, especially for H-He-H (⟨S²⟩ = 0.362), it is critical for the BS-CCSD approach to employ the Yamaguchi and not the Noodleman formula, to correct for the non-zero ⟨S²⟩ value. Without any correction, one would obtain only J = -437 cm^-1 even with CCSD(T), while the Noodleman formula would drastically overshoot (see Table I). This difference between the Noodleman and Yamaguchi equations does not occur within the mean-field description for which the Noodleman approach was originally intended, as the BS ⟨S²⟩ value (0.998) is quite close to the ideal value of 1. We will study in Section III C how important this difference is in real molecular systems. Surprisingly, for the H-He-H case, EOM-CCSD even outperforms BS-CCSD(T). We will see in Section III C that this is not always the case in realistic molecules.
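The effect of the ⟨S²⟩ correction can be made concrete with a toy calculation: holding a hypothetical BS-HS energy gap fixed while ⟨S²⟩_BS decays from the UHF value of 0.998 through the CCSD value of 0.362 quoted above towards the exact value of zero. The Yamaguchi denominator then approaches ⟨S²⟩_HS = 2, while the Noodleman denominator stays fixed at s_max² = 1.

```python
# Hypothetical fixed energy gap E_BS - E_HS (hartree) for illustration.
gap = -0.004

for s2_bs in (0.998, 0.362, 0.050, 0.000):   # SCF -> CCSD -> ... -> exact
    j_noodleman = gap / 1.0**2               # s_max = 1, fixed denominator
    j_yamaguchi = gap / (2.0 - s2_bs)        # <S^2>_HS = 2 for the triplet
    print(f"<S^2>_BS = {s2_bs:5.3f}: "
          f"Noodleman {j_noodleman:+.5f}  Yamaguchi {j_yamaguchi:+.5f}")
```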
While Noodleman and Davidson originally suggested their equation for HF, it is often used with density functionals instead. The Noodleman ROHF/BS-UHF results for these cases are only off by up to 37%, whereas the corresponding PBE results can be off by more than a factor of two. Similar results are seen in both the Gaussian and PW bases.
CASCI underestimates the magnitude of the coupling constant, since the active space only correlates the valence electrons of the two spin centers and thus does not capture the super-exchange mechanism. NEVPT2 treats the effect of all other electrons perturbatively and recovers part of the missing correlation. We find that NEVPT2 still underestimates the missing correlation, and therefore the magnitude of J, although it outperforms all mean-field methods, independent of whether the Noodleman or Yamaguchi formula is used.
C. Application to bridged transition metal dimers
We next consider how these findings generalize to realistic bridged transition metal dimers with varying numbers of d-electrons (Figure 2).
The different CC approaches are contrasted in Table II; going from BS-CCSD to BS-CCSD(T) increases the magnitude of J. In all these cases, then, the FCI limit is probably slightly larger in magnitude than the BS-CCSD(T) result. Given this assumption, BS-CCSD(T) performs best across all molecules. There is no clear trend as to whether BS-CCSD or EOM-CCSD performs better.
All three methods are consistent even away from the equilibrium geometry. Figure 3 shows the energy curves for Ti2OCl4 with respect to symmetric stretching of the Ti-O bond distance, maintaining all other angles and distances, together with the corresponding J values. All methods agree on the equilibrium distance and show J to (properly) decay towards zero, at similar rates, as the bond is dissociated.
We contrast BS-CCSD(T) and EOM-CCSD with results from mean-field calculations as well as CASCI(sa)+PT2 and experiment in Table III. Both CC approaches are broadly consistent with experimental results in all cases. From this, one can surmise that CC methods provide reliable results which can be used to compare with other methods.
While a rigorous benchmark of different mean-field approaches is beyond the scope of this study, the following deserves mention: Hart et al. [35] had concluded from studying H-He-H, [H-F-H]⁻, and Ti2OCl4 that the Noodleman formula with HF performed better when using restricted open-shell rather than unrestricted HS energies. While we can reproduce this effect for the same molecules (in fact, for two of them, using unrestricted HS orbitals even yields the wrong sign), this seems not to be true in general. In all cases involving transition-metal systems, the mixed ROHF-UHF approach tends to vastly overestimate the magnitude of the coupling constant.
HF results are off drastically in many cases, regardless of the orbitals and formula chosen (Noodleman or Yamaguchi), even by as much as an order of magnitude in the case of [Mn2O(CN)10]6-. The same can be said for PBE results. That the BS-CCSD(T) result is obtained from the same BS-UHF orbitals as the UHF J values indicates that the rather poor results for the other methods reflect true shortcomings of those methods in the context of these applications. This is even true for CASCI(sa)+PT2 which, apart from this case, follows the same behavior as discussed previously. (Table II caption: State-specific absolute energies and J values computed via EOM-CCSD, BS-CCSD, and BS-CCSD(T). For EOM, "low" denotes the HS-1 state; for BS, the broken-symmetry LS state. The EOM HS energy is obtained from ROHF orbitals, the BS HS energies from UHF orbitals. The exact values of ⟨S²⟩_HS and ⟨S²⟩_HS-1 were used for the evaluation of J from EOM, to reproduce the procedure used by Mayhall and Head-Gordon [16]. For BS-CCSD(T) the ⟨S²⟩ values computed with BS-CCSD were used.)
One interesting finding in this study concerns the values of ⟨S²⟩ for the BS-CC wavefunctions. While the small systems treated in Section III B are such that correlation at the CCSD level acts to significantly reduce the LS ⟨S²⟩ value from the near-unity value of the reference determinant, it turns out that this is not true for the transition metals, where the correlation contribution to ⟨S²⟩ is rather small. It seems that in these cases the electrons within the spin centers are more "correlated" than are the interactions between electrons on different spin centers. Because of this, it is apparent that the simple Noodleman equation, which does not require the (somewhat expensive) calculation of ⟨S²⟩, can be applied in conjunction with the BS-CC wavefunctions. We verify this in Table IV and find that this approach indeed yields results almost identical to full BS-CCSD and BS-CCSD(T), respectively. It is important to note that this simpler approach appears to work well in practice (at the CCSD and CCSD(T) levels). As discussed previously, however, it is apparent that as one converges the level of CC excitations in these molecules, ⟨S²⟩ will tend to zero, and this approach has to eventually converge to the wrong limit. We have seen this in Section III B, where, due to the small size of the molecules, CCSD already resulted in significantly reduced ⟨S²⟩ values.
IV. CONCLUSION
This work demonstrates that a simple application of the broken-symmetry approach for calculating magnetic exchange coupling constants in conjunction with coupled-cluster theory provides useful results in practice. As such, this method complements recent work by Mayhall and Head-Gordon that has used the spin-flip variant of equation-of-motion coupled cluster theory. The two approaches both rely on fitting J to two energies; the present method uses the highest-spin and broken-symmetry lowest-spin states, while the EOM-CC method uses the two highest spin states. Note that there is no formal disadvantage to using broken-symmetry states, so long as the ⟨S²⟩ values are computed for the states of interest, as in the formula of Yamaguchi. However, we have also shown that the simpler approach of Noodleman, which posits the value of ⟨S²⟩ for the broken-symmetry lowest-spin state, works as well in practice for many realistic molecules.
Computations by the present method are quite straightforward; one needs only to find BS solutions to the self-consistent field equations to obtain a reference single determinant, and to evaluate coupled-cluster energies and (optionally, if the Yamaguchi formula is used) one- and two-electron density matrix elements (⟨S²⟩ is straightforwardly computed from these). In particular, one does not need to wrestle with converging the EOM-CC equations or assigning spin states, which is not always straightforward [16]. In our experience, iterative solvers for the EOM equations can get stuck on higher-energy solutions unless initial guesses are constructed very carefully. While we studied binary systems with only a single J coupling in this work, in many cases one is interested in finding J for each of multiple interactions in a molecule separately. BS coupled cluster methods can then potentially be applied in the same way as BS DFT, by spin-flipping into separate configurations. Calibrating other methods may be one of the main uses of more accurate methods for determining exchange couplings. Since both the EOM and BS coupled cluster approaches agree broadly with experiment and yield consistent results across all studied systems, even away from equilibrium, they represent a reliable gauge by which to assess the accuracy of other methods. This is especially valuable since we observe very different behavior for different classes of molecules. For example, we can confirm that in small model systems using the commonly applied Noodleman formula with ROHF energies instead of UHF energies for the HS state yields superior results, as posited by Hart et al. [35]. However, we observe the same not to be true for the larger transition metal complexes.
In short, broken-symmetry coupled cluster theory provides a straightforward methodology to predict magnetic exchange coupling constants, complementing approaches that target spin eigenstates, such as equation-of-motion coupled cluster methods and complete-active-space techniques. It is especially reliable when employing the Yamaguchi equation, in which case it can cope with almost arbitrary amounts of spin contamination. | 2020-01-09T21:17:56.000Z | 2020-01-09T00:00:00.000 | {
"year": 2020,
"sha1": "28a68347648ecb49d00c55a3ec41b12d3a3d264b",
"oa_license": null,
"oa_url": "https://authors.library.caltech.edu/101898/2/1.5144696.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "28a68347648ecb49d00c55a3ec41b12d3a3d264b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
254613553 | pes2o/s2orc | v3-fos-license | Determining if Honey Bees (Apis mellifera) Collect Pollen from Anemophilous Plants in the UK
ABSTRACT Whether insect pollinators use wind-pollinated plants has implications for insect monitoring and conservation strategies in a wide range of environments. Habitats such as coniferous plantations and arable crops of the Poaceae family are not typically considered a priority for the monitoring of insect pollinators or for habitat enhancement. Further, many pollinator monitoring techniques focus on flowers and do not count insect interactions with wind-pollinated plants. Using two honey bee colonies from distinct environments (urban and rural) in north east England, we investigate the use of wind-pollinated plants over the summer of 2021. We combine honey bee pollen pellet analysis with airborne pollen sampling to investigate whether honey bees use three common wind-pollinated plant groups (Pinus sp., Plantago sp. and Poaceae) that have previously been considered sources of forage. Our results show that honey bees do forage on Plantago and Poaceae pollen, in line with previous studies. However, we show statistically that Pinus pollen is contamination from the atmosphere and not actively collected. It is important to consider airborne contamination before making interpretations based on small amounts of pollen in samples of bee products. The use of members of the Poaceae has implications for insect pollinator monitoring in urban environments, which has not always been considered in past studies.
Introduction
Global insect pollinator decline is a consequence of a wide range of factors, including land management decisions, climate change, pesticides, pathogens and species introductions (Potts et al. 2016;Baldock 2020). Headline figures regarding insect pollinator importance are commonly associated with agricultural land where pollinators improve the quantity, or quality, of the yield (Potts et al. 2016;Garibaldi et al. 2021). These agricultural systems, associated with insect pollinators, are typically dominated by entomophilous (insect-pollinated) plants. However, there is a growing awareness that insect pollinators use anemophilous (wind-pollinated) plants (Jones 2014;Saunders 2018). This has led to calls for the promotion of more sustainable practices and conservation management strategies in agricultural and forestry communities that were not previously considered priority for insect pollinators (Saunders 2018). Use of anemophilous plants has been documented through direct and indirect observations. Direct observations of insect pollinators foraging on anemophilous plants include many members of the Poaceae family and species of Plantago (Jones 2014;Saunders 2018). However, far more observations are indirect, coming mainly from pollen analysis of honey, corbiculae pollen loads, brood cells or nests (Severson and Parry 1981;Keller et al. 2005;Baum et al. 2011;Saunders 2018;El-Sofany et al. 2020). Such indirect observations show a wide range of anemophilous plants apparently being used by pollinators (Saunders 2018). For honey bees (Apis mellifera L.), the use of some anemophilous plants is well established (Keller et al. 2005;Saunders 2018). Observations and pollen analysis show the use of Zea mays crops and a variety of tree species as a widespread phenomenon (Severson and Parry 1981;Keller et al. 2005;Di Pasquale et al. 2016;El-Sofany et al. 2020). However, indirect observations alone present something of a paradox, especially for anemophilous plant pollen present in a sample in small quantities: was it actively collected?
Honey bees collect pollen as a source of amino acids, fats, minerals, proteins, starch, sterol and vitamins (Brodschneider et al. 2018). A diverse selection of floral sources is required for a colony to get all their nutritional needs (Roulston and Cane 2000). Whilst the general rule still persists that honey bees forage on one plant per foraging trip, multiple studies have shown that around 40% of pollen pellets contain two forage sources (Betts 1935;Brodschneider et al. 2018;Hornby et al. 2022). Within all pellets, there are typically small quantities of other pollen. Betts (1935) termed these 'doubtfuls' and suspected they were not actively collected by the honey bee. Brodschneider et al. (2018) proposed a value of >10% pollen in a pellet for a plant to have been actively foraged on, whereas anything below 10% was considered contamination. Suggestions for these sources of contamination range from previous pollinator activity on flowers, residual pollen left on hairs from previous foraging trips, bee to bee contact, or contact with another contaminated surface (Betts 1935;Brodschneider et al. 2018). One source that has not been widely considered is atmospheric contamination.
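A minimal sketch of how such a threshold rule can be applied to pellet counts; the function name and the example percentages below are hypothetical:

    def classify_pellet(pollen_pct, threshold=10.0):
        # Map {taxon: % of counted grains in a pellet} to a forage label,
        # following the >10% criterion of Brodschneider et al. (2018).
        return {taxon: ("actively foraged" if pct > threshold else "possible contamination")
                for taxon, pct in pollen_pct.items()}

    # Invented pellet dominated by one source, with trace wind-borne pollen:
    print(classify_pellet({"Trifolium": 88.6, "Poaceae": 10.9, "Pinus": 0.5}))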
Pollen released into the atmosphere can travel long distances, achieve great altitude and be in sufficient quantities to cause allergic reactions (Ziello et al. 2012;Szczepanek et al. 2017;Williams and Barnéoud 2021). Both Pinus and Poaceae pollen are ubiquitous in the atmosphere across Europe during their pollination season, including long-range transport of pollen to sampling stations when local plants are not releasing pollen (Kasprzyk 2006;Szczepanek et al. 2017). The potential for contamination in the beehive comes from the interaction with airborne particles during flight (Negri et al. 2015), during beekeeper inspections and hive management practices (Molan 1998), and by the proximity/size of the brood chamber to honeycomb (Fernandez and Ortiz 1994). In this paper, we aim to test whether honey bees collect pollen of anemophilous plants, with a focus on Poaceae, Plantago and Pinus pollen, or if airborne contamination can better explain observations. We include Poaceae and Plantago as extensive melissopalynological work and direct observational data confirm their usage (Severson and Parry 1981;Keller et al. 2005;Di Pasquale et al. 2016;El-Sofany et al. 2020). Pinus pollen is included because it has recently been identified as being used by honey bees through indirect observations of very small numbers of pollen grains (Saunders 2018 and references therein).
Materials and methods
Two hives were sampled from June to October 2021 in North East England. One hive (urban) was located in Newcastle-upon-Tyne city centre on an enclosed terrace (ground level is 45 m above sea level and the terrace is one floor above ground level), with urban trees, amenity grasslands, parks and residential gardens within 3 km of the hive (Figure 1). The second hive (rural) was located 18 km to the west of Newcastle-upon-Tyne on a partially reforested disused airfield, located at 145 m above sea level and mainly surrounded by farmland (Figure 1). An urban and a rural hive were selected to incorporate contrasting surrounding environments within a landscape controlled by human intervention. This can be summarised by comparing the percentages of the two major land-uses from a 3-km radius circle around each hive (Figure 1). For the urban hive this is unsurprisingly Urban (52.3%) and Sub-Urban (27.8%). The rural hive is surrounded by Arable and Horticultural (58.1%) and Improved Grassland (31.7%).
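For illustration, land-use fractions within such a 3-km buffer can be computed along the following lines; the geometries here are toy squares standing in for the actual land-cover data used in the study:

    from shapely.geometry import Point, box

    hive = Point(0, 0)                  # hive location in a projected CRS (metres)
    buffer_3km = hive.buffer(3000)

    # Hypothetical land-cover polygons:
    land_use = {"Urban": box(-3000, -3000, 0, 3000),
                "Sub-Urban": box(0, -3000, 3000, 3000)}

    for name, poly in land_use.items():
        frac = poly.intersection(buffer_3km).area / buffer_3km.area
        print(f"{name}: {100 * frac:.1f}%")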
In agreement with the beekeeper, pollen traps (Abelo Universal Pollen Trap) were attached for one hour a week during fair weather and corbicular pollen samples (hereon 'pellets') were collected. A regular but short collection period was chosen to avoid placing the colonies under stress that might potentially modify their foraging behaviour (Baum et al. 2004). The collecting screen and collection drawer were removed, cleaned and stored indoors between sampling periods. Airborne pollen was sampled by leaving a Tauber trap near each hive for the duration of the week; at the time of hive pollen sampling, the Tauber trap was sampled and cleaned before being placed back in the same spot. In total, 19 weeks were sampled at the urban hive and 10 at the rural hive.
Pellets were sorted and colour was determined digitally following Hornby et al. (2022). The total number of pellets of each colour type was counted and a subset of these (3-5 pellets) were chemically treated to facilitate pollen identification. Chemical treatment on both airborne and pellet samples followed a modified version of the method presented in Jones and Bryant (2014). This involves disaggregation (pellets only) in hot water (9 ml) and 95% Isopropyl Alcohol (1 ml), which is then centrifuged for 3.5 min at 3500 RPM and the supernatant decanted, before acetolysis treatment. Acetolysis treatment began with dehydration in 5 ml of acetic acid, before samples were heated to 90 °C for 3 min in a 9:1 ratio of acetic anhydride and sulfuric acid. Samples were then washed with acetic acid and centrifuged for 3.5 min at 3500 RPM, before being stored in distilled water and 10% copper sulphate solution. Whilst acetolysis treatment is beneficial for the identification of pollen grains, it can be detrimental to thinner walled specimens and fungal spores (Pound et al. 2021;Riding 2021). However, damage to pollen types we are interested in for this study is only observed after treatment periods in excess of 10 min (Jardine et al. 2015). Airborne pollen slides were mounted in dilute PVA (polyvinyl acetate) glue (Riding 2021), whereas pellet samples were analysed by placing one drop on a temporary slide and covering with a cover slip. Pollen counting followed Lau et al. (2018) using Leica DM500 microscopes. A minimum of 500 pollen grains were counted for pellets, and all pollen was quantified in airborne samples. Percentage values were then calculated from the count. Count data are presented in the supplementary information.
Analysis and plotting were conducted in R-Studio software (R Development Core Team 2021). To test the hypothesis that Pinus, Plantago and Poaceae pollen in pellets were caused by high amounts of these pollen types in the atmosphere (contamination), Pearson correlation and Granger causality tests were performed. Pearson correlation shows how two datasets change and correlate together; it offers no insight into cause and effect. The Pearson correlation simply identifies whether high pollen content in the pellets correlates with that in the airborne samples. Whilst the Granger causality test does not provide a true measure of causality (and indications of causality will be presented in italic font to indicate this), it does offer a priori rather than post-hoc assumptions of causality (Dorestani and Aliabadi 2017). Using the Granger causality test allows us to test the hypothesis that high airborne pollen is causing high pollen in the pellets (contamination). The Granger causality test was run using a one-week time-lag.
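The study ran these tests in R; as a hedged illustration, an equivalent analysis in Python (with invented weekly percentage series) could look as follows, using scipy for the Pearson correlation and statsmodels for the Granger test with a one-week lag:

    import numpy as np
    from scipy.stats import pearsonr
    from statsmodels.tsa.stattools import grangercausalitytests

    # Invented weekly Pinus percentages: airborne (Tauber trap) vs. pellets
    airborne = np.array([2.1, 8.4, 25.0, 30.2, 12.5, 4.0, 1.2, 0.5, 0.3, 0.2])
    pellets = np.array([0.0, 0.3, 0.8, 1.0, 0.4, 0.1, 0.0, 0.0, 0.0, 0.0])

    r, p = pearsonr(airborne, pellets)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")

    # Granger test: does the second column (airborne) help predict the first
    # column (pellets) at a one-week lag?
    grangercausalitytests(np.column_stack([pellets, airborne]), maxlag=1)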
Results
Over the six-month period, a total of 2424 pellets were collected and analysed: 1454 came from the urban hive and 970 from the rural hive. Of these, 296 pellets from the urban hive (20.4%) and 172 pellets from the rural hive (17.7%) contained Poaceae pollen. Only one pellet (from the urban hive) contained more than 10% Poaceae pollen. Poaceae pollen is present in the pellet samples for the entire study period and shows peaks in July and September, coincident with peaks in the airborne samples (Figure 2). Pinus pollen was present in 347 pellets (23.9%) from the urban hive and 64 pellets (6.6%) from the rural hive during June to July. These coincide with airborne levels greater than 10% in the urban setting and 25% in the rural setting. In the pellets, Pinus pollen never exceeds 1% of the counted sample. Plantago pollen was present in 52 pellets (3.6%) from the urban hive from June to September, but was not recorded from the rural hive (Figure 2). In one pellet during August it comprises 54% of the assemblage, but in the other samples it rarely exceeds 1%. It is present in the rural airborne samples throughout the study interval, but more sporadically in the urban airborne samples (Figure 2).
Pearson correlation shows no correlation between the airborne samples and the pollen in pellets for Poaceae and Plantago (Table 1). The Granger causality test allows us to reject the hypothesis that high atmospheric pollen causes high amounts of Poaceae pollen in pellets, and the same is true for Plantago (Table 1). For Pinus, Pearson correlation shows a positive correlation, and the Granger causality test suggests causation: high Pinus pollen in the airborne samples is causing Pinus pollen to be present in the pellets. Pinus pollen is only present in the pellets around the time of peak atmospheric amounts (Figure 2).
Discussion
Pollen analysis of two hives in North East England shows that honey bees do use some anemophilous plants ( Figure 2). However, comparison with trapped airborne pollen shows this is more nuanced than recent reviews of the topic have proposed. Our results confirm the use of both Poaceae and Plantago pollen by Apis mellifera (Saunders 2018). Plantago pollen is intentionally collected by honey bees, as demonstrated by one urban pellet in our study where it constituted 54% of the pollen. Honey bees (and other flower-visiting insects) have been shown to sometimes actively forage on Plantago inflorescences (Stelleman 1984;Abrahamczyk et al. 2020) and previous pollen analysis routinely shows Plantago as a source of pollen (e.g. Percival 1947;Baum et al. 2004).
These anemophilous plants were foraged on during the challenging summer period (also referred to as the 'hungry gap'), which is reflected in greater foraging distances and lower sugar content in foraged nectar (Couvillon et al. 2014;Timberlake et al. 2019). This could mean that these plants are being used because more preferential forage is not available. Both occurrences of Poaceae and Plantago pollen being collected are also from the urban hive (Figure 2). It is known that urban habitats can be beneficial for insect pollinators (Baldock 2020). However, hive densities and floral availability can negatively affect pollinator success in urban environments (Ropars et al. 2019;Egerer and Kowarik 2020). How the multi-factor pressures facing honey bees in urban areas result in foraging on anemophilous plants, which may be sub-optimal, is beyond the scope of this study. The presence of Pinus pollen in pellet samples is here shown to co-occur with periods of high Pinus pollen in the atmosphere and is therefore most likely contamination (Table 1; Figure 2). Simple correlation analysis and Granger causality tests both support the idea that Pinus is present due to airborne contamination (Table 1). Given the quantity of Pinus pollen in the atmosphere, it is not surprising that this could be the source of contamination (Kluska et al. 2020;Sicard et al. 2021). Pollen of Pinus has been commonly reported in small percentages in samples from a wide range of bee features and products: rectums, honey, pellets and propolis (Warakomska and Maciejewicz 1992;Coffey and Breen 1997;Dimou and Thrasyvoulou 2009;Pound et al. 2018;Radaeski and Bauermann 2021). It is not always present in pine honey, which is a honeydew-type honey (Tsigouri et al. 2004). Even when stands of Pinus are proximal to hive locations and have abundant pollen, they are not used (Percival, 1947). In controlled experiments, Pernal and Currie (2000) showed that worker honey bees do not readily consume pollen of Pinus. They also showed it was little better than no pollen for hypopharyngeal gland and ovary development (Pernal and Currie 2000). In a recent review on pollinator use of anemophilous plants, four studies were cited showing indirect evidence for Apis mellifera using species of Pinus (Saunders 2018). Three of these studies show the presence of Pinus pollen in individual samples at quantities <1% (Pearson and Braiden 1990;Aronne et al. 2012;Girard et al. 2012) and the other study, on propolis, has a maximum Pinus pollen content of 6.5% (Warakomska and Maciejewicz 1992). Based on our comparison of airborne pollen and pollen presence in pellets, we suggest that <1% does not represent active collection of Pinus pollen. This is in line with the suggestions of Brodschneider et al. (2018) for contamination-derived pollen in a pellet.
Considering how pollen gets into a pellet, previous workers have shown that multiple plants can be foraged on for one pellet and that any pollen present in a value >10% should be considered actively collected (Brodschneider et al. 2018). For those present in smaller amounts a range of contamination pathways were previously proposed: previous pollinator activity on flowers, residual pollen left on hairs from previous foraging trips, bee to bee contact, or contact with another contaminated surface (Betts 1935;Brodschneider et al. 2018). To this list our results add airborne contamination, either by direct contact in flight or through contaminated surfaces. Although not part of the current study, it is also possible that very small amounts of entomophilous pollen (those in a pellet in <1%) could come from the regurgitated nectar used during pellet formation (Matherne et al., 2021).
Given that anemophilous pollen types have been associated with honey-induced anaphylaxis (Di Costanzo et al. 2021), understanding the incorporation of these pollen types into hives and bee products is important, especially as experimental and observational data have shown that pollen production increases with atmospheric CO2 concentration (LaDeau and Clark 2006; Anderegg et al. 2021). Pollen of Pinaceae has been increasing annually in the atmosphere of Europe, whilst there may be a slight decline in the amount of Poaceae pollen (Ziello et al. 2012). This creates a scenario under 21st-century climate change where the contamination of hives and bee products by non-foraged anemophilous plants will increase.
Conclusions
Pinus pollen was only found in urban and rural honey bee pellets during periods of high atmospheric concentration. Conversely, when Pinus pollen was not abundant in the atmosphere it did not contaminate the pellets. In contrast, Poaceae and Plantago pollen were present in single pellets in values indicative of active collection by Apis mellifera in urban areas during the 'challenging summer period'. Low percentages (<1%) of anemophilous pollen in bee products are the result of airborne contamination. Statistically, we show that during, and following, periods of high atmospheric pollen content, bee products become contaminated with airborne pollen. It is therefore important to consider a threshold value for assuming actively collected pollen, rather than simply assuming that the presence of a taxon indicates that it was foraged on.

Acknowledgements
Gate) hotel and the Ministry of Defence are thanked for facilitating access to the beehives. We thank four anonymous reviewers for comments that have greatly improved the manuscript. Dr Encarni Montoya is thanked for their support as editor.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work was supported by British Beekeeper Association.
Notes on contributors
MATTHEW POUND is an Associate Professor in physical geography at Northumbria University. His research uses palynology to answer questions relating to environmental and climatic change. SARAH HORNBY is currently studying an MRes in Polar and Alpine Change at the University of Sheffield. Her interests lie within contemporary climate change in alpine regions.
JONTY BENN is a recent Northumbria University graduate, where he used palynology in his dissertation project. He is currently working in occupational hygiene within the industrial sector.
RINKE VINKENOOG is a Senior Lecturer in applied biology at Northumbria University. He studies pollination ecology in the Northeast of England and occasionally further afield.
SHANNON GOLDBERG is a Masters graduate from Newcastle University in Ecology and Wildlife Conservation. She researches plant-pollinator interactions to aid projects in boosting biodiversity in urban environments.
BARBARA KEATING is an artist and beekeeper based in North East England.
FLORA WOOLLARD is a BSc Geography graduate from Northumbria University. For her dissertation she investigated the difference in urban and rural honeybee foraging. | 2022-12-14T16:14:34.985Z | 2022-12-11T00:00:00.000 | {
"year": 2023,
"sha1": "64d504a86aea7aa1f2be776b5b871e88539faefa",
"oa_license": "CCBY",
"oa_url": "https://nrl.northumbria.ac.uk/id/eprint/50802/1/Determining%20if%20honey%20bees%20Apis%20mellifera%20collect%20pollen%20from%20anemophilous%20plants%20in%20the%20UK.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "f9cfc99fa7b0c43929b1b11ffacf6443fd35587e",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
257791044 | pes2o/s2orc | v3-fos-license | Histological and Histomorphometric Evaluation of Implanted Photodynamic Active Biomaterials for Periodontal Bone Regeneration in an Animal Study
Recently, our group developed two different polymeric biomaterials with photodynamic antimicrobial surface activity for periodontal bone regeneration. The aim of the present study was to analyze the biocompatibility and osseointegration of these materials in vivo. Two biomaterials based on urethane dimethacrylate (BioM1) and tri-armed oligoester-urethane methacrylate (BioM2) that additionally contained β-tricalcium phosphate and the photosensitizer mTHPC (meso-tetra(hydroxyphenyl)chlorin) were implanted in non-critical size bone defects in the femur (n = 16) and tibia (n = 8) of eight female domestic sheep. Bone specimens were harvested and histomorphometrically analyzed after 12 months. BioM1 degraded to a lower extent, resulting in a mean remnant area of 17.4 mm², while 12.2 mm² was estimated for BioM2 (p = 0.007). For BioM1, a total percentage of newly formed bone of 30.3% was found, which was significantly higher compared to BioM2 (8.4%, p < 0.001). Furthermore, BioM1 showed significantly lower soft tissue formation (3.3%) as compared to BioM2 (29.5%). Additionally, a bone-to-biomaterial ratio of 81.9% was detected for BioM1, while 8.5% was recorded for BioM2. Implantation of BioM2 caused accumulation of inflammatory cells and led to fibrous encapsulation. BioM1 (photosensitizer-armed urethane dimethacrylate) showed favorable regenerative characteristics and can be recommended for further studies.
Introduction
Periodontitis is an infectious and inflammatory oral disease characterized by destruction of the tooth-supporting tissues [1][2][3]. Clinically, periodontitis presents with signs of inflammation such as bleeding on probing, formation of periodontal pockets, and, in later stages, increased tooth mobility.
A successful periodontitis treatment is characterized by a reduction in periodontal inflammation signs, a decrease in periodontal pocket depths and in long-term suppression of periodontopathogenic bacterial species.
After initial anti-infectious therapy, periodontal pockets might still persist which is often associated with the presence of deep intrabony defects [4].
Unfortunately, none of these materials currently meet all necessary clinical requirements such as providing sufficient mechanical stability paired with efficient osteo-inductive and -conductive properties as well as mechanisms to fight local infections of the implant site.
In this regard, our group has already published data showing that polymers of poly(vinyl butyral-co-vinyl alcohol-co-vinyl acetate), urethane methacrylate and functionalized oligolactones have promising characteristics [16]. Due to their highly adaptable nature, synthetic polymers meet the requirements of many biomedical approaches. This includes mechanisms for regulating mechanical properties, porosity, biodegradation, surface topography, and wettability [17,18]. Furthermore, synthetic polymers provide the necessary mechanical strength while being replaced by newly formed bone [19]. The opportunity to include antimicrobial agents into the polymeric matrix is a further advantage [20].
As there is also a high clinical need for new and innovative antibiotic-free materials, the incorporation of a so-called photosensitizer is one favorable approach [21][22][23]. As already proven by various authors, illumination of photosensitizer-armed materials by light of an appropriate wavelength results in sufficient suppression of local microbes [24][25][26]. To date, photodynamically active materials have mainly been investigated for their efficiency in treating infected wounds or tested for their practicability in tumor therapy [27][28][29][30][31].
To the best of our knowledge, there are currently no studies available that focus on photosensitizer-armed materials for bone regeneration. To fill this gap, our group recently introduced two different photosensitizer-doted biomaterials based on urethane dimethacrylate (BioM1) and tri-armed oligoester-urethane methacrylate (BioM2) [32]. In order to ensure a sufficient photodynamic antimicrobial effect, the photosensitizer meso-tetra(hydroxyphenyl)chlorin (mTHPC) was included into the matrix of both polymers. As already proven, mTHPC has strong photodynamic activity and is capable of suppressing oral pathogens to significantly high extents [32][33][34].
Up to now, the osseointegration and biocompatibility of BioM1 and BioM2 have not been observed in vivo. Therefore, the present animal study aimed to investigate the performance of both materials in non-critical bone defects after implantation for 12 months. Both materials were analyzed by histomorphometric and histological methods in order to determine their osseoinductive and bone-integrative characteristics. Furthermore, newly formed tissue and the adjacent bone were investigated for signs of adverse effects and inflammatory responses.
Results
In the present animal study, two different polymeric biomaterials armed with mTHPC were implanted in the femur and tibia of sheep. After 12 months of implantation bone samples were collected and analyzed by histological and histomorphometric methods.
After 12 months, BioM1 remnants showed a mean square area of 17.4 mm², while 12.2 mm² was estimated for BioM2. At baseline, an initial bone defect area of 19.6 mm² was calculated. Remnants of BioM2 were significantly smaller in the tibia (p < 0.001) and in the femur (p = 0.007) as compared to BioM1 (Figure 1a).
Total bone was detected in the ROI at 30.3% in femur defects filled with BioM1, whereas only 8.4% was found in the case of BioM2. These values were significantly different (p < 0.001). In the tibia defect sites, a total bone value of 28.6% was estimated in the ROI for holes filled with BioM1 and 20.4% for those obturated with BioM2. The results are shown in Figure 1b. In addition to the bone volume, the percentage of fibrous soft tissue in the ROI was also determined. BioM1 showed a soft tissue value of 3.3% in the femur samples, while in the case of BioM2 a higher soft tissue value was detected (29.5%, p < 0.001). In the case of the tibia defects, soft tissue was estimated at 3.8% for BioM1 and 15.8% for BioM2 (p = 0.014) (Figure 1c).
In the present study, the bone-to-biomaterial contact was also evaluated at the bone interface. It was found that BioM1 showed a bone-to-biomaterial ratio of 81.9% in the femur and 56.2% in the tibia (p = 0.005). In contrast, BioM2 showed a bone-to-biomaterial contact of only 8.5% in the femur and 16.4% in the tibia. The results for both biomaterials are displayed in Figure 1d.
As observed in the decalcified histological bone sections, BioM2 was encapsulated by soft tissue to a much higher extent as compared to BioM1. Additionally, an infiltration with fat and giant cells was only observed in defects filled with BioM2.
In the case of BioM1, a pronounced formation of trabecular bone was observed, whereas only minor bone formation occurred in defects filled with BioM2. Representative decalcified and stained histological sections of BioM1 and BioM2 are shown in Figure 2. Defects filled with BioM1 showed no signs of inflammation or adverse effects. The interface of implanted BioM1 revealed homogeneous cancellous bone in close contact with the biomaterial surfaces.
On the other hand, remnants of BioM2 were enclosed by a sheath of fibrous tissue. Furthermore, osteolytic zones filled with fibrous tissue and fat cells were also discovered. Tissue adjacent to BioM2 was affected by a strong infiltration of lymphocytes and giant cells (Figure 3). The results of the assigned four-graded ROI evaluation score are presented in detail in Figure 4. A score of grade 1 (newly formed bone, totally mineralized) was detected most frequently in defects filled with BioM1 (p = 0.02). In contrast, defects filled with BioM2 were primarily assigned a score of grade 3 (fibrous soft tissue, uncalcified) (p = 0.004). In summary, bone defects filled with BioM1 showed high amounts of mineralized bone in the ROI (96.18%), while fibrous tissue was detected in only 3.82%. In the case of BioM2, bone was present at only 18.3%, while the amount of uncalcified fibrous tissue (grade 3) was clearly increased.
Discussion
In the present study, osteointegration and biocompatibility of two polymeric biomaterials based on urethane dimethacrylate (BioM1) and a tri-armed oligoester-urethane methacrylate (BioM2) were investigated in an ovine bone model. Both biomaterials were implanted in non-critical size defects in the femur and tibia of sheep.
Osteointegration and biocompatibility was determined by histomorphometric analysis after an implantation period of 12 months. In detail, the remaining biomaterial size, the percentage of bone and soft tissue in the ROI as well as the bone-to-biomaterial contact were evaluated.
As shown by the results, BioM1 was sufficiently osseointegrated, with the highest amount of mineralized tissue in the ROI. Results from the four-graded classification scale showed bone formation of 96.18%. In contrast, implantation of BioM2 (oligoester-urethane methacrylate based) caused chronic inflammation and fibrous encapsulation. In the case of BioM2, bone in the ROI was detected at only 18.3%.
Similar results have been reported for scaffolds fabricated from polymethyl methacrylate (PMMA), which were also not osseointegrated after implantation for 12 months. As with BioM2, the implanted material was encapsulated by a sheath of fibrous tissue. In comparison, mineralized tissue was found for titanium scaffolds at 39.1%, followed by implants manufactured from poly(D,L-lactic acid) (31.5%) and porous ultra-high molecular weight polyethylene (6.4 to 10.1%) [35].
The performance of methacrylate-based grafting materials has also been observed by other authors. Recently, it was reported that bioscaffolds composed of Sr-containing mesoporous bioactive glass nanoparticles embedded in a gelatin methacrylate matrix present enhanced osteogenic, angiogenic, and immunomodulatory properties [36]. Furthermore, a novel graphene oxide-modified expandable polymethyl methacrylate-based bone cement revealed improved physiochemical properties with sufficient cytocompatible and osteogenic characteristics [37]. Methacrylated silk was also recently tested to verify its ability to support osteogenesis. It was shown that scaffolds from methacrylated silk are biocompatible and present reliable osteoconductive features [38]. Moreover, the performance of a 3D printed gelatin methacrylate hydrogel has formerly been investigated after implantation in rat condyle defects. Optimal tissue integration was observed via histology, with no signs of fibrotic encapsulation or inhibited bone formation [39].
Using sheep for biomaterial testing is common, especially in orthopedic research, because their parameters are similar to those of humans, such as the anatomic structure of bone and joints, body weight, mineral bone metabolism and responses to mechanical loads [40,41]. The applied model was first introduced in 2008 and revised in 2014 [42,43]. In the present investigation a modified version was introduced which allows serial sampling in the same animal with similar environmental conditions. If necessary, all stages in bone healing can easily be addressed. The applied surgical procedure was also well tolerated by the experimental animals. Although sheep cancellous bone models are now well established for the assessment of new bone substitutes, the limited availability of cancellous bone makes it difficult to find multiple comparable sites within the same animal [42]. Therefore, the described ovine model was chosen for testing biocompatibility and osseointegration of BioM1 and BioM2 in the present investigation.
In addition to large animal studies, in silico methods are of increasing interest. Computational simulation approaches for investigating the mechano-biological principles behind scaffold-guided bone regeneration and the influence of scaffold design on the regeneration process have already been described [44,45]. Especially for the treatment of large bone defects with manufactured bone grafts and in joint replacement surgery, in silico analysis methods show great predictive potential [46,47].
However, in the present investigation, both materials degraded to different extents. At the end of the study period, a mean square area of 17.4 mm² was detected for remaining BioM1 and 12.2 mm² for BioM2, which was statistically significant with respect to the defect size estimated at baseline (19.6 mm²).
These results are in line with findings previously made by our group. As shown, BioM2 degrades much faster than BioM1. During immersion in distilled water for 28 d, BioM2 lost 67% of its weight, while BioM1 degraded by only 4% [32].
The inert nature of BioM1 can be attributed to its hydrophobic chemical structure. Unlike BioM2, which is of higher hydrophilicity, BioM1 withstands hydrolytic cleavage to a much greater extent [48].
In the present study, the bone-to-biomaterial contact ratio was analyzed as well. As shown by the results, BioM1 presented a bone-to-biomaterial contact of 81.9%. In contrast, a contact rate of only 8.5% was observed for BioM2.
Overall, the bone-to-biomaterial contact of BioM2 can be considered rather low. In a similar study, osseointegration of titanium and polyetheretherketone (PEEK) implants in the tibia and femur of sheep were observed. The results revealed a percentage of the bone-to-implant contact by 59.3% for titanium and 11.5% for PEEK [49].
In the present investigation, BioM2 was affected by fibrous tissue encapsulation, while BioM1 showed close contact with the surrounding bone. Results similar to those for BioM1 were reported for implants manufactured from hydroxyapatite, where a bone-to-implant contact of 74% was reported [50]. In this context, BioM1 showed a mean bone-to-biomaterial ratio of 81.9% in the femur and 56.2% in the tibia.
In addition, no inflammatory reactions or fibrous tissue formation were observed in the ROI of BioM1. In contrast, the implantation of BioM2 resulted in chronic inflammation. The inflammatory response associated with BioM2 can probably be attributed to cytotoxic byproducts originating from the degradation process. As has been shown, hydrolytic cleavage of polyester urethane acrylates causes the emergence of various acidic substances such as poly(methacrylic acid), ethylene glycol, diethylene glycol, lactic acid and glycolic acid, which leads to a local drop in the tissue pH [51].
In this regard, it is known that the appearance of acidic degradation products causes tissue inflammation and an impaired healing [52]. In order to increase biocompatibility and to counteract the cytotoxic effects of the acidic degradation products, calcium phosphate particles are often additionally applied to the polymeric matrix [17,52,53].
In the present study, both polymers were additionally substituted with β-tricalcium phosphate nanoparticles to increase the porosity of the biomaterial body and to improve osseointegration [32,54]. In this context, it has been observed that tortuosity in particular has a significant effect upon a scaffold's permeability and shear stress values [55]. Morphologic parameters such as porosity, specific surface area, thickness, and tortuosity are important and hence need to be determined for BioM1 and BioM2.
In the case of BioM2, giant cells and osteolytic bone defect zones were discovered in the adjacent tissue. The formation of foreign body giant cells is in general a result of fused macrophages that have faced a frustrated process of phagocytosis [56][57][58]. The presence of foreign body giant cells, osteolytic zones and signs of fibrous encapsulation indicates that BioM2 is of rather low biocompatibility.
As a result of the inflammation process, BioM2 also showed a lower bone-to-biomaterial contact rate as compared to BioM1. In detail, BioM2 featured a bone contact of 8.5% in the femur and 16.4% in the tibia, while implantation of BioM1 resulted in a bone contact of 81.9% in the femur and 56.2% in the tibia. After 12 months of implantation, it was recognized that BioM2 was almost entirely enclosed by a sheath of fibrous tissue, while in the case of BioM1 no signs of adverse effects were observed.
The formation of a fibrotic capsule can be attributed to a variety of pro-fibrotic growth factors such as PDGF, VEGF, and TGF-β, which are secreted by macrophages and also by several other immune cells. These factors cause activation of fibroblasts and endothelial cells, which start to deposit collagen and other extracellular matrix proteins on the surface of the grafted material. The deposited matrix subsequently matures into a peripheral fibrous capsule, which causes mechanical impairment and insufficient interaction of the biomaterial with the adjacent tissue [59].
Both biomaterials also contained the photosensitizer mTHPC, which enables a strong antibacterial surface effect upon illumination with light at 652 nm. As shown by various authors, antimicrobial photodynamic therapy (aPDT) is efficient in suppressing different oral pathogenic bacterial species to a significant extent [33,34,[60][61][62][63]]. aPDT is also considered an alternative to the systemic treatment of biofilm-related infectious diseases with antibiotics [64][65][66]. Due to the incorporation of mTHPC into the biomaterial matrix, singlet oxygen and other reactive oxygen species (ROS) are produced upon light exposure, causing destruction of adherent bacterial cells. Investigations by our group have already shown that illumination of the mTHPC-doted biomaterials with red laser light (652 nm) caused complete inhibition of Porphyromonas gingivalis and led to a significant decrease in Enterococcus faecalis [32]. The photodynamic antimicrobial activity of both implanted materials was not examined in the present investigation, which limits the overall merit. It still needs to be determined whether the photodynamic activity of both biomaterials is also efficient in vivo. A further limitation might be the fact that no additional controls were included; therefore, the bone regenerative capacity is not comparable to already established grafting materials. Further, the number of tibial bone defects might be increased in order to obtain an even distribution of samples. Information on the morphology of BioM1 and BioM2 is still limited. Parameters such as porosity, specific surface area, thickness, and tortuosity are important and have yet to be investigated in detail.
Up to now, various photodynamically active materials are already under investigation [67]. However, examinations of their efficiency in vivo, especially in the case of periodontal lesions, are still needed.
Characterization of the Biomaterials
In the present animal study, two light-curable biomaterials were applied: BioM1, based on urethane dimethacrylate, and BioM2, based on tri-armed oligoester-urethane methacrylate. Both investigated biomaterials additionally contained β-tricalcium phosphate microparticles loaded with 20 wt% of the photosensitizer mTHPC. All chemicals were obtained from Sigma-Aldrich Chemie GmbH, Taufkirchen, Germany. The photosensitizer mTHPC was kindly provided by biolitec research GmbH, Jena, Germany. The biomechanical and antimicrobial photodynamic properties of both materials were evaluated by our group in a previous examination [32]. Structural formulas of the applied polymers (urethane dimethacrylate, BioM1; tri-armed oligoester-urethane methacrylate, BioM2) are presented below (Figures 5 and 6).
Surgical Procedure and Biomaterial Application
All experiments were conducted in accordance with the German law on animal protection and welfare. The investigation was authorized by the Thuringia Regional Office for Food Safety and Consumer Protection (protocol code: 02-036/10; date of approval: 14 October 2010).
Eight female domestic sheep (Ovis gmelini aries) obtained from a local breeder with a mean age of 12 months were used in this prospective study. Prior to surgery, all animals were acclimated for 2 weeks at the Central Animal Facility and Service Department, University Hospitals Jena, Germany. The sheep were assigned into two groups with four animals each. In the first group biomaterial 1 (BioM1) and in the second group biomaterial 2 (BioM2) was implanted.
For implantation, the femoral (distal) and tibial (proximal) epimetaphyseal region of the right hind limb was chosen. Surgery was performed under general anesthesia. The animals were placed in right-side recumbency and the skin of the surgical site was disinfected with iodine (Braunoderm®, B.Braun AG, Melsungen, Germany). At first, an approximately 10 cm long incision was made at the medial side of the distal femur epiphysis, 1 cm proximal of the knee joint capsule, longitudinally and parallel to the bone axis. The cortical bone was reached through incision of the local muscles and by dissection of the periosteum. Two 5 × 6 mm cylindrical holes were drilled in the femoral epiphysis using a water-cooled trephine. A minimal distance of 20 mm was kept between the drilled holes to reduce the risk of fracture and to ensure proper healing and harvesting of the bone specimens at the end of the study.
The defects were filled with either BioM1 or BioM2. Prior to insertion of the biomaterials, the holes were dried using sterile cotton balls. When relative dryness was reached, the gel-like biomaterials were quickly injected in 2 mm thick layers and instantly photopolymerized for 40 s each using a calibrated dental light-curing unit (Bluephase, 830 mW/cm², Ivoclar-Vivadent, Ellwangen, Germany). Polymerization is shown in Figure 7a. After the bone defects were completely filled, the surface of each polymerized biomaterial was wiped once with a 70% ethanolic solution (Figure 7b). The position of the filled defect sites was marked by insertion of a 4 mm long titanium pin (Geistlich Biomaterials, Baden-Baden, Germany). The position of the pin in relation to the defect sites was transferred to a transparent plastic foil which was used for relocation after euthanasia. Subsequently, the periosteum was closed and the muscle fascia, subcutaneous tissue and skin were sutured with an absorbable thread. Afterwards, a second approximately 5 cm long incision was made at the medial side of the proximal tibial epiphysis, 1 cm distal of the knee joint capsule, longitudinally and parallel to the bone axis. The cortical bone was exposed as described above and another defect of identical dimensions was prepared and obturated with the identical biomaterial as already implanted into the femur.
After marking the location of the implant sites using a titanium pin and plastic foil, a suture was applied. Finally, both surgical sites were dressed with an aluminum-based wound spray, and medio-lateral as well as dorso-plantar X-ray images were taken as controls and for documentation of the healing progress. The location and total number of filled defect sites are summarized in Table 1. Antimicrobial prophylaxis and post-surgical pain control were applied. Animals were euthanized after 12 months of biomaterial implantation using a standardized protocol. Table 1. Location and total number of bone defects filled with BioM1 or BioM2.
Sample Preparation and Histological Sectioning
After euthanasia, collected bone specimens were fixed in 5% formaldehyde solution for 5 d and subsequently cut into two halves under constant water cooling using the LEITZ 1600 microtome (Leica Microsystems GmbH, Bensheim, Germany). Each cortical half of the bone sample was subjected to dehydration in solutions with increasing content of ethanol (50%, 70%, 80%, 2 × 96%, 3 × 100% ethanol) and afterwards embedded in Technovit 9100® (Kulzer GmbH Kulzer Technik, Wehrheim, Germany). The embedded specimens were then sectioned using the LEITZ 1600 microtome (Leica Microsystems GmbH, Bensheim, Germany). Subsequently, each sample was ground to a thickness of 10-20 µm using abrasive papers of different granulation from 300 to 4000 grit and subjected to Masson-Goldner staining.
The second half of the divided bone sample was decalcified in 25% EDTA (pH 7.4) at 37 °C for 4 to 10 weeks. The decalcifying procedure was completed when the specimen could easily be penetrated by a fine needle.
After the decalcification process, samples were dehydrated in an alcoholic series and embedded into paraffin. Each paraffin block was then sectioned and the obtained 5 µm thick samples stained with hematoxylin eosin (HE). Histological allocation of the collected bone specimens can also be observed in Figure 8.
Histomorphometry
Undecalcified and stained histologic sections (n = 30) were observed using a Jenaval microscope (Carl Zeiss MicroImaging GmbH, Jena, Germany) at 10× to 125× magnification. Microscopic images were documented using the software AxioCam® and AxioVision® (release 4.6.3, Carl Zeiss MicroImaging GmbH, Jena, Germany). Data were analyzed using the freeware ImageJ® (1.50i, Wayne Rasband, National Institutes of Health, Bethesda, MD, USA). An ROI (region of interest) 500 µm in width around the implanted biomaterial was defined, and the percentages of bone and soft tissue were determined (Figure 9). In detail, each histological section was analyzed with regard to the square area of the biomaterial remnants, the percentage of bone and soft tissue in the ROI, and the biomaterial-to-bone contact ratio.
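As a hedged sketch of this kind of ROI quantification (not the authors' ImageJ workflow; the integer label codes and masks are assumptions for illustration), the percentages and the bone-to-biomaterial contact could be computed from a labelled segmentation mask as follows:

    import numpy as np
    from scipy.ndimage import binary_dilation

    BONE, SOFT, BIOMATERIAL = 1, 2, 3   # assumed label codes in the mask

    def roi_percentages(labels, roi_mask):
        # Percent bone and soft tissue inside the 500-um ROI band.
        n = roi_mask.sum()
        return {"bone_pct": 100.0 * ((labels == BONE) & roi_mask).sum() / n,
                "soft_pct": 100.0 * ((labels == SOFT) & roi_mask).sum() / n}

    def bone_contact_pct(labels):
        # Share of the one-pixel shell around the implant that touches bone.
        implant = labels == BIOMATERIAL
        shell = binary_dilation(implant) & ~implant
        return 100.0 * (labels[shell] == BONE).sum() / shell.sum()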
Decalcified and HE-stained sections were observed using the Jenaval microscope (Carl Zeiss MicroImaging GmbH, Jena, Germany) at 1× to 250× magnification. From each bone specimen, five different sections were chosen and evaluated by applying an ROI of 250 µm (Figure 10). The ROI was examined by applying a four-graded scoring system (Table 2). The score was adapted and modified [68] and comprises elements from the DIN EN ISO 10993-6:2017 guidelines [69]. Examples for each score (1-4) are presented in Figure 11. A workflow of the study is presented in Figure 12. Table 2. Four-graded histological evaluation score adapted and modified from DIN EN ISO 10993-6 [69].
1 - Completely mineralized bone with the presence of osteoblasts and/or osteocytes
2 - Deposited connective tissue within the bone matrix
3 - Connective tissue without signs of bone in the ROI
4 - Additional appearance of univacuolar fat cells
Conclusions
The results of the present study revealed that BioM1 (photosensitizer-armed urethane dimethacrylate) was bone-integrated to a significantly higher extent compared to BioM2 (photosensitizer-armed oligoester-urethane methacrylate). In the case of BioM1, high-quality bone was formed in the ROI without any signs of adverse effects. Due to the slow degradation of BioM1, structural stability is provided for a longer period of time. In contrast, implantation of BioM2 resulted in chronic inflammation and increased fibrous tissue formation at the bone-to-biomaterial interface.
It can be concluded that BioM1 has promising regenerative and biocompatible characteristics. The material can therefore be recommended for further studies that focus on bone regeneration in regions where additional structural support as well as stabilization is needed. It still needs to be investigated whether such materials are capable of sufficiently treating intrabony periodontal lesions. Moreover, detailed information on the antibacterial efficiency of photosensitizer-armed grafting materials has yet to be obtained in vivo.

Institutional Review Board Statement: The animal study protocol was approved by the Thuringia Regional Office for Food Safety and Consumer Protection (protocol code: 02-036/10; date of approval: 14 October 2010).
Informed Consent Statement: Not applicable.
Data Availability Statement: Available upon request from the corresponding author. | 2023-03-29T15:22:48.671Z | 2023-03-24T00:00:00.000 | {
"year": 2023,
"sha1": "5d9358d27de27619167a1e1fc2de43d945cd9ca8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/7/6200/pdf?version=1679899099",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "88afde5b791c501ba84b5eaccc3f056e53496d2d",
"s2fieldsofstudy": [
"Biology",
"Materials Science",
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10675271 | pes2o/s2orc | v3-fos-license | Direct regulation of Arp2/3 complex activity and function by the actin binding protein coronin
Mechanisms for activating the actin-related protein 2/3 (Arp2/3) complex have been the focus of many recent studies. Here, we identify a novel mode of Arp2/3 complex regulation mediated by the highly conserved actin binding protein coronin. Yeast coronin (Crn1) physically associates with the Arp2/3 complex and inhibits WA- and Abp1-activated actin nucleation in vitro. The inhibition occurs specifically in the absence of preformed actin filaments, suggesting that Crn1 may restrict Arp2/3 complex activity to the sides of filaments. The inhibitory activity of Crn1 resides in its coiled coil domain. Localization of Crn1 to actin patches in vivo and association of Crn1 with the Arp2/3 complex also require its coiled coil domain. Genetic studies provide in vivo evidence for these interactions and activities. Overexpression of CRN1 causes growth arrest and redistribution of Arp2 and Crn1p into aberrant actin loops. These defects are suppressed by deletion of the Crn1 coiled coil domain and by arc35-26, an allele of the p35 subunit of the Arp2/3 complex. Further in vivo evidence that coronin regulates the Arp2/3 complex comes from the observation that crn1 and arp2 mutants display an allele-specific synthetic interaction. This work identifies a new form of regulation of the Arp2/3 complex and an important cellular function for coronin.
Introduction
Many cellular processes, including cell locomotion, vesicle and organelle transport, endocytosis, cytokinesis, and polarized cell growth, require dynamic remodeling of the actin cytoskeleton. All of these processes involve spatially controlled assembly and reorganization of actin networks in response to cellular cues. However, the exact mechanisms regulating these spatial and temporal changes are not well understood. In recent years, the actin-related protein 2/3 (Arp2/3) complex has emerged as a central effector of actin assembly that receives multiple signal inputs (for review see Higgs and Pollard, 2001).
The Arp2/3 complex is composed of seven evolutionarily conserved subunits: two actin-related proteins (Arp2 and Arp3) and five other subunits, which in yeast are called Arc40, Arc35, Arc18, Arc19, and Arc15. In all organisms examined, the Arp2/3 complex localizes to sites of dynamic actin assembly. In yeast, the Arp2/3 complex localizes to cortical actin patches, highly motile filamentous actin structures (for reviews see Pruyne and Bretscher, 2000; ...). Mutations in different subunits of the yeast Arp2/3 complex disrupt actin organization, actin patch motility, and actin-dependent processes such as endocytosis, cell polarity development, and organelle inheritance (for review see ...). The Arp2/3 complex has two established and apparently coupled activities, actin nucleation and actin filament branching (for reviews see Cooper et al., 2001;Borths and Welch, 2002;Kreishman-Deitrick and Rosen, 2002). The Arp2/3 complex can bind to the side of an existing (mother) filament and nucleate the formation of a new (daughter) filament at a 70° angle, leading to the formation of branched filament networks (Mullins et al., 1998). Alone, the Arp2/3 complex has relatively weak actin nucleation activity. Activation is achieved by two complementary mechanisms: (1) association of the complex with the side of an actin filament and (2) interactions with an activator protein, such as SCAR/WASp, myosin I, Abp1, cortactin, and Pan1 (for reviews see Olazabal and Machesky, 2001;Schafer, 2002).
Coronin is a conserved component of the actin cytoskeleton found in all eukaryotes examined from yeast to mammals, where it localizes to sites of dynamic actin assembly (for review see de Hostos, 1999). In budding yeast, coronin-null mutants have no overt phenotype, but overexpression of the coronin gene (CRN1) is lethal and disrupts actin organization. In addition, genetic interactions with act1-159 and cof1-22 suggest that Crn1 regulates some aspect of actin assembly and/or turnover (Goode et al., 1999). In Dictyostelium discoideum, coronin mutants display defects in cell migration, cytokinesis, phagocytosis, and fluid phase endocytosis (de Hostos et al., 1993). In cultured Xenopus cells, overexpression of a coronin fragment causes severe defects in cell migration and spreading (Mishima and Nishida, 1999).
The biochemical properties of coronin support the notion that it regulates actin assembly and organization. In vitro, purified coronin binds specifically to filamentous actin, bundles actin filaments, and weakly promotes actin assembly (Goode et al., 1999;Asano et al., 2001). The amino terminus of coronin contains five to six β-propeller-like WD repeats that form the actin binding domain (Goode et al., 1999). The carboxy terminus is comprised of a "unique" region, which is highly variable among species, and a short conserved coiled coil domain (residues 603-651 in yeast coronin). The coiled coil domain is required for coronin dimerization and actin filament bundling in vitro (Goode et al., 1999;Asano et al., 2001). In Xenopus cells, deletion of the coiled coil domain causes mislocalization of coronin, suggesting that dimerization, or other interactions of the coiled coil domain, is necessary for its proper localization and function (Mishima and Nishida, 1999). However, the exact function of coronin within the actin cytoskeleton has remained unclear.
Here, we identify a molecular function for yeast coronin (Crn1). We provide multiple lines of biochemical and genetic evidence that Crn1 associates with and regulates the Arp2/3 complex through an interaction of its coiled coil domain. These studies reveal an important cellular function for Crn1 and novel aspects of Arp2/3 complex regulation.
Results
Crn1 physically associates with the Arp2/3 complex
To better understand the cellular function of yeast coronin (Crn1), we sought to identify Crn1-interacting partners. Wild-type cell extracts were fractionated on sucrose gradients by velocity sedimentation and the migration patterns of actin, Crn1, and numerous other actin-associated proteins were determined by immunoblotting. Fig. 1 A shows the data for actin, Crn1, and Arp2p (a component of the Arp2/3 complex). The vast majority of actin (43 kD) migrated to a position consistent with actin monomers. Crn1 migration had two distinct peaks, with ~40% of the Crn1 peaking at a position consistent with Crn1 monomers (72 kD) and ~60% of the Crn1 peaking at a position suggesting a complex of 250-300 kD. Immunoblotting with antibodies against numerous actin-associated proteins (Aip1, Cof1, Cap2, Pfy1, Sac6, Sla2, Srv2, Tpm1, and Twf1; unpublished data) revealed that only one, Arp2, comigrated with Crn1 in the 250-300-kD range. This raised the possibility that Crn1 and the Arp2/3 complex physically associate.
Next, we tested the ability of Crn1 to coimmunoprecipitate with the Arp2/3 complex from cell lysates. We integrated an epitope tag (3xHA) at the carboxy terminus of ARP2. The tagged protein was the only source of Arp2p in cells and fully complemented growth at 16-37 °C with no visible defects in actin organization (unpublished data). To control for potential nonspecific interactions from coprecipitation of actin filaments with the Arp2/3 complex, we included 40 µM latrunculin A in the immunoprecipitation reactions. Immunoblotting confirmed the absence of actin in the pellets (unpublished data). As shown in Fig. 1 B, over half of the Crn1 in cells coimmunoprecipitated with the Arp2/3 complex, similar to the fraction of Crn1 that comigrated with the Arp2/3 complex in sucrose gradients (Fig. 1 A). Thus, by two independent approaches (velocity sedimentation and coimmunoprecipitation), Crn1 was found to associate with the Arp2/3 complex.
As an additional test of the interaction, we compared Arp2 migration in extracts from wild-type and crn1-null cells fractionated on sucrose gradients. Arp2 migration exhibited a substantial shift and narrowing of its peak in the crn1-null lysate, consistent with a loss of mass from a large subset of the Arp2/3 complex in cells (Fig. 1 C). We have determined that Crn1 and the Arp2/3 complex have a similar abundance in yeast (Arp2/3 complex is slightly more abundant than Crn1; unpublished data). This, combined with the data in Fig. 1 C, indicates that >25% of the cellular pool of the Arp2/3 complex is stably associated with Crn1.
[Figure 1 legend] (A) Comigration of Crn1 and Arp2 by sedimentation velocity. Yeast cell lysates were fractionated on sucrose gradients by overnight high-speed centrifugation, and then fractions were collected. Samples of each fraction were blotted and probed with antibodies for yeast actin, Crn1, and Arp2. Size standards were fractionated in parallel: BSA (60 kD), catalase (240 kD), and thyroglobulin (760 kD). (B) Coimmunoprecipitation of Crn1 with Arp2-HA. Yeast cell lysates expressing a carboxy-terminal-tagged Arp2-HA fusion protein were incubated with anti-HA antibody-coated beads (+) or control beads with no antibody (−). Beads were pelleted, and equivalent loads of pellets and supernatants were blotted and probed with anti-Crn1 or anti-Arp2 antibodies. (C) Comparison of Arp2 migration in wild-type and crn1-null cell extracts fractionated by sedimentation velocity. Arp2 signal was quantified by immunoblotting and densitometry. The distribution was compared for wild-type and crn1-null yeast extracts fractionated on sucrose gradients.
Next, we tested if the Crn1-Arp2/3 complex interaction is direct. To accomplish this, we purified HA-tagged Arp2/3 complex on HA antibody-coated beads (Fig. 2). The beads were washed in high salt to remove Arp2/3 complex-associated factors, such as coronin, Abp1, and Las17. The purified material has the characteristic gel band pattern of the Arp2/3 complex subunits. Further, mass spectrometry analysis of the complex released from beads verifies that it is the Arp2/3 complex, and the released complex is active in promoting actin nucleation (unpublished data). As shown in Fig. 2, purified Crn1 binds to HA-Arp2/3 complex beads, but not to control beads (HA antibody, but no Arp2/3 complex). This demonstrates that the Crn1-Arp2/3 complex interaction is direct. The binding saturated at a molar stoichiometry of ~1:1 Crn1 to Arp2/3 complex, and the addition of higher concentrations of Crn1 to the reactions did not increase the amount of Crn1 bound (unpublished data).
The coiled coil domain of Crn1 is required for association with the Arp2/3 complex and Crn1 localization in vivo
In a two hybrid screen using the Arc35/p35 subunit of the Arp2/3 complex as bait, we identified a specific interaction with a carboxy-terminal fragment of Crn1. Sequencing of two independently selected plasmids revealed the same fragment of Crn1, encoding residues 466-651 (see Materials and methods). This raised the possibility that the carboxy terminus of Crn1 might be important for mediating physical interactions with the Arp2/3 complex. To test this hypothesis, we examined the ability of Crn1 fragments to coimmunoprecipitate with the Arp2/3 complex. Low copy plasmids expressing fragments of Crn1 were transformed into a crn1-null strain carrying a 3xHA epitope tag integrated at the carboxy terminus of ARP2. Cell lysates from these strains were used for immunoprecipitation assays. Full-length Crn1, Crn1 (1-600), and Crn1 (400-651) were expressed to similar levels (Fig. 3 A; whole cell extract blot). As shown in Fig. 3 A, Crn1 and Crn1 (400-651) coimmunoprecipitated with the Arp2/3 complex, but Crn1 (1-600) did not. These data show that the coiled coil domain-containing carboxy terminus of Crn1 is both required and sufficient for association with the Arp2/3 complex in vivo.
Next, we examined the localization of Crn1 and Crn1 fragments in cells by immunofluorescence with anti-Crn1 antibodies. Crn1 localized to actin patches, as expected, but Crn1 (1-600) and Crn1 (400-651) localized primarily to the cytoplasm, with only faint residual actin patch staining (Fig. 3 B). The localization of Crn1 (1-600) to the cytoplasm was unexpected, given that this construct binds to actin filaments in vitro (Goode et al., 1999), and prompted us to examine the localization patterns of Crn1 and Crn1 (1-600) by an independent approach. We integrated GFP tags at the CRN1 locus after the codons for residues 600 and 650, generating strains that express Crn1 (1-600)-GFP and Crn1-GFP fusion proteins, respectively. Immunoblotting with Crn1 and GFP antibodies confirmed that these constructs were expressed at normal levels and were the only source of Crn1 in cells (unpublished data). As shown in Fig. 3 C, Crn1-GFP localizes to cortical actin patches, and Crn1 (1-600)-GFP localizes primarily to the cytoplasm, confirming the immunofluorescence data. These results demonstrate that neither the actin binding domain nor the Arp2/3 complex-interacting carboxy terminus of Crn1 is sufficient for localization in vivo.
The coiled coil domain is required for defects in actin organization and cell growth caused by Crn1 overproduction
Deletion of the CRN1 gene in yeast causes no overt growth phenotype or defects in actin organization (Heil-Chapdelaine et al., 1998; Goode et al., 1999). However, as shown in Fig. 4 A, galactose promoter-driven overexpression of untagged Crn1 causes severe defects in actin organization and arrest of cell growth. Cells overproducing Crn1 are swollen, have depolarized actin patches, and form spiraled or looped actin structures (Fig. 4 B). The actin loops do not appear to be cable-like, because they do not label with tropomyosin antibodies (a cable-specific marker) and they form in the absence of any functional formin proteins, Bnr1 and Bni1 (unpublished data). The actin loops also are distinct from the actin bars formed in cells overproducing a GST-Crn1 fusion protein (Goode et al., 1999), because unlike the bars, the loops label with rhodamine phalloidin. These aberrant actin loops were detected in 36% of cells overproducing Crn1, but never in control cells (>100 cells scored in three separate experiments).
To define the part of Crn1 that mediates these defects, we examined cells overproducing different Crn1 fragments from the galactose-inducible promoter. Whereas cells overexpressing full-length Crn1 showed growth arrest on galactose media, cells carrying vector alone or pGAL-Crn1 (1-600) were viable (Fig. 4 A). Further, these cells did not contain the aberrant actin loop structures found in cells overexpressing full-length Crn1 (unpublished data). Immunoblotting confirmed that Crn1 and Crn1 (1-600) were overexpressed to similar levels in these strains, well above endogenous Crn1 expression levels (Fig. 4 C). Thus, the coiled coil domain is required for the growth arrest and formation of actin loop structures caused by Crn1 overexpression. This finding raises the possibility that interactions between the coiled coil domain of Crn1 and the Arp2/3 complex lead to these defects in cell growth and actin loop formation. We were unable to determine if overproduction of Crn1 (400-651) was sufficient to cause the defects, because this construct was not successfully overproduced; its expression levels were similar to endogenous Crn1 in wild-type cells (Fig. 4 C).
To test whether Crn1 becomes mislocalized upon overproduction, we examined Crn1 localization by immunofluorescence in the Crn1-overexpressing cells (Fig. 5). In cells carrying an empty vector, endogenous Crn1 colocalized with actin patches as expected. However, in cells overproducing Crn1, Crn1 was found to associate with actin patches and loop structures (Fig. 5 A). Treatment of these cells with latrunculin A, an actin monomer-sequestering agent, caused Crn1 staining to shift to the cytoplasm, demonstrating that the localization of Crn1 to both structures depends on filamentous actin (Fig. 5 B). Costaining with actin and Crn1 antibodies confirmed that Crn1 localizes to the same aberrant actin loops that form as a result of Crn1 overexpression (Fig. 5 C).
We also examined the localization of Arp2-YFP in cells overexpressing Crn1 (Fig. 6). In control cells, Arp2-YFP localized to actin patches, similar to Arp2 immunostaining (Moreau et al., 1996). However, in strains overproducing Crn1, Arp2-YFP also localized to looped structures. Importantly, two other actin patch components, Abp1 and capping protein, remained localized to actin patches in cells overexpressing Crn1 (Fig. 6). Similarly, Las17-GFP remained localized to actin patches and was not recruited to the actin loops in cells overexpressing Crn1 (unpublished data). These data demonstrate that recruitment of Arp2 to Crn1-induced looped structures is specific. They also provide further in vivo support for a physical interaction between Crn1 and the Arp2/3 complex.
Genetic interactions between CRN1, ARC35, and ARP2
Given that the lethality caused by CRN1 overexpression requires its coiled coil domain and that this region of Crn1 interacts with Arc35 in the two hybrid assay, we reasoned that mutations in arc35 might be able to suppress the lethal effects of CRN1 overexpression. To test this hypothesis, we used a collection of temperature-sensitive arc35 alleles generated by random mutagenesis (unpublished data). We transformed the integrated arc35 mutant strains with a GAL-CRN1 overexpression plasmid or vector alone. The transformed cells were diluted serially, spotted on glucose and galactose media, and grown for 3 d at a range of temperatures. Most of the arc35 mutant strains, independent of CRN1 overexpression, exhibited normal growth at 28°C on glucose medium but grew poorly, if at all, on galactose medium relative to an isogenic wild-type ARC35 strain (Fig. 7 A, top). One allele carrying pGAL-CRN1, arc35-26, grew well on galactose. Fig. 7 A shows the data for wild-type ARC35, arc35-26, and one of the many nonsuppressing arc35 alleles, arc35-12. This result shows that arc35-26 strongly suppresses the CRN1 overexpression defects, and, reciprocally, CRN1 overexpression suppresses arc35-26 growth defects on galactose (compare with arc35-26 carrying empty vector on galactose; Fig. 7 A, top right).
We next explored the possibility of genetic interactions between CRN1 and the genes encoding other subunits of the Arp2/3 complex. We performed directed crosses between a crn1-null and a number of published arp2 alleles, the arc40-40 allele (Tong et al., 2001), and several unpublished arp2 alleles (a gift from H. Xu and C. Boone, University of Toronto). These crosses revealed an allele-specific genetic interaction between the crn1-null mutant and arp2-21, a temperature-sensitive mutant (Fig. 7 B). arp2-21 mutant cells exhibit normal growth at 30°C, are partially compromised for growth at 34°C, and are dead at 37°C, whereas crn1Δ cells exhibit normal growth at all temperatures. However, crn1Δ arp2-21 double mutant cells are severely compromised for growth at 34°C. Rhodamine phalloidin staining showed that crn1Δ arp2-21 cells have similar defects in actin organization (highly depolarized actin patches) to arp2-21 cells (unpublished data). These data, combined with the suppression analysis above, strongly support an in vivo functional interaction between Crn1 and the Arp2/3 complex in a similar physiological process.
[Figure 4 legend, partial] … were grown to log phase in glucose and the expression of CRN1 was induced by growth in galactose-containing medium for 4 h. Then, cells were fixed and actin organization was examined by rhodamine phalloidin staining. (C) Immunoblot of total cellular extracts from strains expressing Crn1 from a low copy plasmid (see Fig. 3) and induced to overexpress different Crn1 constructs by growth in galactose-containing medium. The blot was probed with rabbit anti-Crn1 antibodies.
The carboxy terminus of Crn1 inhibits actin nucleation by the Arp2/3 complex
To investigate the biochemical basis of our in vivo observations, we compared the nucleation activities of purified Arp2/3 complex in the presence and absence of Crn1. The polymerization of actin alone is slow (Fig. 8 A, curve F), reflecting an inherently poor nucleation activity of purified actin monomers. However, the addition of 20 nM Arp2/3 complex plus 200 nM activating (WA) fragment of Las17/Bee1 (the yeast homologue of WASp) stimulated rapid actin nucleation (Fig. 8 A, curve B). The addition of 500 nM Crn1 greatly extended the lag phase and reduced the rate of WA-activated, Arp2/3 complex-mediated actin assembly (Fig. 8 A, curve D). This effect is not the result of interactions of Crn1 with actin, because the addition of 500 nM Crn1 to actin alone caused a modest increase in the rate of actin assembly, consistent with previous reports (Goode et al., 1999). These data show that Crn1 directly inhibits the actin nucleation activity of the WA-activated Arp2/3 complex.
To define the part of Crn1 that inhibits Arp2/3 complex activity, we tested the effects of Crn1 fragments in these assays. Truncation of the coiled coil domain (residues 601-651) abolished the effects (Fig. 8 A, curve A), whereas a carboxy-terminal fragment of Crn1 (residues 400-651) showed a similar activity to full-length Crn1 (Fig. 8 A, curve C). As shown in Fig. 8 B, the effects of Crn1 (400-651) are dose responsive, with a half-maximal concentration of ~100 nM (Fig. 8 C). Crn1 (400-651) also inhibited Abp1-activated Arp2/3 complex (Fig. 8 D). Importantly, this Crn1 fragment has no detectable affinity for actin (Goode et al., 1999), indicating that inhibition is direct.
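For readers who want to reproduce this kind of dose-response summary, a Hill-type inhibition fit is a standard way to extract the half-maximal concentration. The sketch below is not the authors' analysis pipeline; the concentration-activity pairs are hypothetical placeholders chosen to give a ~100 nM midpoint, and SciPy's curve_fit does the fitting.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(c, ic50, n):
    """Fraction of Arp2/3 complex activity remaining at inhibitor conc. c (nM)."""
    return 1.0 / (1.0 + (c / ic50) ** n)

# hypothetical [Crn1(400-651)] (nM) vs. relative WA-activated assembly rate
conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0, 400.0, 800.0])
activity = np.array([1.00, 0.84, 0.67, 0.50, 0.31, 0.16, 0.08])

popt, _ = curve_fit(hill_inhibition, conc, activity, p0=[100.0, 1.0],
                    bounds=(0, np.inf))
print(f"half-maximal concentration ~{popt[0]:.0f} nM (Hill coefficient {popt[1]:.2f})")
```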
Crn1 recruits the Arp2/3 complex to the sides of actin filaments
What is the mechanism of Arp2/3 complex inhibition by Crn1? We observed a two hybrid interaction between Crn1 (466-651) and Arc35/p35, and there is strong evidence that this subunit mediates binding of the Arp2/3 complex to the sides of actin filaments (Mullins et al., 1997; Bailly et al., 2001; Gournier et al., 2001). Therefore, we considered the possibility that Crn1 (400-651) interactions with Arc35 might block Arp2/3 complex association with the sides of actin filaments to delay nucleation. However, we detected no difference in Arp2/3 complex affinity for actin filaments in a cosedimentation assay in the presence and absence of Crn1 (400-651) (unpublished data). This suggests that inhibition does not result from blocking Arp2/3 complex interactions with the sides of actin filaments. Further, inhibition does not appear to result from Crn1 interference with activator binding to the Arp2/3 complex, because Crn1 inhibits the basal nucleation activity of the Arp2/3 complex in the absence of any activators (Fig. 9 A).
In contrast to Crn1 (400-651), full-length Crn1 actually increased the association of the Arp2/3 complex with the sides of actin filaments (Fig. 9 B). Purified Crn1 binds strongly to actin filaments (Kd ~10 nM) through its amino terminus (Goode et al., 1999), whereas yeast Arp2/3 complex binds weakly to the sides of actin filaments (Kd ~2-3 μM), similar to Arp2/3 complex isolated from other species. As shown in Fig. 9 B, Arp2/3 complex cosedimentation with 2 μM actin filaments increases significantly in the presence of 2 μM Crn1. Thus, Crn1 recruits the Arp2/3 complex to the sides of actin filaments. Further, these effects do not result from actin filament bundling by Crn1, because a different actin bundling protein (Sac6/fimbrin) had no effect in this assay (unpublished data).
[Figure 7 legend, partial] A collection of 17 mutant arc35 alleles and a congenic wild-type strain were transformed with pGAL-CRN1 or empty vector alone. Transformants were serially diluted, spotted onto glucose- and galactose-containing media, and grown for 3 d at 28°C. Data are shown for wild-type ARC35, arc35-26, and arc35-12 strains. (B) Synthetic growth defects between a crn1-null mutation and the arp2-21 temperature-sensitive mutant. Cells were grown to log phase, serially diluted, spotted onto glucose-rich plates, and grown for 3 d at 30°C or 34°C.
We also tested the ability of Crn1 to inhibit Arp2/3-mediated actin assembly in the presence of preformed actin filaments. When 500 nM preassembled actin filaments was added to 2 μM actin monomers and WA-activated Arp2/3 complex, the lag phase was nearly eliminated (Fig. 9 C, curve A), consistent with previous reports (Machesky et al., 1997). When we further added 500 nM Crn1 (400-651), a concentration that dramatically inhibits the Arp2/3 complex in the absence of filaments (Fig. 8 A, compare curves B and C), there was little, if any, inhibition detected (Fig. 9 C, curve B). Thus, Crn1 suppresses the Arp2/3 complex specifically in the absence of actin filaments.
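The lag-phase comparisons above come from pyrene fluorescence time courses; one common way to quantify them is to take the maximal slope of the trace and extrapolate its tangent back to the baseline. A minimal sketch (our own helper, not the authors' software; inputs are assumed to be arrays of time in seconds and fluorescence in arbitrary units):

```python
import numpy as np

def assembly_kinetics(t, f, window=9):
    """Maximal assembly rate (AU/s) and apparent lag time (s) from a
    pyrene-actin trace, via the tangent at the point of maximal slope."""
    t, f = np.asarray(t, float), np.asarray(f, float)
    # slope of a moving linear fit over `window` consecutive points
    slopes = [np.polyfit(t[i:i + window], f[i:i + window], 1)[0]
              for i in range(len(t) - window)]
    i_max = int(np.argmax(slopes))
    v_max = slopes[i_max]
    t_mid, f_mid = t[i_max + window // 2], f[i_max + window // 2]
    baseline = f[:window].mean()          # unpolymerized signal
    lag = t_mid - (f_mid - baseline) / v_max
    return v_max, lag
```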
Physical and genetic interactions between Crn1 and the Arp2/3 complex
We have found a strong physical association between Crn1 and the Arp2/3 complex, as demonstrated by a variety of assays, including comigration on sucrose gradients, coimmunoprecipitation, two hybrid analysis, and direct binding of purified proteins. These interactions and the effects of Crn1 on Arp2/3 complex activity are mediated by the coiled coil domain of Crn1. Our attempts by blot overlay assays to map the specific subunit(s) of the Arp2/3 complex that binds Crn1 have been unsuccessful thus far (unpublished data). However, our two hybrid data suggest that Arc35 may be an important target of Crn1 binding, and this is supported by the observation that an arc35 allele suppresses the growth defects caused by CRN1 overexpression.
Over 50% of the cellular Crn1 is bound to the Arp2/3 complex, suggesting that the cellular functions of Crn1 and the Arp2/3 complex are closely linked. This is supported by in vivo evidence, including synthetic defects between crn1-null mutants and arp2-21 and suppression by arc35-26 of defects caused by CRN1 overexpression. Further, localization of Crn1 to cortical actin patches in vivo depends on both its actin binding domain and its Arp2/3 complex-interacting coiled coil domain. A similar observation was made in cultured Xenopus cells, where deletion of the coiled coil domain caused mislocalization of coronin (Mishima and Nishida, 1999). Because coronin forms coiled coil-dependent homodimers in vitro (Goode et al., 1999; Asano et al., 2001), it was postulated that coronin dimerization may be necessary for localization. However, our findings raise the possibility that interactions of the coiled coil domain with the Arp2/3 complex may contribute to localization. Importantly, these models are not mutually exclusive; coronin localization may require both homodimerization and interactions with the Arp2/3 complex. Another important point raised here and in the above-mentioned study is that actin binding alone is not sufficient to localize coronin to actin filament structures in vivo. Therefore, associations between coronin and actin may be regulated in vivo.
A mechanism for Crn1 inhibition of Arp2/3 complex activity
We found that Crn1 inhibits WA-activated, Arp2/3-mediated actin nucleation and that these effects are dose responsive and mediated by the Crn1 coiled coil domain. The addition of 0.5 μM Crn1 (400-651) virtually abolishes WA- and Abp1-activated Arp2/3 complex activity. Importantly, this activity is independent of Crn1 interactions with actin, because Crn1 (400-651) has no actin binding affinity (Goode et al., 1999). Therefore, the inhibition of the Arp2/3 complex by Crn1 is direct.
To explore the mechanism of inhibition, it is helpful to review current models for Arp2/3 complex activation (for reviews see Cooper et al., 2001; Borths and Welch, 2002; Kreishman-Deltrick and Rosen, 2002). Recently, the crystal structure of the inactive Arp2/3 complex and cryo-electron micrograph structures of the activated complex alone and at filament branch points were reported (Volkmann et al., 2001). Together, these studies suggest that association of the Arp2/3 complex with the sides of actin filaments and interactions with an activator converge, inducing allosteric changes in the complex that reposition Arp2 and Arp3 into a nucleation-competent, actin-like dimer. The p35/Arc35 subunit is strongly implicated in physically linking the Arp2/3 complex to the sides of actin filaments (Mullins et al., 1997; Bailly et al., 2001; Gournier et al., 2001). Thus, interactions between p35 and the side of a filament may transduce one set of conformational changes, while interactions between an activator and other subunits of the complex may transduce a complementary set of conformational changes.
Initially, we considered a simple model for inhibition by Crn1, in which Crn1 competes with and displaces activators from the Arp2/3 complex. However, our results are inconsistent with this model. First, Crn1 suppresses the inherent actin nucleation of the Arp2/3 complex alone, in the absence of any activators. Second, Crn1 does not have an acidic "A" motif, which is found in and required for association with the Arp2/3 complex in all known activators (for review see Cooper et al., 2001). Third, Crn1 interacts genetically and by two hybrid assay with p35/Arc35, in contrast to activators, which are implicated in binding to four different subunits: Arp2, Arp3, p40/Arc40, and p21/Arc18 (Mullins et al., 1997; Machesky and Insall, 1998; Zalevsky et al., 2001). Fourth, increasing the concentration of WA in the reactions fails to override Crn1 inhibition (unpublished data). Thus, all of our data point to a functional interaction between the coiled coil domain of Crn1 and the p35 subunit of the Arp2/3 complex, via an interface distinct from that of the activators.
A second model we considered for inhibition was that Crn1 (400-651) might interfere with Arp2/3 complex binding to the sides of actin filaments. However, Crn1 (400-651) did not affect Arp2/3 complex association with filament sides, and, in fact, full-length Crn1 increased association of the Arp2/3 complex with the sides of actin filaments (Fig. 9 B).
What do the arc35-26 suppression data tell us about the mechanism of inhibition? The allele-specific suppression of CRN1 overexpression defects strengthens our hypothesis that Crn1 interactions with the Arp2/3 complex occur through the Arc35/p35 subunit. Intriguingly, arc35-26 suppresses the growth defects associated with Crn1 overexpression, and, reciprocally, CRN1 overexpression suppresses the growth defects of arc35-26. This cosuppression suggests a highly specific functional interaction between Crn1 and Arc35. In future work, isolating the Arp2/3 complex from arc35-26 mutant cells and studying its activities may provide valuable insights into Crn1 action. In addition, defining the residues in p35/Arc35 that mediate Crn1 interactions may lend important clues to the mode of inhibition.
The cellular role of Crn1 in regulating the Arp2/3 complex
We have shown that Crn1 inhibits the actin nucleation activity of the Arp2/3 complex specifically in the absence of actin filaments via its coiled coil domain and recruits the Arp2/3 complex to the sides of actin filaments via its actin binding domain. Both of these activities may be used in vivo to direct Arp2/3 complex activity to the sides of preexisting actin filaments, promoting the formation of filament networks. Such a function might be important during cellular processes that rely on the rapid formation of actin networks, including cell locomotion and intracellular transport of vesicles and organelles. Consistent with this possibility, loss of coronin function in Dictyostelium and Xenopus cells has been shown to cause defects in cell migration and/or endocytosis (de Hostos et al., 1993; Mishima and Nishida, 1999). Next, it will be important to assess whether coronin-Arp2/3 complex interactions are conserved in other organisms and to determine the cellular consequences of disrupting such interactions. Already, there are indications that the interaction may be conserved, because substoichiometric amounts of coronin have been shown to copurify with the Arp2/3 complex from human neutrophils (Machesky et al., 1997). Perhaps the most significant challenge for the future will be to determine how the Arp2/3 complex integrates so many different signals, from (a) multiple activators, (b) coronin, and (c) binding to the side of an actin filament, to spatially and temporally control actin nucleation in the cell.
Materials and methods
Strains and media
The Saccharomyces cerevisiae strains used in this study are listed in Table I. Standard methods were used to generate strains with integrated tags (GFP and HA epitope) at the carboxy termini of CRN1 and ARP2 and a strain with a GFP tag integrated after residue 600 in CRN1 (Longtine et al., 1998). A strain with a CRN1 gene deletion and an HA epitope tag integrated at the carboxy terminus of ARP2 (BGY704) was generated by crossing strains BGY26 and BAY1412. The resulting diploids were sporulated, tetrads were dissected, and haploids with the BGY704 genotype were selected. The presence of the ARP2-HA epitope tag and loss of the CRN1 gene were confirmed by immunoblotting with anti-Crn1 and anti-HA antibodies. Standard methods were used for growth and transformation of yeast (Guthrie and Fink, 1991). For yeast growth assays, cultures were grown to log phase, serially diluted, spotted on plates, and grown for 3 d.
Plasmid construction
All plasmids used in this study are listed in Table II. To express full-length Crn1 and Crn1 (1-600) on CEN plasmids under the control of the CRN1 promoter, we constructed pBG290, pBG291, and pBG296. The designated regions of the CRN1 open reading frame, with 300 bp of 5′ untranslated sequence upstream, were PCR amplified using high fidelity polymerase and subcloned into the BamHI-NotI sites of pRS316. To express different parts of Crn1 in yeast, we constructed pBG226, pBG289, pBG290, and pBG291 by subcloning BamHI-NsiI CRN1 fragments from plasmids pBG203 and pBG206, respectively, into p425GAL1 and p415MET25. For overexpression of full-length Crn1, Crn1 (1-600), or Crn1 (400-651) under control of the GAL promoter, we constructed pBG222, pBG223, and pBG224. The designated regions of the CRN1 open reading frame were amplified by PCR using high fidelity polymerase and subcloned into the BamHI-NotI sites of p425GAL1 or p426GAL1. For two hybrid analysis, an Arc35 insert was excised as a BamHI-XhoI fragment from pEG202-END9 (a gift from C. Schaerer-Brodbeck, University of Basel, Basel, Switzerland) and cloned into the BamHI-SalI sites of pAS2 to yield pAS2-ARC35. The plasmid was shown to express a functional fusion protein by complementation of the 37°C growth defect of arc35-1 (Schaerer-Brodbeck and Riezman, 2000). All plasmids were sequenced to confirm that the CRN1 coding sequences contained no mutations.
Antibody preparation and immunoblotting
Two different Crn1 antibodies were used, a mouse polyclonal anti-Crn1 antibody previously described (Goode et al., 1999) and a rabbit polyclonal anti-Crn1 antibody generated here (Faculty of Medicine, University of Toronto). The rabbits were immunized with a GST-Crn1 fusion protein expressed and purified from Escherichia coli (Goode et al., 1999), and the antibodies were affinity purified from serum (Measday et al., 1994). Like the mouse anti-Crn1 antibody, the rabbit anti-Crn1 antibody recognizes an 85-kD band on blots of wild-type total cellular protein that is absent from crn1-null lanes (not depicted). Proteins were detected on blots using 1:1,000 mouse anti-Crn1, 1:5,000 rabbit anti-Crn1, and 1:1,000 rabbit anti-Arp2 (Moreau et al., 1996). HA-Arp2 was detected using 1:10,000 mouse anti-HA antibody conjugated to HRP (Covance; Denver, CO). For all other blots, 1:10,000 HRP-conjugated secondary antibody was used. Signals were detected by ECL from Amersham Biosciences.
Sucrose gradient fractionation of yeast lysates
11-ml sucrose gradients (3-30%) were poured in 12-ml ultra clear tubes for an SW41 rotor (Beckman Coulter). Crude cell lysates were prepared from wild-type and crn1-null yeast as previously described (Goode et al., 1999). Lysates were precleared by centrifugation for 15 min, 70,000 rpm, 4°C in a TLA100.3 rotor (Beckman Coulter). 400 μl supernatant or high molecular weight gel filtration size standards (Amersham Biosciences) were layered over each gradient. Samples were centrifuged for 15 h at 34,000 rpm, 4°C in an SW41 rotor, and 0.4-ml fractions were collected. Samples of each fraction were run on SDS-PAGE gels, blotted, and probed with antibodies to determine the positions of proteins in the gradients.
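Apparent masses on such gradients (e.g., the 250-300-kD species in Fig. 1 A) are read off by interpolating peak positions against the parallel size standards. A sketch of that calibration follows; the fraction numbers below are hypothetical, and only the standard masses (BSA, catalase, thyroglobulin) come from the text.

```python
import numpy as np

standards_kd = np.array([60.0, 240.0, 760.0])   # BSA, catalase, thyroglobulin
standards_frac = np.array([6.0, 13.0, 21.0])    # hypothetical peak fractions

# fraction number varies roughly linearly with log10(mass) on these gradients
slope, intercept = np.polyfit(np.log10(standards_kd), standards_frac, 1)

def apparent_mass_kd(peak_fraction):
    return 10 ** ((peak_fraction - intercept) / slope)

print(f"a peak in fraction 14 corresponds to ~{apparent_mass_kd(14):.0f} kD")
```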
Binding interactions between Crn1 and the Arp2/3 complex
To test direct binding between Crn1 and the Arp2/3 complex, we assayed the cosedimentation of purified Crn1 with purified HA-tagged yeast Arp2/3 complex immobilized on beads. The Arp2/3 complex-loaded beads were prepared as previously described, yielding a bead suspension of 1 μM Arp2/3 complex. 10 μl of Arp2/3-loaded beads or control beads (no Arp2/3) was included in a 100-μl reaction in HEK buffer containing 1 μM Crn1. The final concentration of the Arp2/3 complex in the reactions was 0.1 μM. Reactions were incubated for 20 min at 4°C, the beads were washed, and the bound proteins were removed with SDS sample buffer (without reducing agent). Samples were run on 12% SDS-PAGE gels and stained with Coomassie blue.
Two hybrid analysis
A yeast cDNA library in pGAD-GH was transformed into the Y190 yeast strain containing pAS2-ARC35, and a nonsaturating Gal two hybrid screen was performed as previously described (Madania et al., 1999). Transformants were selected on −TRP, −LEU, −HIS medium containing 30 mM 3-aminotriazole, and 24 clones were isolated. After a series of secondary tests, four clones remained, two of which were found to contain a fragment of Crn1 encoding its carboxy-terminal 186 residues (466-651).
Fluorescence light microscopy
Actin and Crn1 organization was examined in cells overproducing full-length Crn1, Crn1 (1-600), and Crn1 (400-651) under control of the GAL1/10 promoter, from plasmids pBG222, pBG223, and pBG224. Cells were grown at 30°C in selective glucose medium to early log phase, washed, transferred to selective galactose medium, grown for 4 h at 30°C, fixed, and prepared for immunofluorescence (Ayscough and Drubin, 1997; Lee et al., 1998). To disrupt the actin cytoskeleton in cells, log-phase yeast cultures were treated with 100 μM latrunculin A for 15 min before chemical fixation. We also determined the localization of full-length Crn1, Crn1 (1-600), and Crn1 (400-651) expressed in cells from low copy plasmids (pBG290, pBG291, and pBG298) under the control of the MET25 promoter (Fig. 3 B). Similar results were obtained for low copy plasmids expressing full-length Crn1 and Crn1 (1-600) under the control of the native CRN1 promoter: pBG294 and pBG295 (not depicted). For immunofluorescence detection of Crn1 and actin, we used a 1:500 dilution of rabbit anti-Crn1 and a 1:2,000 dilution of guinea pig anti-Act1 (Mulholland et al., 1994). Cells were imaged on a Leica DM-LB microscope. Images were captured with a Micromax 1300y high-speed digital camera (Princeton Instruments) and analyzed with Metaview software (Universal Imaging Corp.). The localization of GFP and YFP was examined in yeast cells grown to log phase.
Actin assembly kinetics
Actin assembly was monitored by the pyrene-actin fluorescence assay as previously described, using a final concentration of 2 μM actin in 70-μl reactions unless otherwise indicated. In brief, 56.5 μl of monomeric actin (10% pyrene-labeled, 90% unlabeled) in G buffer was mixed with 10 μl HEKG5 buffer (HEK buffer + 5% glycerol) or different combinations of proteins in HEKG5 buffer. The reaction was mixed immediately with 3.5 μl 20× initiation buffer (1 M KCl, 40 mM MgCl2, 10 mM ATP) in a quartz fluorometry cuvette (3-mm light path; Hellma). Pyrene-actin fluorescence was monitored by excitation at 365 nm and emission at 407 nm in a fluorescence spectrophotometer (Photon Technology International) held at a constant temperature of 25°C. For seeded reactions, actin was preassembled to steady state for 1 h. Then, 2 μM monomeric actin (10% pyrene-labeled) was mixed with 500 nM preassembled actin filaments in the presence or absence of different proteins. | 2014-10-01T00:00:00.000Z | 2002-12-23T00:00:00.000 | {
"year": 2002,
"sha1": "1bb5328f903db9eb7a5e7d5e62c73fdce2f424ac",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/159/6/993/1308021/jcb1596993.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "1bb5328f903db9eb7a5e7d5e62c73fdce2f424ac",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
103805802 | pes2o/s2orc | v3-fos-license | Effects of nano-grooved gelatin films on neural induction of human adipose-derived stem cells
The extracellular matrix (ECM) and cell–cell interactions support the survival, self-renewal, and differentiation capabilities of stem cells. Biomaterials with specific structures such as grooves, ridges, pits, or pillars can mimic the topographic landscape of the niche. Cells can "sense" the mechanical properties and surface patterns, ranging from the micro- to the nano-scale, of the substrate; hence, nano-grooves of different sizes were designed on a gelatin surface. The design of grooves can be systematically modified, and those structures can reflect the organization of the ECM. In previous studies, polystyrene (PS) was often used because topographic structures are easy to fabricate on it. For better biocompatibility, gelatin was chosen here and fabricated into well-defined nano-groove films. In addition, gelatin can be crosslinked with several crosslinking agents, which leads to higher mechanical strength and better flexibility. Stem cells can serve as a source of neurons in transplantation therapies, and neuronal differentiation is associated with the directionality of stem cells. To investigate the effect of topographic cues on stem cells, groove pattern arrays were constructed on gelatin surfaces. Human adipose-derived stem cells (hASCs) were seeded onto the patterned gelatin films to observe cell proliferation and differentiation.
Introduction
According to a neurological disorders report from the World Health Organization (WHO) in 2006, 1 the number of people who die from neurological disorders increases year by year. It is estimated that around 9 million people will die from these disorders in 2030. In general, injured nerves can regenerate by themselves. 2 However, surgery is required if the injuries are too severe. Autologous nerve grafting is traditionally the most common surgery, 3 but nerves cannot recover fully. Therefore, the aim of neural tissue engineering is to support traditional surgeries, or even replace them. Recently, stem cell therapy has drawn a lot of attention. 4 Stem cell therapy uses neural cells differentiated from stem cells to enhance nerve repair. There are many kinds of stem cells that can differentiate into neural cells, but some of them are hard to harvest.
To investigate the behavior of stem cells in human tissues, scientists have constructed substrates with both physical and chemical features. These stimuli can mimic stem cell niches to affect cell proliferation, morphology and differentiation. 5 Stiffness and topographic cues are commonly used to imitate physical features of the ECM. Matrix stiffness has a large effect on cell function. It regulates cell adhesion, growth, survival and motility, and even the cell phenotype. 6 The stiffness of tissues can be measured by the elastic modulus E of a solid, which ranges from 0.5 kPa (fat tissue) to 20 000 kPa (bone tissue).
Comparing these values with a tissue culture on plastic or glass, where the elastic modulus is in the GPa range, it is clear that cells are usually cultured in a non-physiological environment. As a result, many researchers have studied cells in vitro under more physiological conditions. For example, McDaniel et al. revealed that the stiffness of collagen fibrils can influence the phenotype of muscle cells. 7 Synthetic substrates with controllable stiffness revealed differences in cell motility and adhesion. 8 Besides, stiffness can further regulate cell lineage commitment and differentiation state, such as the differentiation of precursor cells into osteoblasts, neurite outgrowth, and the striation of muscle cells. 9 On the other hand, the effect of topography on cell behavior has been investigated since 1911. 10 Cells can respond to topographic cues as small as 5 nm. Therefore, topographic features from the micro- to the nano-scale have been widely developed. In general, surface topography is characterized by roughness and patterns on the surface. 11 Many types of features, for instance, grooves, dots and pillars, have been developed, because these structures can be systematically modified and reflect the fibrillar organization of the ECM.
Among these structures, ridges and grooves have been investigated extensively. Micro- or nano-grooves can guide cells to align along the patterns. However, a previous study showed that fibroblasts do not align with groove depths below 35 nm or ridge widths smaller than 100 nm. 12 Yang et al. investigated the influence of nano-grooves on the morphology of osteoblast-like cells with a groove-to-ridge ratio of 1 : 1 (90-500 nm in width and 300 nm in depth). It showed an increased cell-spreading area compared to flat surfaces and elongated nuclei. 13 Sciancalepore et al. fabricated fibronectin (FN) micro-patterns to drive the differentiation of adult renal progenitor cells (ARPCs) in the absence of exogenous chemical or cellular reprogramming. 14 As mentioned above, many kinds of stem cells can differentiate into neural cells, including embryonic stem cells (ESCs), 15-17 neural stem cells (NSCs) [18][19][20] and adipose-derived stem cells (ASCs). [21][22][23][24] Among these stem cells, ASCs are commonly used because they can be harvested easily and are able to differentiate into multiple lineages including adipocytes, 25 myocytes, 26 osteoblasts, 27 chondrocytes 28 and neural cells. 29 Many studies have investigated how topography affects the differentiation of human adipose-derived stem cells (hASCs). Deiwick et al. fabricated a "Lotus" structure using titanium to investigate how it affected the osteogenic differentiation of hASCs. 30 In addition, Mobasseri et al. used polymers to fabricate grooves and studied neural differentiation of hASCs. 31 However, the materials used to fabricate topography are often too rigid for cells.
In this study, to mimic the physiological environment of normal tissue, nano-grooves were fabricated on a gelatin substrate, which is a natural and soft biomaterial. The mechanical properties of gelatin can be adjusted through crosslinking. Grooves were chosen because they can easily be fabricated in different sizes. To study the relationship between neural differentiation of stem cells and surface topography, hASCs were cultured on gelatin films with different groove sizes.
Materials and methods
Fabrication of nano-grooved gelatin films. Four sizes of grooved silicon substrates with a groove-to-ridge ratio of 1 : 1 (groove width/depth (nm): 400/100, 400/400, 800/100 and 800/400) were used to fabricate topography onto polydimethylsiloxane (PDMS, Dow Corning, USA) molds and gelatin films (Fig. 1). First, the silicone rubber molds were fabricated by mixing the elastomer with a curing agent at a ratio of 10 : 1. The mixture was cast onto silicon substrates and degassed to remove entrapped bubbles. After polymerizing at 80°C for 2 h, PDMS molds with negative replicas of the substrate topography could be easily removed from the silicon substrates.
To fabricate the nano-grooved gelatin films, PDMS molds were placed and fixed in a Petri dish with the patterned side upwards. An aqueous solution of 5 wt% gelatin (Sigma-Aldrich, Germany) and 1% PSA (Sigma-Aldrich, Germany) was cast into the Petri dish, and air-dried overnight at 25°C. The gelatin samples were subsequently immersed in binary solvent mixtures containing N-(3-dimethylaminopropyl)-N′-ethylcarbodiimide hydrochloride (EDC, Sigma-Aldrich, Germany) and N-hydroxysuccinimide (NHS, Sigma-Aldrich, Germany). The reaction was allowed to proceed at 25°C for 96 h. 32 Crosslinked gelatin films were cut out and thoroughly rinsed with deionized (DI) water to remove excess EDC and the urea byproduct. Before cell culturing, gelatin films were immersed in cell culture medium for 24 h to ensure the removal of excess crosslinking agent.
Topography measurement. To verify the fidelity of replication, the topography of silicon substrates was examined using a scanning electron microscope (SEM, Nova™ NanoSEM 23, FEI, USA). On the other hand, PDMS molds and nano-grooved gelatin films were examined using an atomic force microscope (AFM, Multimode 8, Bruker, USA). Gelatin films were immersed in DI water until fully swollen before measurement.
Crosslinking extent measurement. A previously reported assay was used to determine the number of uncrosslinked ε-amino groups in the crosslinked gelatin. 33 Initially, 11 mg of gelatin was mixed with 1 mL of 4% NaHCO3 (Sigma-Aldrich, USA) and 1 mL of 0.5% 2,4,6-trinitrobenzenesulfonic acid (TNBS, Sigma-Aldrich, USA), and heated at 40°C for 4 h. 3 mL of 6 N HCl (Honeywell, USA) was added and the mixture was autoclaved for 1 h at 120°C and 15-17 psi. The hydrolysate was diluted with 5 mL of water, and then extracted with ethyl ether.
A 5 mL aliquot of the aqueous phase was removed from each sample and heated for 15 min in a hot water bath. After cooling to room temperature, samples were diluted again with 15 mL of water. The absorbance was measured at 346 nm in a UV/Vis spectrophotometer (Cary 100, Agilent, USA) against a blank. Four replicates were used in each determination. Blanks were prepared in triplicate by the same procedure as above, except that HCl was added before TNBS to prohibit the reaction of amino groups with TNBS.
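Since free ε-amino groups are proportional to the blank-corrected A346, the crosslinking extent follows from comparing crosslinked and uncrosslinked samples. A minimal sketch with hypothetical absorbance readings:

```python
def crosslinking_extent(a346_crosslinked, a346_uncrosslinked, a346_blank=0.0):
    """Fraction of epsilon-amino groups consumed by EDC/NHS crosslinking,
    inferred from TNBS absorbance at 346 nm."""
    free = a346_crosslinked - a346_blank
    total = a346_uncrosslinked - a346_blank
    return 1.0 - free / total

# hypothetical readings: 1 - 0.43/1.00 = 57% crosslinked
print(f"{crosslinking_extent(0.43, 1.00):.0%}")
```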
Mechanical test. Before the mechanical test, gelatin films with different crosslinking times were immersed in DI water until use. Gelatin films were cut into a round shape with an 8.1 mm diameter. The thickness of the films was measured using a thickness gauge. Round gelatin films were compressed to half of their thickness by a mechanical test instrument (ElectroForce 3200, TA, USA). The Young's modulus was calculated from the linear region of the stress-strain curve (initial slope: 0-10%).
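The modulus calculation reduces to a linear fit over the initial 0-10% strain window. A sketch with a synthetic, idealized stress-strain trace (the helper name and data are ours, not from the paper):

```python
import numpy as np

def youngs_modulus(strain, stress, linear_limit=0.10):
    """Young's modulus (units of stress) from the initial linear region
    of a compression stress-strain curve."""
    mask = strain <= linear_limit
    slope, _ = np.polyfit(strain[mask], stress[mask], 1)
    return slope

strain = np.linspace(0.0, 0.5, 51)
stress = 894e3 * strain                 # Pa; idealized linear response
print(f"E = {youngs_modulus(strain, stress) / 1e3:.0f} kPa")
```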
Swelling test. Gelatin films were weighed after being fully air-dried. They were then immersed in DI water for different periods of time. Wet samples were wiped with filter paper to remove excess liquid and reweighed. The amount of adsorbed water was calculated as W(%) = 100 × (Ww − Wd)/Wd, where Ww and Wd are the weights of the wet and dry samples.
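The swelling formula translates directly into code; the example weights are hypothetical.

```python
def swelling_ratio(w_wet, w_dry):
    """W(%) = 100 * (Ww - Wd) / Wd, adsorbed water relative to dry weight."""
    return 100.0 * (w_wet - w_dry) / w_dry

# a film swelling to 260% of its dry weight has adsorbed 160% water
print(swelling_ratio(2.6, 1.0))  # -> 160.0
```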
Contact angle measurement. Nano-grooved and flat gelatin films were immersed in DI water before measurement. First, a gelatin film was fixed on a glass slide and immersed in water with the patterned side down. Then, one air bubble was blown onto the patterned surface of the gelatin film using a microsyringe. After capturing clear images of the film and air bubble, the contact angle was analyzed and calculated using ImageJ software.
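In a captive-bubble setup, the water contact angle is the supplement of the measured air-bubble angle, so a high bubble angle indicates a hydrophilic surface. A one-line sketch (the example value is hypothetical):

```python
def water_contact_angle(bubble_angle_deg):
    """Captive-bubble geometry: water angle = 180 deg - air-bubble angle."""
    return 180.0 - bubble_angle_deg

print(water_contact_angle(150.0))  # a 150 deg bubble angle -> 30 deg (hydrophilic)
```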
Cell culture conditions. Isolation, cultivation and identification of hASCs from the stromal vascular fraction were performed as described in our previous study. The protocols were approved and maintained by the Research Ethics Committee at National Taiwan University Hospital under the guidelines of the Human Subject Research Acts of Taiwan, R.O.C. Informed consent was obtained from human donors. In this study, hASCs (from National Taiwan University Hospital) were cultured in growth medium (DMEM/F-12 (Hyclone, USA) containing 10% FBS (Biological Industries, USA), 1% PSA and supplementary 1 ng mL⁻¹ basic fibroblast growth factor (bFGF, Sigma-Aldrich, USA)) in a T75 flask at 37°C in an incubator containing 5% CO₂ and saturated humidity. Once confluence was reached, hASCs were detached from the surface by trypsin-EDTA (Biological Industries, USA), and seeded onto 1 × 1 cm² gelatin films in a 24-well plate at a concentration of 10⁴ cells per cm². To avoid cells dropping from the gelatin surface, PDMS pads were put under the gelatin films. After 3 h, the PDMS pads were removed, each well was treated with 0.6 mL medium, and the medium was refreshed every 3 days. Cell morphology, viability and proliferation were analyzed.
To induce neural differentiation, hASCs were cultured in an induction medium (DMEM/HG (Hyclone, USA) containing 1% FBS and supplementary 100 ng mL⁻¹ bFGF) for 7 days. Then cells were cultured in the presence of 10 μM forskolin (Sigma-Aldrich, USA) for another 7 days. The expression of neural differentiation markers was analyzed by immunofluorescence and qPCR.
Cytotoxicity/cell viability. To test the cytotoxicity, the medium in each well was removed and the well was rinsed with PBS two times. After being fixed with 3.7% formaldehyde solution (ACROS Organics, USA) in PBS for 15 min, cells were stained using the LIVE/DEAD™ Viability/Cytotoxicity Kit (Molecular Probes, USA).
Samples were immersed in a solution of 2 μM calcein AM and 4 μM ethidium homodimer-1 (EthD-1) in PBS for 30 minutes.
To test cell proliferation of hASCs on gelatin films, we cultured 5000 cells on nano-grooved and flat gelatin films for 1, 4 and 7 days. The CytoScan™ WST-1 Cell Cytotoxicity Assay (G-Biosciences, USA) was used. With a final volume of 200 μL culture medium per well, blank wells with culture medium only were prepared. 20 μL WST-1/CEC assay dye solution was added to each well and shaken gently to mix the chemicals with the medium. Then, the plates were incubated for 2 h in the cell culture incubator. After that, the plates were shaken for 1 min on a shaker and the absorbance at 450 nm was measured using a microplate reader. The wavelength of 630 nm was set as a reference.
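A sketch of how the WST-1 readings might be background-corrected, assuming the usual reference-wavelength and blank subtraction; the well values are hypothetical.

```python
import numpy as np

def wst1_signal(a450, a630, blank450, blank630):
    """Blank-corrected WST-1 absorbance: the 630 nm reference removes
    turbidity, and the medium-only blank removes dye background."""
    return (np.asarray(a450) - np.asarray(a630)) - (blank450 - blank630)

# hypothetical triplicate wells for one film on day 7
print(wst1_signal([0.82, 0.79, 0.85], [0.11, 0.10, 0.12], 0.15, 0.09))
```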
Cell alignment analysis. First, 10× phase contrast microscope images of hASCs cultured on nano-grooved and flat gelatin films were taken in 20 random areas. To quantify cell alignment, these images were analyzed using ImageJ software. At least 30 cells were analyzed in each image. Cells aligned perfectly in the direction of the grooves have an angle of 0°, and cells perpendicular to the direction of the grooves have an angle of 90°; the full range of angles thus varies between 0° and 90°. Alignment of cells was determined by measuring the angle between the long axis of the cells and the direction of the grooves. The experiments were repeated at least three times for each group.
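The alignment metric used in the Results (fraction of cells within 10° of the groove direction) can be computed from the measured angles as below; the angle list is hypothetical.

```python
import numpy as np

def aligned_fraction(cell_angles_deg, threshold=10.0):
    """Fraction of cells whose long axis lies within `threshold` degrees
    of the groove direction; angles are folded into the 0-90 deg range."""
    angles = np.abs(np.asarray(cell_angles_deg, float)) % 180.0
    angles = np.where(angles > 90.0, 180.0 - angles, angles)
    return float(np.mean(angles <= threshold))

angles = [2, 5, 8, 12, 3, 40, 7, 9, 75, 6]   # one hypothetical image
print(f"{aligned_fraction(angles):.0%} of cells aligned within 10 deg")
```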
Immunochemistry. After being fixed with 3.7% formaldehyde solution in PBS for 15 min and permeabilized with 1% Triton X-100 solution (J. T. Baker, USA) in PBS for 10 min at room temperature, samples were blocked with blocking buffer (5% BSA solution in PBST, Sigma-Aldrich, USA) at 37°C for 1 h. After blocking, samples were immersed in primary antibodies against neural markers including Tuj-1 (2 μg mL⁻¹, Abcam, USA) and nestin (0.8 μg mL⁻¹, Abcam, USA) in blocking buffer at 4°C overnight. Afterwards, secondary antibodies including Alexa Fluor 488 and 594 (Molecular Probes, USA) in PBST were used to conjugate with the primary antibodies at room temperature for 1 h. Finally, the nucleus was stained with a 10 μg mL⁻¹ DAPI solution (Sigma-Aldrich, USA) in DI water for 5 min. Samples were washed extensively with PBST between each step.
Then, cDNA, F/R primers (Yuanying, Taiwan), Power SYBR Green PCR Master Mix (Thermo Fisher Scientific, USA) and DNase/RNase Free Water (Yuanying, Taiwan) were mixed in Eppendorf PCR tubes (8-tube strips). The sequence of primers is shown in Table 1. After sealing with clear adhesive film, the PCR was run using a real-time PCR instrument (StepOnePlus, Applied Biosystems, USA). The expression level was analyzed and normalized to glyceraldehyde 3-phosphate dehydrogenase (GAPDH) for each sample. The relative quantity (RQ) of gene expression was calculated using the comparative CT method.
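The comparative CT calculation reduces to RQ = 2^(−ΔΔCT) with GAPDH as the endogenous control and a reference sample such as TCPS. A minimal sketch with hypothetical CT values:

```python
def relative_quantity(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Comparative CT method: RQ = 2**(-ddCT), normalized to GAPDH."""
    ddct = (ct_target - ct_gapdh) - (ct_target_ref - ct_gapdh_ref)
    return 2.0 ** (-ddct)

# hypothetical CTs giving ~3-fold Tuj-1 upregulation on 800/400 vs. TCPS
print(relative_quantity(24.2, 18.0, 25.8, 18.0))  # ~3.0
```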
Statistical analysis. All of the results are expressed as means ± standard error of the mean. The error bars indicate the standard deviation. Comparisons between the different groups were analyzed using Student's t-test in Microsoft Excel 2013. Differences were considered statistically significant at a p value < 0.05 and statistically highly significant at a p value < 0.001.
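A sketch of the same two-sample comparison with SciPy instead of Excel; the replicate values are hypothetical. Note that scipy.stats.ttest_ind assumes equal variances by default, matching a standard Student's t-test.

```python
from scipy import stats

grooved = [2.8, 3.1, 3.3, 2.9]   # hypothetical Tuj-1 RQ on 800/400
tcps = [1.0, 1.1, 0.9, 1.0]      # hypothetical Tuj-1 RQ on TCPS

t_stat, p_value = stats.ttest_ind(grooved, tcps)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}, "
      f"significant: {p_value < 0.05}, highly significant: {p_value < 0.001}")
```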
Results and discussion
Topography measurement
AFM was used to measure the nano-grooves on gelatin films (Fig. 2). The distance between adjacent peaks was around 800 nm on 400/100 and 400/400, and 1800 nm on 800/100 and 800/400. The vertical distance between peak and valley was around 150 nm on 400/100 and 800/100, and 200 to 300 nm on 400/400 and 800/400. The groove depth of 400/400 was not as deep as that of 800/400, which may result from the 400 nm-wide grooves being too narrow for gelatin to permeate. In addition, the softness of gelatin and the tall, narrow grooves led to a curvature in the 2D AFM image of 400/400.
Crosslinking extent measurement
To determine the crosslinking degree of the gelatin films, TNBS was used to bind the ε-amino groups in gelatin. The reaction product is yellow and absorbs at 346 nm. Fewer free ε-amino groups indicate a higher extent of crosslinking. The results showed that the crosslinking extent of the gelatin films after crosslinking with EDC and NHS for 1 to 4 days rose from 42% to 57% (Fig. 3a).
The softness of the gelatin films after crosslinking for 4 days was more suitable for experimental handling. Although the crosslinking extent of gelatin crosslinked with EDC and NHS was not as high as with glutaraldehyde (~90%), the lower cytotoxicity and appropriate mechanical strength made it the better choice.
Mechanical test
The Young's modulus of the crosslinked gelatin films was used to determine whether the softness of the gelatin is close to that of natural organs. The Young's modulus of gelatin films after crosslinking for 1 and 4 days was calculated from the linear region of the stress-strain curve (initial slope: 0-10%). It was found that the Young's modulus rose from 326 to 894 kPa (Fig. 3b), similar to that of human organs. 34 Hence, cells were cultured on soft substrates whose stiffness was close to that of natural organs instead of on stiff substrates such as polystyrene and polyurethane.
Swelling test
Crosslinked gelatin films were bendable and absorbed a large amount of water. To investigate when the gelatin films stop swelling, the swelling ratio of the films was measured. The results showed that the weight of gelatin films crosslinked for 4 days reached 260% of the dry weight after immersion in DI water for 30 min (Fig. 3c), which means the gelatin films had a high water content. On the other hand, it was found that gelatin films can be dehydrated and rehydrated repeatedly. In this study, all of the gelatin films were immersed in DI water or culture medium for at least 30 min before use to prevent topography change.
Contact angle measurement
To investigate the hydrophilicity of the nano-grooved gelatin films, the contact angle was measured (Fig. 3d). As the gelatin films were immersed in culture medium during cell culture, the air bubble contact angle was measured rather than the water contact angle. In general, gelatin is hydrophilic and has a high air bubble contact angle. The results showed that the contact angle of the air bubble on the 400/100 surface was significantly lower than in the other groups, which means the surface of 400/100 was more hydrophobic. A previous study revealed that wettability affects cell adhesion, as cells adhere well on hydrophilic surfaces. 35
Cytotoxicity/cell viability
To test the cytotoxicity of the gelatin films, the live/dead assay was applied to hASCs proliferated on nano-grooved and flat gelatin films for 7 days, with TCPS as the control (Fig. 4a). The images showed predominantly green fluorescence, which indicates that most of the cells were alive. The ratios of living cells in all groups were higher than 90%. This proved that unreacted crosslinking agent had been removed successfully or had only slight toxicity to cells.
In addition to cytotoxicity, cell viability was tested using the WST-1 assay (Fig. 4b). hASCs proliferated on nano-grooved and flat gelatin films for 1, 4 and 7 days were tested, with TCPS as the control. The results showed that cell numbers increased steadily from day 1 to 7. There was no significant difference between the groups except TCPS, which may result from the difference in surface area between the 24-well plate and the gelatin films (1 × 1 cm²).
Cell alignment analysis
To observe the cell morphology of hASCs on nano-grooved and flat gelatin films after 1, 4 and 7 days, phase contrast microscope images were taken (Fig. 5a). On day 1, cells were aligned with the grooves on 400/400 and 800/400, whereas on 400/100 and 800/100 only part of the cells aligned with the grooves, and not until day 7. These observations suggest that groove depth has a greater influence on cell alignment than width.
To quantify cell alignment, the angle between the cell and the groove direction (Fig. 5b) was measured. As mentioned above, about 80% of cells aligned with the grooves within 10° on 400/400 and 800/400 from the beginning. However, only 40 to 55% of cells on 400/100 and 800/100 aligned with the grooves within 10° by day 7, whereas cells on flat gelatin films still remained randomly oriented.
hASC morphology after neural induction
Compared to cells proliferated on gelatin films for 14 days, cells differentiated on gelatin films for 14 days showed contraction and neurite outgrowth (Fig. 6a). At the first stage of differentiation, cells were still elongating on day 7. Then, the cell body gradually contracted and neurites projected. No significant difference in cell morphology was observed between the groups in the phase contrast microscope images.
The expression of neural markers
To study whether the cells successfully differentiated into neural cells and the effect of the grooves on differentiation, we stained Tuj-1 and nestin in cells proliferated in growth medium and differentiated in induction medium on 800/400 and flat gelatin films for 14 days (Fig. 6b). Tuj-1 is a neural marker expressed in immature neurons, and nestin is an intermediate filament protein expressed during the early stage of neural differentiation. Confocal images showed a higher ratio of neural marker expression after neural induction, especially nestin. However, there was no difference between 800/400 and flat gelatin films in these images.
To quantify the expression of Tuj-1 and nestin, we ran qPCR for cells differentiated on 800/400 and flat gelatin films for 14 days (Fig. 6c). The results showed that Tuj-1 expression of hASCs on 800/400 was three times higher than on TCPS, whereas nestin expression was similar between the groups. The increase of Tuj-1 and nestin expression also correlates with results from previous studies. 36,37 Li et al. demonstrated that adult neural stem cells showed a significant increase in neuronal differentiation on engineered anisotropic substrates (Si wafer) compared to the control. In their western blot analysis, upregulated Tuj-1 expression was also seen. 36 Béduer et al. demonstrated that pre-coated micropatterned PDMS surfaces can serve as effective neurite guidance surfaces for human NSCs. Immunocytochemistry analysis showed that the channel width can strongly impact development and differentiation. 37
Conclusions
Topography has shown a great influence on cell behavior in previous studies. Cells have been cultured on patterned materials to investigate how topography affects them. However, most of these materials, such as polystyrene (Young's modulus > 3 GPa), are not as soft as natural tissues. In this study, nano-grooves were successfully fabricated on gelatin, whose Young's modulus is close to that of natural organs. Gelatin films are believed to benefit cell adhesion due to their hydrophilic properties and high water content. In addition, gelatin crosslinked with EDC and NHS showed low cytotoxicity to hASCs. Cells adhered well on the gelatin films and aligned with the grooves on 400/400 and 800/400. As for neural differentiation, hASCs contracted and projected neurites after 14 days of induction, and immunochemistry and qPCR results revealed that the expression of neural markers (Tuj-1 and nestin) was higher after neural induction on gelatin substrates, which are more physically and chemically relevant to real tissue, especially the nano-groove-patterned gelatin. Engineered anisotropic topographical cues could improve neurite outgrowth and promote neural differentiation.
Conflicts of interest
There are no conflicts to declare. | 2019-04-09T13:06:50.117Z | 2017-11-16T00:00:00.000 | {
"year": 2017,
"sha1": "01475f621715bf6640ce45f14fa6824d38542b83",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c7ra09020j",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "01475f621715bf6640ce45f14fa6824d38542b83",
"s2fieldsofstudy": [
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Snakeskin Appearance of Gastric Mucosa Compressed by Adjustable Gastric Bands: A Novel Diagnostic Marker of Band Migration
Purpose: The aim of this retrospective study is to describe changes of gastric mucosa in patients with adjustable gastric band migration, and to evaluate the diagnostic value of these changes. Materials and Methods: The postoperative endoscopies of all patients that underwent adjustable gastric band surgery at a single tertiary center were retrospectively reviewed. Gastric mucosal patterns were classified based on the appearance of gastric mucosae compressed by adjustable gastric bands, as follows; Group A: normal appearance, Group B: snakeskin (reticular) appearance without band migration, Group C: snakeskin appearance with band migration, and Group D: recuperated gastric mucosa with advanced band migration. Results: Postoperative endoscopic findings of 109 patients obtained from Jan 2012 to Oct 2018 were available, and these patients were assigned to the four groups, as follows; 82 to group A, 5 to group B, 14 to group C, and 8 to group D. Times (months) between AGB implantation and initial postoperative endoscopy evaluations were 45.2±22.3, 40.0±28.2, 36.2±18.6, and 42.1±17.0, respectively (P=0.531). Of the five patients in Group B, 3 underwent band explantation due to band migration (P=0.000). Conclusion: A snakeskin pattern of gastric mucosa compressed by an adjustable gastric band is strongly associated with band migration. The presence of this pattern might predict band migration before endoscopic confirmation, and its identification might prevent complications associated with long-standing band migration.
INTRODUCTION
Obesity is associated with type 2 diabetes mellitus, hypertension, hyperlipidemia, and non-alcoholic fatty liver disease, and even with malignancies such as colon and breast cancer. The prevalence of obesity is increasing worldwide, and this trend has also been observed in Asian countries. Furthermore, according to the Korea National Health and Nutritional Examination Survey (KNHANES) conducted in 2016, the prevalence of obesity in Korea had increased to 42.3% for men and 26.4% for women. Bariatric surgical procedures are known to achieve substantial weight loss and provide major secondary health benefits, and because of its ease, safety, and adjustability, laparoscopic adjustable gastric band (AGB) surgery is one of the most popular bariatric procedures. However, AGB has lost favor in recent years due to major long-term complications, such as slippage, migration, and intolerance, requiring explantation [1]. Thus, endoscopic surveillance of patients with an AGB is an integral part of postoperative management [2][3][4], because patients may have abnormal symptoms such as epigastric pain, nausea, frequent vomiting, and gastroesophageal reflux, all of which might be attributable to AGB complications. Occasionally, we have noted on postoperative endoscopy a mosaic or snakeskin appearance of gastric mucosa around bands in patients with band migration. In this study, we retrospectively reviewed the postoperative endoscopic findings of our AGB patients and investigated the clinical significance of this novel finding.

MATERIALS AND METHODS

AGB surgery was performed by a single surgeon (S.M.K.) as previously described [6], and after surgery, patients were recommended to undergo an endoscopic examination whenever there was clinical suspicion of a complication, such as slippage, erosion, or incapacitating vomiting, reflux, or epigastric pain. In addition, asymptomatic patients were recommended to undergo biannual endoscopy for screening purposes. The exclusion criteria were the performance of endoscopy at another hospital and endoscopic images too poor to interpret. Endoscopies in the study subjects were performed by one of five authors of this study. After removing saline from the band balloon, patients were transferred to our endoscopic suite. Initially, the distal esophagus was observed in forward view, followed by the esophagogastric junction (EGJ) for pouch formation, prolapse, and esophagitis; the scope was then advanced to the antrum, and a J-turn was performed to evaluate band migration (Fig. 1). A snakeskin appearance (SSA) was defined as a mosaic or reticulated appearance of gastric mucosa compressed by the gastric band under magnified view, only in cases with normal background gastric mucosa of the fundus or cardia; similar findings have been reported in portal hypertensive gastropathy [7,8], eosinophilic gastritis [9], and H. pylori infection [10]. However, we observed this pattern exclusively in the fundus involved by the gastric band, and rarely in any other part of the stomach, even in the nearby fundus or cardia.

Gastric mucosal patterns were categorized based on the appearance of gastric mucosa compressed by bands, as follows; Group A: normal appearance, Group B: SSA without band migration, Group C: SSA with band migration, and Group D: recuperated gastric mucosa with advanced band migration.

Table 1. Demographic, anthropometric, and clinical data of each group of patients that underwent postoperative gastroscopy from Jan 2012 to Oct 2018 (N=109)
DISCUSSION
Band migration (also termed erosion) is a well-known complication of AGB implantation. Unlike other complications such as slippage and intolerance, band migration always requires removal of the band system, because long-standing inflammation associated with an eroded gastric band has been proven to be associated with a number of serious morbidities [11][12][13]. The mechanism of intragastric migration of an AGB has yet to be defined. In our opinion, band migration is the consequence of a chronic infectious process initiated by infection of the AGB system caused by poor surgical technique, an abscess around an anchoring stitch or reservoir port, or tension on (or ischemia of) the gastric mucosa around the AGB (Fig. 2).
The clinical manifestations of band migration can range from asymptomatic to acute peritonitis and sepsis.
Asymptomatic individuals often increase food intake and gain weight due to loss of restriction by the band balloon.
In our series of patients, abdominal pain was the pre-

In the present study, SSA was found to be strongly associated with band migration; the sensitivity of SSA for the prediction of band migration was 60% (3/5) (Table 2). Although there is debate about the optimal timing of migrated AGB removal [11,[15][16][17][18][19], many case reports have described serious complications arising from long-standing band migration, including liver abscess [20], small bowel injury [21], delayed bleeding [22,23], and peritonitis [13,24]. In addition, no normal, healthy undulating gastric folds were observed in gastric mucosa compressed by a band associated with band migration, and unlike PHG, the background gastric mucosa in the fundus and cardia was not grossly affected. We also speculate that band migration can be detected endoscopically before symptoms develop, because it seems likely that the migration process starts from a mucosal change and a tiny hole and then leads to full-blown intraluminal migration. Currently, no pathognomonic sign or symptom has been identified for band migration.
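For clarity, the reported sensitivity figure can be reproduced as simple arithmetic (mirroring the paper's own counts):

# Of the 5 Group B patients (SSA without migration at index endoscopy),
# 3 later underwent band explantation due to migration.
ssa_positive = 5
later_migrated = 3
print(f"{later_migrated / ssa_positive:.0%}")  # -> 60%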
Our study has several limitations. First, it is limited by its retrospective design and the relatively small number of patients with available postoperative endoscopy findings.
In addition, given that band migration is a chronic process, ... than that in other regions of the gastric wall and is susceptible to injury. We consider that biopsy in this situation might induce transmural injury of the gastric mucosa and cause band migration, and that biopsy is therefore unethical in most cases. Further studies are needed to analyze the histologic findings of the SSA in selected cases. In conclusion, we found that an SSA of gastric mucosa compressed by the band was strongly associated with AGB migration.
Furthermore, we believe this finding might predict band migration before endoscopic confirmation and thus prevent complications associated with long-standing band migration. We therefore recommend careful observation of the pattern of gastric mucosa compressed by the gastric band, as well as of gross migration of the band.
"year": 2019,
"sha1": "513caf524328084f4ad644f4b6a030b2f1c4929d",
"oa_license": null,
"oa_url": "https://doi.org/10.17476/jmbs.2019.8.2.37",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6eb8f0dce081142d9030697ab0f6dc4b4c48bf17",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The effects of habitual code-switching in bilingual language production on cognitive control
Abstract This study explored how bilingual code-switching habits affect cognitive shifting and inhibition. Habitual code-switching data from 31 Mandarin–English bilingual adults were collected through the Language and Social Background Questionnaire (Anderson, Mak, Keyvani Chahi & Bialystok, 2018) and the Bilingual Switching Questionnaire (Rodriguez-Fornells, Krämer, Lorenzo-Seva, Festman & Münte, 2012). All participants performed verbal and nonverbal switching tasks, including a verbal fluency task, a bilingual picture-naming task and a colour-shape switching task. A Go/No-go task was administered to measure participants' inhibitory control. Frequent bilingual switchers showed higher efficiency in both English-to-Chinese verbal switching and nonverbal cognitive shifting. Additionally, bilinguals with intensive dense code-switching experience performed better in the Go/No-go task. In general, the study revealed connections between bilinguals' intensity of single-language context experience and goal maintenance efficiency, which partially supported the Adaptive Control Hypothesis' prediction (Green & Abutalebi, 2013). It also indicated that bilinguals' dense code-switching experience facilitates their conflict monitoring and response inhibition.
Introduction
Bilingual speakers commonly select the appropriate language to use in different contexts, such as using English at work and Chinese at home, or switching between two languages in the same conversation. For successful code-switching production, bilinguals need to access the appropriate language and resolve competition from the unwanted one (Bonfieni, Branigan, Pickering & Sorace, 2019; Green, 1998; Green & Abutalebi, 2013), a process that places additional demands on domain-general cognitive control mechanisms (e.g., Abutalebi & Green, 2007, 2008; Calabria, Costa, Green & Abutalebi, 2018).
One reason for these inconsistent findings might be the lack of standard measures of bilinguals' habitual code-switching experience. Self-reported questionnaires, measuring bilingual code-switching frequency in everyday life, are commonly used in the available literature. However, information about the sociolinguistic context, such as how languages are switched and used on a daily basis or in various situations, is seldom reported. Furthermore, lab-based experimental paradigms measuring the relationship between code-switching and cognitive control may have reduced ecological validity (Green & Abutalebi, 2013; Green & Li, 2014; Hofweber, Marinis & Treffers-Daller, 2020; Kheder & Kaan, 2021). To measure language use habits ecologically in bilingual speakers, it is crucial to have methods, such as computing language entropy (Gullifer, Kousaie, Gilbert, Grant, Giroud, Coulter, Klein, Baum, Phillips & Titone, 2021; Gullifer & Titone, 2020), that can assess not only the quantity of bilingual switching but also the traits of switching in naturalistic settings.
Cognitive control processes in bilingual code-switching production
The current study aims to investigate the consequences of habitual code-switching practices on bilinguals' language and cognitive control within the predictions of the Adaptive Control Hypothesis (ACH; Green & Abutalebi, 2013) and the Control Process Model (CPM; Green & Li, 2014). These models predict that the cognitive control strategies bilinguals apply in nonverbal cognitive tasks may vary substantially with the language control processes involved in different code-switching practices. Specifically, the frequent use of two languages separately, or the use of languages more cooperatively in the same context, is expected to have different impacts on cognitive control. The ACH and the CPM propose three interactional contexts: i) single-language, ii) dual-language, and iii) dense code-switching contexts, and predict that bilinguals' language control processes and degree of cognitive control vary across the three contexts. The ACH discusses how cognitive control, such as inhibitory control and cognitive flexibility, can dynamically change and adapt to facilitate efficient bilingual production. The CPM further addresses how bilingual speakers draw on available cognitive resources in processing different code-switching utterances, proposing competitive and cooperative control modes to describe the diversity of bilinguals' language use in communication and the mediation of code-switching processing.

When bilinguals use their languages separately in distinct contexts, their languages are in a "competitive mode"; that is, bilingual speakers have to selectively control, or suppress, the untargeted language in favour of the target language. This process places increased cognitive demands on goal maintenance and interference control. In contrast, bilinguals engaged in dense code-switching, where both languages are produced interchangeably within utterances, use their languages more cooperatively and presumably with relatively lighter control of both languages, enabling flexible and intensive code-switching production. Bilinguals in a dual-language context generally use their languages alternately with different interlocutors or switch between languages intersententially; that is, two languages are involved in the same context but switched at clause boundaries. Compared to dense code-switching utterances, processing bilingual language in this context may place a higher demand on cognitive control components. In particular, it could actively engage salient cue detection, response inhibition and task engagement/disengagement to efficiently ignore distracting language interference, suppress ongoing language production and shift to responding in the other language (Kałamała, Szewczyk, Chuderski, Senderecka & Wodniecka, 2020; Lai & O'Brien, 2020). Hence, cognitive control is hypothesised to be intensively exercised in a dual-language context or through long-term experience of intersentential switching practices. In sum, both models assume that code-switching processing depends on the bilinguals' habitual language control, mediated by the communicative demands of a specific language environment.
Effects of bilinguals' code-switching habits and language proficiency
More studies now recognise the diversity of individual differences in bilingual code-switching habits, but their effects on cognitive control remain unclear. Some studies have examined the effects of individual differences in code-switching frequency on cognitive control. For example, Soveri et al. (2011) found that a higher frequency of code-switching in daily life contributed to more efficient top-down management of competing tasks (i.e., smaller mixing costs in error rates). Similarly, enhanced inhibitory control was found among frequent bilingual code-switchers in daily communication compared with those who rarely switch between languages (Prior & Gollan, 2011). Although the available evidence suggests that code-switching frequency plays a role in facilitating cognitive control performance in bilinguals, the participants in these studies were from different communities and social backgrounds (Hofweber, Marinis & Treffers-Daller, 2016; Kheder & Kaan, 2021). Care should be taken when interpreting results from bilingual participants who belong to different social communities, as bilinguals within a community tend to have a homogeneous language repertoire. As Verreyt et al. (2016) mentioned, Hispanics in southern California use Spanish and English more interchangeably and engage in more switching than Spanish–English bilinguals in other US communities, such as San Francisco.
As the ACH and the CPM suggest, bilinguals are able to adapt their language control mechanisms, or recruit different strategies, to produce appropriate code-switching utterances for distinct communicative purposes in interactional environments. The contexts in which bilinguals habitually use languages concurrently or produce code-switching are therefore also essential in shaping the cognitive processes involved in managing verbal and nonverbal tasks.
With a rigorous measurement of participants' habitual language use contexts, Hartanto and Yang (2020) found that bilinguals with higher intensity of dual-language context engagement had lower switching costs in a switching task than those who habitually use language in single-language contexts. Modulation effects have also been reported for interference control. Ooi et al. (2018) found that bilinguals in a dual-language context were more engaged in interference control than bilinguals who habitually use language in a single-language context. Similarly, Lai and O'Brien (2020) reported that higher engagement in a dual-language context was associated with more efficient verbal shifting and nonverbal interference control. However, some studies (e.g., de Bruin, Bak & Della Sala, 2015; Hofweber et al., 2016; Kałamała et al., 2020) have failed to identify effects of bilinguals' habitual language use contexts that would support the predictions of the ACH and the CPM.
Another important factor that has been shown to play a role in bilingual language processing and cognitive control is language proficiency (e.g., Declerck & Kormos, 2012; Kheder & Kaan, 2019; Pivneva, Palmer & Titone, 2012). In Mishra, Hilchey, Singh and Klein's (2012) study, proficient Hindi–English bilinguals were found to outperform bilingual peers with lower L2 proficiency in a target detection task, reflecting the modulating effect of L2 proficiency on interference and attentional control. Similarly, Yow and Li (2015) found associations between balanced bilingual proficiency and stronger inhibitory control and cognitive shifting ability. As has been pointed out, language proficiency is closely interrelated with language usage and communicative context. Various components, such as length of L2 environment exposure, diversity of social language usage and language dominance in distinct contexts, can affect degrees of language proficiency in bilinguals. For instance, Luk and Bialystok (2013) found a significant correlation between daily language use and language proficiency, emphasising the intercorrelation of these two factors and the multifaceted nature of bilingualism. In fact, bilingual language use is closely linked to language proficiency, such that code-switching only occurs when proficiency in both languages reaches a certain level (Kheder & Kaan, 2021, p. 3). Bilingual speakers, especially highly proficient bilinguals, are able to use their languages actively and produce code-switching on a daily basis, even if they are from different social communities.
In Verreyt et al.'s (2016) study, the effects of code-switching frequency on inhibitory control were only found among proficient bilingual speakers. Specifically, they found that frequent Dutch–French code-switchers with balanced language proficiency exhibited more efficient interference inhibition compared to balanced bilingual non-switchers. Other work tested highly proficient bilinguals with different code-switching frequencies in their communication and observed a significant association between a high frequency of code-switching and reduced switching costs in task-set switching performance, reflecting the enhanced cognitive shifting skills of balanced bilinguals who habitually switch frequently between languages. Consistent with this finding, Barbu et al. (2018) found an association between frequent code-switching and better performance in task-set shifting, suggesting that code-switching frequency among proficient bilinguals is likely to boost cognitive shifting efficiency.
Notably, these findings show the interactive effects of bilingual language proficiency and code-switching frequency on cognitive control. The activation levels of both languages are comparable among balanced bilinguals, while bilinguals with unbalanced language proficiency usually have their languages activated to different levels (Blumenfeld & Marian, 2007). Switching between languages frequently requires more effort in conflict monitoring and inhibitory control to avoid competition deriving from the co-activated languages. Hence, highly proficient and frequent code-switchers display higher efficiency in conflict monitoring and inhibitory control thanks to their extra "training" in cognitive and language control (e.g., Kheder & Kaan, 2021).
The present study
The current study aims to understand the effects of habitual code-switching experience on Chinese-English bilingual speakers' domain-general cognitive shifting and inhibition performance. Three main research questions will be addressed: 1) What are the effects of bilinguals' code-switching habits and language proficiency on cognitive shifting and response inhibition? 2) Does increasing frequency of code-switching lead to better performance in a cued-language switching task and nonverbal cognitive control tasks? 3) Is the bilinguals' performance in verbal and nonverbal switching tasks intercorrelated?
It is predicted that: 1) Higher L2 proficiency and code-switching frequency will facilitate bilingual participants' performance in non-verbal cognitive shifting and response inhibition tasks. 2) Bilinguals with intensive experience of using languages in a single-language context will perform less proficiently in both verbal and nonverbal switching tasks.
Methods
The study met the requirements and gained the approval of the Ethics Committee of the Institute of Education, University College London (UCL data protection registration number: Z6364106/2019/03/108), concerning empirical studies with human participants. Only individuals residing in English-speaking countries at the time of the study, with daily use of Chinese and English, were invited to participate. An information sheet and consent form were provided to individuals who expressed interest, so that they could decide whether or not to participate. No data were collected until participants had signed the informed consent form. Before the study started, the researcher introduced participants to the procedure and the instructions for each task in Chinese, to make sure they fully understood how to complete the study. After participants completed the whole study, they received a debriefing explaining the goals of the study and the aims of each task they had just experienced. Participants were also asked not to share information related to the study goals with anyone they knew who might be participating in the study.
Participants
Thirty-one (18 females; mean age: 28 years, SD = 4.53, range 22–42 years) healthy right-handed Mandarin–English bilinguals living in English-speaking countries (i.e., UK, US, Canada, Australia and Ireland) took part in this study. All participants were Mandarin Chinese L1 speakers and had resided in an English-speaking country for 3.81 years on average at the time of the experiment. All participants had learned English as a second language (L2) in mainstream school settings in China, on average after the age of 9 (SD = 4.81). Participants' habitual code-switching experience was measured through the Bilingual Switching Questionnaire (BSWQ, Rodriguez-Fornells et al., 2012) and the Language and Social Background Questionnaire (LSBQ, Anderson et al., 2018). A LexTALE test (Lemhöfer & Broersma, 2012) was used to measure participants' English proficiency. Table 1 below shows the participants' demographic information (age, L2 AoA, L2 proficiency, L2 exposure duration) and habitual code-switching information.
At the beginning of the session, a semantic verbal fluency test adapted from Woumans, Van Herck and Struys (2019) was conducted. This test served as an objective measure of proficiency in both languages and as a baseline measure of language switching proficiency. In this test, participants were given 60 seconds to name words belonging to a specific semantic category (i.e., animals, vegetables and jobs). The test included English/Chinese single-language and mixed-language conditions. In the single-language condition, participants were asked to produce words belonging to the category in one specific language (Chinese or English), while in the mixed-language condition, participants were required to continuously switch between their two languages when producing words within a given category. Categories, and the language orders in which the categories were examined, were counterbalanced across participants. The mixed-language condition was always completed last. Participants' baseline switch costs were calculated following Woumans et al. (2019), i.e., as the difference between the number of L1 words produced in the L1 single-language condition and the number of L1 words produced in the mixed-language condition.
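As an illustration, a minimal Python sketch of this baseline switch-cost computation (with hypothetical word counts; not the authors' analysis code):

def baseline_switch_cost(l1_single_count, l1_mixed_count):
    # L1 words produced alone minus L1 words produced while switching.
    return l1_single_count - l1_mixed_count

# Hypothetical participant: 18 L1 words in the single-language condition,
# 11 L1 words in the mixed-language condition.
print(baseline_switch_cost(18, 11))  # -> 7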
Before experimental tasks, all participants completed a Chinese-English Bilingual Switching Questionnaire (BSWQ) adapted from Rodriguez-Fornells et al. (2012) to assess their habitual code-switching experience. The 12-item questionnaire measured bilingual code-switching habits from four constructs (three items for each construct): L1 switching tendencies, L2 switching tendencies, contextual switch and unintended switch. Two additional questions were added to assess the bilinguals' habitual code-switching types (see Appendix 1).
At the end of the experimental session, all participants completed the Chinese-translated Language and Social Background Questionnaire (LSBQ, Anderson et al., 2018) to collect information about their bilingual language use experience (see Appendix 2). Participants' responses to the Likert-scale questions about language use on different occasions, in social activities and with different interlocutors were summarised into three main dimensions based on the study's purposes, quantifying their degree of L2 use at home, in non-home situations and in daily activities (see Appendix 3). Participants' language use in their different life stages was not summarised into these three dimensions, because participants in this study had moved to English-speaking countries for work or higher education after high school in China and consistently reported that, for the majority of their earlier life stages (i.e., from infancy to high school), they had been exposed to a Mandarin monolingual environment (see Table 1).
Picture-naming task
The picture-naming task in the current study measured the bilingual participants' verbal response accuracy and response latency in order to examine both switch and mixing costs for their two languages, and how these variables were affected by language proficiency and habitual language use experience.
In this task, participants were required to name black-and-white line-drawn objects in a specific language (i.e., Chinese or English) according to specific cues, as quickly and accurately as possible. Their verbal responses were automatically recorded and their response times (RTs) analysed using Praat software (Boersma & Weenink, 2018). Line-drawn objects were selected and adapted from Snodgrass and Vanderwart (1980) and from pictures in the Philadelphia Naming Test (Roach, Schwartz, Martin, Grewal & Brecher, 1996). Double cues (Logan & Bundesen, 2003; Zantout, 2019) were used to instruct participants to name objects in Chinese or English: participants named an object in English if it was presented on a blue background together with the British national flag, and in Chinese if it was presented on a red background with the Chinese national flag. Forty-one different pictures were used in this task and repeated within and between blocks.
This task consisted of one practice session with 10 trials for both Chinese and English naming, two single-language blocks (restricted to the use of one language) and three mixed-language blocks (language chosen according to the cues). Each mixed-language block included 57 trials: 28 switch trials (language switch from the previous trial), 28 repeated trials (same language as the previous trial) and one practice trial at the beginning. Half of the switch trials in each mixed-language block were English-to-Chinese; 84 trials were evenly allocated to the two single-language blocks, 42 in Chinese and 42 in English. Each picture was presented on the screen for 2500 ms, followed by a 500 ms white blank. The whole task lasted 30 minutes.
In the single-language blocks, pictures were randomised across participants to avoid consecutive repetition. In the mixed-language blocks, the sequence of switch and repeated trials was pseudo-randomised per participant, so that the number of trials of each type was the same for every participant. In addition, to avoid possible effects of the sequential order of repeated and switch trials, no more than four consecutive trials of the same type appeared in a row. A dummy trial was also included at the beginning of each mixed-language block so that participants could not predict whether the first experimental trial would be a switch or repeated trial (a sketch of such a constrained shuffle is given below). Figure 1a illustrates the task structure and the trial presentation in each session.
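A minimal Python sketch (not the authors' script) of pseudo-randomising equal numbers of switch and repeated trials under the run-length constraint described above:

import random

def pseudo_randomise(n_switch=28, n_repeat=28, max_run=4, seed=None):
    rng = random.Random(seed)
    while True:  # rejection sampling: reshuffle until the constraint holds
        seq = ["switch"] * n_switch + ["repeat"] * n_repeat
        rng.shuffle(seq)
        run, ok = 1, True
        for prev, cur in zip(seq, seq[1:]):
            run = run + 1 if cur == prev else 1
            if run > max_run:  # more than four identical trials in a row
                ok = False
                break
        if ok:
            return seq

print(pseudo_randomise(seed=1)[:8])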
Participants' verbal response accuracy was analysed manually. Responses were not coded as errors if participants used different terms for the same object due to their language habits; for example, "jiandao" and "jianzi" both mean "scissors". In line with the data pre-processing method in Bonfieni et al.'s (2019) study, responses were coded as errors when participants named an object in the wrong language or did not answer; in this case the trial was marked as an error and excluded from the RT analysis, and the following trial was also deleted from the analysis. If participants hesitated, paused or self-corrected, the trial was also marked as an error and excluded from further analysis, but the following trial was retained (see Footnote 1). Practice trials and RTs from error trials were not included in the data analysis. Participants' reaction times, also reported as voice onset time (VOT), were analysed using the Praat phonetic software (Boersma & Weenink, 2018; Filippi, Karaminis & Thomas, 2014). An internal TextGrid (silences) script in the software slices each audio file into "sound" and "silence" segments. For a segment to be considered "sound", it had to have a minimum pitch of 100 Hz, to have exceeded a -25 dB threshold and to have lasted at least 0.1 s; "silence" segments had to last at least 0.2 s. The starting point of the first "sound" segment was taken as the voice onset time in the picture-naming task. The response time in each trial was also checked manually, to discard trials with unclear voice recordings and to revise response times in trials affected by loud noise during participants' utterances (for an example of VOT analysis, see Appendix 4).
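As an illustration of this segmentation, a hedged Python sketch using the parselmouth interface to Praat (not the authors' internal script; the file name is hypothetical), with the parameters given above:

import parselmouth
from parselmouth.praat import call

def voice_onset_time(wav_path):
    sound = parselmouth.Sound(wav_path)
    # "To TextGrid (silences)": minimum pitch (Hz), time step (0 = auto),
    # silence threshold (dB), minimum silent interval (s), minimum
    # sounding interval (s), and the two interval labels.
    grid = call(sound, "To TextGrid (silences)",
                100, 0.0, -25.0, 0.2, 0.1, "silent", "sounding")
    n_intervals = call(grid, "Get number of intervals", 1)
    for i in range(1, int(n_intervals) + 1):
        if call(grid, "Get label of interval", 1, i) == "sounding":
            # Start of the first "sound" segment = voice onset time.
            return call(grid, "Get starting point", 1, i)
    return None  # no speech detected in this recording

print(voice_onset_time("trial_007.wav"))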
Nonverbal colour-shape switching task

The colour-shape switching task used in the present study was adapted from Prior and MacWhinney (2010) to assess the bilinguals' shifting abilities. In this task, participants were instructed to make colour or shape judgements on visually presented stimuli, based on cues, by pressing specific buttons on the keyboard. The cue for shape judgements was a black heart icon, while a rainbow icon indicated colour judgements. Visual stimuli were circles and triangles, either blue or yellow. Each cue appeared for 250 ms; the cue then remained on the screen while the stimulus was presented in the centre of the screen for 4,000 ms. Participants used both hands to make key-press responses: two left-hand keyboard buttons, "x" and "c", and two right-hand buttons, "n" and "m", were the response keys for colour and shape judgements. Emails with clear instructions were sent to participants before they started the study, asking them to prepare stickers/paper in the corresponding colours (i.e., yellow and blue) and shapes (i.e., circle and triangle) to label the four target buttons (i.e., x, c, n, m) on their keyboards (see Figure 1b below). The labelled buttons were counterbalanced across participants.
The task used a sandwich design (Prior & Gollan, 2011). After 16 practice trials, there were two single-task blocks (colour and shape, order counterbalanced across participants), each with 34 experimental trials and 2 initial practice trials. Then, 16 mixed-task practice trials were followed by three mixed-task blocks. Each mixed-task block consisted of 50 trials in total, with 46 experimental trials and 4 practice trials evenly allocated at the beginning and end of the block. The ratio of switch to non-switch trials in each mixed-task block was 50:50. After the mixed-task blocks, participants performed the two single-task blocks again, presented in the opposite order from the first session. Participants' reaction time and response accuracy in each trial were automatically recorded (see Footnote 2).
Go/No-go task: Whack-the-mole task

A whack-the-mole task was used to measure participants' inhibitory control (Filippi, Ceccolini & Bright, 2021). The various moles in this task were the "go" stimuli, requiring participants to respond (a whack!) by pressing the space bar on the keyboard. Aubergines were "no-go" stimuli, and participants were required to withhold their response when one appeared on the computer screen. Each trial started with a picture of a hole in a meadow for 500 ms, after which a mole or an aubergine appeared for 1800 ms (Figure 1c). Participants were instructed to respond as quickly and accurately as possible.
The task included 1 practice block, consisting of 3 no-go and 7 go-signal trials, and 4 formal blocks, including 55 no-go and 185 go-signal trials in total; the no-go withholding percentage was 23%. Participants' reaction times and response accuracy for go trials were recorded; furthermore, unsuccessful response withholding in no-go trials was calculated as a percentage of false alarms for data analysis.

Procedure

All participants provided informed consent before taking part in this online study. The study lasted about 90 minutes. Participants joined the study remotely from quiet rooms and were instructed to minimise noise distractions around them during the procedure. Prior to any online tasks, participants were given enough time to test their network connection and set up the experiment platform. Technical problems or issues with online task loading were detected and resolved by participants, with support from the researcher, at this stage; participants who still failed to access the online experiment platform or tasks were excluded from the study. After completing the online BSWQ and L2 proficiency test, participants were invited to a one-to-one online meeting with the researcher in which the verbal fluency test was administered. Afterwards, participants were given links for the remaining three tasks: the picture-naming task, the Go/No-go task and the colour-shape switching task. All participants completed the picture-naming task first, and the order of the two nonverbal cognitive tasks was counterbalanced across individuals. They completed the LSBQ online at the end of the experiment session.

Footnote 1: If the former trial was named in the wrong language, the following switch trial was also excluded, because the RT of the latter trial was not primed by the targeted language. For example, if trial 7 was designed to be named in English and the following trial (trial 8) in Chinese, the RT for trial 8 is intended to reflect naming speed in Chinese after English naming (i.e., the RT for English-to-Chinese switching); this RT cannot be calculated once trial 7 is wrongly named in Chinese. A similar situation applies to RTs for repeated-language trials in the mixed-language blocks: if both trials 7 and 8 were designed to be named in Chinese but trial 7 was wrongly named in English, the RT for trial 8 is not the RT for a Chinese repeated trial but the RT for Chinese naming primed by English. By contrast, if a participant eventually named a trial correctly in the required language, the RT of the following trial was unaffected and could be calculated, as it was correctly primed by the required language; for example, even if a participant hesitated or self-corrected while naming trial 7 in English, the RT for trial 8 was correctly primed by English naming and could be calculated as the RT for Chinese naming switched from English. To minimise calculation deviations, however, the RT for the hesitated or self-corrected trial itself (e.g., trial 7 in the above example) was excluded.

Footnote 2: According to a recent study comparing lab-based and online task RTs (Bridges, Pitiot, MacAskill & Peirce, 2020), the online platform used in this study, PsychoPy online (version 2020.1), achieved an RT standard deviation under 3.5 ms on every browser/OS combination. Furthermore, PsychoPy in Python achieved sub-millisecond precision almost across the board. Specifically, PsychoPy on Windows 10 achieves a mean timing precision of 1.36 ms on Chrome and 1.84 ms on Firefox; on macOS, the mean timing precision is 4.84 ms on Chrome and 2.65 ms on Firefox. Therefore, to control timing variance caused by different operating systems, participants were required to use only the Chrome or Firefox browser for the online tasks (Firefox was recommended when both were available).
Statistics
Participants' reaction times (RTs) and response accuracy in the nonverbal cognitive control tasks and the picture-naming task were collected. Only RTs from correctly responded trials were included in the analyses. Both parametric repeated-measures ANOVAs and the corresponding nonparametric method, Friedman tests, were conducted to explore and compare participants' RTs and response accuracy in each task.
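For instance, a minimal Python sketch of the nonparametric comparison (with hypothetical per-participant accuracy means; not the authors' analysis code):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
single = rng.normal(0.97, 0.02, 31)    # one mean per participant
repeated = rng.normal(0.96, 0.02, 31)
switch = rng.normal(0.94, 0.03, 31)

chi2, p = stats.friedmanchisquare(single, repeated, switch)
print(f"Friedman chi2(2) = {chi2:.2f}, p = {p:.3f}")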
The study applied both multiple linear regression and Bayesian regression analyses to investigate the associations between participants' performance in the different tasks (i.e., RT switch/mixing costs in the verbal and nonverbal switching tasks, and RTs and response accuracy in the Go/No-go task) and their bilingual language experience. Specifically, the variables related to participants' bilingual language experience that were included in the regression analyses as independent variables comprised: L2 proficiency (the LexTALE score), L2 exposure (yrs), L2 use in daily activities, L2 use in non-home situations, L2 use at home, L1 switch tendency, L2 switch tendency, frequency of contextual switches, frequency of unintentional switches, and frequencies of intrasentential and intersentential switching. Participants' L1 and L2 verbal fluency, as well as their baseline switch costs calculated from the semantic verbal fluency task, were also included in the regression analyses. The correlations between the variables related to bilinguals' language experience were also analysed (see Appendix 11).
Given the small error rates, and because all participants performed highly accurately in the language- and task-switching tasks, their response accuracy in the two tasks was not included in further analyses (Bonfieni et al., 2019). Table A2.2 in Appendix 2 shows the predictors and dependent variables pooled together in the following regression analyses.
Outliers were detected before data analysis. Participants' responses for L2 environment exposure (yrs) were not normally distributed, and one extreme data point (value: 17) was found. Regression analyses with and without this value were conducted, and removing the extreme value did not significantly affect the final results. In the stepwise regression modelling, after each step in which a variable was added, all candidate variables in the model were checked to see whether their significance had dropped below the specified tolerance level, and R² was reported for model selection. If a nonsignificant variable was found, it was removed from the model; thus, only the most significant variables were retained in the final model, identifying the best predictors of the dependent variable. The following sections present the results of the repeated-measures ANOVAs and the regression analyses in turn.
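To make the selection procedure concrete, a Python sketch of forward stepwise selection with a backward check (not the exact software used in the study; the entry/removal thresholds are illustrative):

import pandas as pd
import statsmodels.api as sm

def stepwise_select(X, y, p_enter=0.05, p_remove=0.10):
    selected = []
    while True:
        remaining = [c for c in X.columns if c not in selected]
        # Forward step: try each remaining predictor and note its p-value.
        pvals = pd.Series(dtype=float)
        for c in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [c]])).fit()
            pvals[c] = fit.pvalues[c]
        if pvals.empty or pvals.min() >= p_enter:
            break
        selected.append(pvals.idxmin())
        # Backward check: drop any predictor no longer significant.
        fit = sm.OLS(y, sm.add_constant(X[selected])).fit()
        p_model = fit.pvalues.drop("const")
        if p_model.max() > p_remove:
            selected.remove(p_model.idxmax())
    return selected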
Results
Performance in the picture-naming task
Reaction time
A 2×3 repeated-measures ANOVA was used to analyse the main effects of language (English, Chinese) and trial type (Single, Repeated, Switch) on participants' RTs. Table 2 shows the mean reaction time (RT) and mean response accuracy for naming pictures in Chinese and English.
The results showed a significant main effect of trial type on participants' language-switching performance, F(2, 60) = 23.55, p < .001, ηp² = .44. Specifically, RTs for switch trials were significantly longer than for repeated trials, while RTs for repeated and single-language trials were comparable. Moreover, participants were 30 ms faster at naming pictures in English than in Chinese (L1), F(1, 30) = 5.03, p = .03, ηp² = .14, showing an effect of language on participants' RTs.
The analysis also revealed a significant language × trial type interaction affecting participants' cued-language switching performance, F(2, 60) = 19.92, p < .001, ηp² = .40. An RT asymmetry between switching to English and switching to Chinese was found, p = .001: participants' RT switch costs to Chinese were about 73 ms greater than to English. Although participants' RTs for Chinese and English single-language trials did not differ significantly, they responded faster in English repeated trials than in Chinese ones in the mixed-language blocks, p = .03. This finding reflects the reversed language dominance effect in bilinguals' cued-language switching production (e.g., Christoffels, Firk & Schiller, 2007; Christoffels, Ganushchak & La Heij, 2016; Declerck, 2020; Gollan & Ferreira, 2009; Zhang, Li, Ma, Kang & Guo, 2021). That is, bilingual speakers in mixed-language conditions apply sustained, global inhibition to the dominant language to enable efficient production across the two languages, and this process can ultimately yield faster retrieval times for the less dominant language than for the dominant one.
Participants' RTs for the different trial types within each language were also compared. RTs for non-switch trials did not differ between Chinese repeated and single-language trials (p = 1.00). In contrast, participants responded fastest in English repeated trials (p < .001) in the mixed-language blocks, while their RTs for English switch and single-language trials were comparable (p = .08). Participants' improved RTs for English repeated trials in the mixed-language blocks might be caused by carryover inhibition of the L1 (Jylkkä et al., 2017): it is possible that inhibition of the L1 carries over to the following L2 repeated trials in the mixed-language blocks, facilitating participants' L2 production. In addition, the unpredictability of switch and stay trials in the mixed-language blocks increased attentional demands, requiring participants to remain prepared at all times to respond accurately; this may have raised participants' concentration and their efficiency at naming pictures in the correct language in the mixed-language blocks as compared to the single-language blocks. However, as this study was conducted online with a small sample, both the effect of carryover reactive inhibition and that of the mixed-language condition on participants' L2 production need further investigation.
Participants' switch and mixing costs in the picture-naming task were analysed. Switch costs refer to differences in response time or accuracy between switch and repeated trials in the mixed-language blocks, representing transient control processes; mixing costs represent the sustained, global control of interference and compare responses in repeated trials in the mixed-language blocks with responses in single-language trials (Barbu et al., 2018; Declerck & Philipp, 2015; Ma, Li & Guo, 2016).
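A minimal Python sketch of these two cost measures (assuming a hypothetical trial-level data frame with columns participant, language, trial_type in {"single", "repeated", "switch"} and rt in ms, correct trials only):

import pandas as pd

def switching_costs(df):
    mean_rt = (df.groupby(["participant", "language", "trial_type"])["rt"]
                 .mean()
                 .unstack("trial_type"))
    # Transient control: switch minus repeated trials (mixed blocks).
    mean_rt["switch_cost"] = mean_rt["switch"] - mean_rt["repeated"]
    # Sustained control: repeated (mixed blocks) minus single-language trials.
    mean_rt["mixing_cost"] = mean_rt["repeated"] - mean_rt["single"]
    return mean_rt[["switch_cost", "mixing_cost"]]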
Contrary to expectations, an asymmetrical pattern of RT switch costs was not found in this task, F(1, 30) = 1.60, p = .22, ηp² = .05. One possible reason could be that the less dominant language (L2) is more easily and strongly primed by the language-switching cues (Heikoop, Declerck, Los & Koch, 2016). However, given the limited sample size, the cue-priming effect on the less dominant language in bilingual language switching remains unclear and is a potential direction for future studies.
In addition, participants' RT mixing costs to Chinese and English differed significantly, F(1, 30) = 21.07, p < .001, ηp² = .41, showing an asymmetrical pattern across L1 and L2: participants' RT mixing costs to English were about 86 ms smaller than to Chinese. Since participants' RTs in Chinese and English single-language trials were comparable (shown above), the smaller RT mixing costs to English reflect faster responses in L2 repeated trials, suggesting that stronger global inhibition of the L1 in the mixed-language blocks significantly facilitated bilinguals' L2 production. This finding is consistent with the reversed language dominance effect, i.e., shorter RTs for L2 repeated than L1 repeated trials in the mixed-language blocks, and jointly reflects the higher level of proactive inhibition of the L1 during bilingual language production in the mixed-language blocks.
Response accuracy
Results showed interactive effects of language and trial type on participants' response accuracy, F(2, 60) = 5.06, p = .01, ηp² = .14. Participants performed more accurately in English repeated trials in the mixed-language blocks than in English single-language trials (p = .01). Additionally, significantly higher response accuracy was found in English repeated trials compared with switch trials in the mixed-language blocks (p < .01). Furthermore, participants' accuracy in Chinese single-language trials was significantly higher than in English single-language trials, p = .03. Accuracy did not differ between trials switching to Chinese and those switching to English, p = 1.00. Switch and mixing costs in response accuracy were also analysed. Switch costs were at a similar level regardless of switching direction, F(1, 30) = .54, p = .47, ηp² = .02, and no asymmetrical pattern was found. However, the response accuracy mixing costs in English were significantly smaller than in Chinese, F(1, 30) = 9.90, p = .004, ηp² = .25.
Performance in the nonverbal shifting task
Participants' RTs and response accuracy in the colour-shape switching task were analysed; Table 3 shows their performance on the different trial types. Participants' RTs varied significantly across trial types, F(1.71, 51.41) = 108.28, p < .001, ηp² = .78. Longer RTs were found in switch trials than in non-switch trials (i.e., repeated and single trials), p < .001; furthermore, participants responded fastest in single-task trials, p < .001.
As participants' response accuracy was not normally distributed, a nonparametric Friedman test was used, showing that participants performed with comparably high accuracy in switch and non-switch trials, χ²(2) = 5.31, p = .07.
Performance in the response inhibition task
Participants' performance in the whack-the-mole task was analysed. Besides RTs and response accuracy for go trials, participants' unsuccessful rates of withholding responses to no-go stimuli (i.e., percentages of false alarms) were also analysed.
In general, participants responded quickly and accurately in the go trials, though they tended to make more errors in the no-go trials than in the go trials, F(1, 30) = 86.87, p < .001, ηp² = .74.
Regression analyses
How do participants' habitual code-switching and language proficiency affect their cued-language switching performance? Variables related to participants' habitual code-switching and their RTs in the picture-naming task were entered into a multiple linear regression model using the stepwise method and into a Bayesian regression model (see Appendix 5).
The analysis showed that bilinguals who habitually use their languages separately in different contexts (i.e., single-language users) were more prone to greater RT switch costs to Chinese in the cued-language switching task. This result further indicates that a higher degree of single-language bilingualism is associated with less proficient language switching, and that such contexts may exercise bilinguals' efficiency in language inhibition rather than switching. Consistently, the best-fit Bayesian model also indicated a positive association between participants' frequency of contextual switching and their English-to-Chinese switch costs in the picture-naming task (BF10 = 25.00, R² = .52). The models, in general, highlight the effects of intensive engagement in using languages separately (single-language context) on bilingual speakers' cued-language switching performance.
Participants' switch costs to English were also analysed; however, a significant relationship (BF10 = 135.77, R² = .59) between participants' Chinese-to-English switching performance and their habitual code-switching practices was only found in the Bayesian regression model (see Appendix 6). The model captures the effects of bilinguals' habitual code-switching frequency and competence on their cued-language switching performance. Specifically, it indicated that participants with higher frequencies of using two languages concurrently and of code-switching showed smaller switch costs to English in the picture-naming task. The model further shows that participants' L1 switch tendency correlated negatively with their switch costs to English. Participants' predominant use of L1 in bilingual communication indicates a high dependence on L1 in habitual language switching and unbalanced language proficiency. The smaller time costs of switching into English suggest that these participants needed less effort to reactivate the L2 and could efficiently inhibit the L1 to achieve fluent L2 production. In sum, this model suggests that proficient bilingual switchers who habitually use their languages concurrently can switch to English more efficiently and reactively inhibit Chinese in communication, even when their language proficiency is unbalanced.
As for participants' mixing costs to Chinese in the picture-naming task, both the frequentist (F(2, 27) = 5.95, p = .01, adjusted R² = .25) and the Bayesian regression model (BF10 = 7.16, R² = .31) showed that participants' L2 proficiency and their frequency of using the L2 outside the home significantly affected their mixing costs to Chinese in the language switching task (see Appendix 7). Since the participants in this study are native speakers of Mandarin Chinese, and Chinese is the language most of them predominantly use with family members (e.g., parents, cousins, and relatives), a higher frequency of using the L2 outside the home indicates a higher frequency of using Chinese and English separately on different occasions (i.e., a higher degree of single-language context bilingualism). Together with the variable of L2 proficiency, the models showed that less proficient bilinguals who habitually use their two languages separately on different occasions, without frequent switching, showed reduced mixing costs to Chinese in the language switching task. The results suggest that controlling linguistic interference from bilinguals' non-proficient language is less cognitively demanding, especially for single-language context bilinguals who frequently select and control which language to use in distinct settings. (In the BSWQ, "contextual switch" describes patterns of language switching based on contextual cues; that is, instead of switching between languages within one situation, bilinguals use their two languages separately for different purposes or in different situations. This construct (Rodriguez-Fornells et al., 2012) corresponds to some extent to the term "bilinguals in single-language context" in the ACH (Green & Abutalebi, 2013); higher scores on contextual switch reflect more intensive switching of the two languages across contexts, or more separate use of the languages on varied occasions.)
As for mixing costs to English, both regression models consistently found significant effects of participants' baseline code-switching proficiency on their mixing costs to English (F(1, 28) = 6.91, p = .01, adjusted R² = .17; BF10 = 34.50, R² = .44). Greater baseline switch costs indicate less balanced proficiency across the two languages and limited proficiency in code-switching. The models (see Appendix 8) showed that bilinguals who are less balanced across their two languages and non-proficient in language switching tended to show greater mixing costs to English, reflecting non-proficient bilingual switchers' greater cognitive effort in the sustained control of L2 production. The Bayesian model further suggested that participants' mixing costs seemed to increase steadily after age 30; however, this age effect was not found in the multiple regression model. It is therefore hard to confirm whether age is a significant factor in language switching production, since the sample size was small and the participants were relatively homogeneous in age (mean age = 28).
How do participants' habitual code-switching and language proficiency affect their performance in the colour-shape switching task?
The multiple linear regression model (F(2, 27) = 7.82, p = .002, adjusted R² = .32) and the Bayesian model (BF10 = 33.86, R² = .44) consistently showed effects of participants' frequency of using the L2 outside the home and their L2 verbal fluency on their RT switch costs in the nonverbal colour-shape switching task.
The models (see Appendix 9) described a negative correlation between bilinguals' L2 verbal fluency and their switch costs in the cognitive shifting task, and this correlation was more salient among participants who habitually use their two languages separately (i.e., with intensive single-language context engagement). Specifically, single-language context bilinguals (with a higher frequency of L2 use outside the home but predominant L1 use at home) with lower L2 verbal fluency performed less efficiently in the cognitive shifting task. The results point to hindered cognitive shifting efficiency attributable to participants' habitual language use in single-language contexts and their lower L2 proficiency.
Participants' RT mixing costs were also analysed in regression models; however, no significant effects of their habitual bilingual language use experience on nonverbal mixing costs were found.
How do participants' habitual code-switching and language proficiency affect their performance in the Go/No-go task?
The percentage of false alarms in the Go/No-go task, i.e., the rate of unsuccessful withholding of responses in no-go trials, was analysed in the regression models as an indicator of participants' response inhibition performance. Higher percentages of false alarms indicate poorer response inhibition.
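A minimal Python sketch of this false-alarm computation (assuming a hypothetical trial-level data frame with columns participant, trial_type in {"go", "no-go"} and responded, True when a key was pressed):

import pandas as pd

def false_alarm_rate(df):
    nogo = df[df["trial_type"] == "no-go"]
    # A response on a no-go trial is an unsuccessful withhold (false alarm).
    return nogo.groupby("participant")["responded"].mean() * 100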
This finding, that a higher frequency of unintended switches was associated with better response inhibition performance, was inconsistent with what previous studies reported (e.g., Rodriguez-Fornells et al., 2012; Soveri et al., 2011), where higher unintended switch frequency was broadly reported to reflect bilinguals' uncontrolled activation of the non-target language during bilingual language production, and to correlate with worse performance in cognitive inhibition and attentional control.
To explore the reasons for this finding, a correlation analysis between participants' unintended switch frequency and their frequencies of inter-/intrasentential switching was conducted. It showed that participants' unintended switch frequency significantly correlated with their frequency of intrasentential switching (Pearson's r = .50, p < .01). That is, participants with intensive experience of intrasentential switching in daily communication are relatively weaker in bilingual language control, and tend to control their co-activated languages "loosely" in communication. Similarly, the Bayesian model further indicated that, besides unintended switch frequency, habitual code-switchers with higher frequencies of inter- and intrasentential switching and intensive use of L2 in communication tended to have better response inhibition performance. Both the correlation analysis and the Bayesian model reflected the relationship between dense code-switching experience and response inhibition performance. Given that dense code-switchers cooperatively control their languages to realise efficient bilingual communication, and that linguistic items from both languages are in an "open control mode" during frequent switching back and forth (Green & Li, 2014), language control is relatively less cognitively demanding for them, and they could be weaker in appropriately inhibiting the nonintended language during language production. Therefore, the models reflected that dense code-switchers executed relatively looser control over their two co-activated languages (i.e., an open control mode) to produce efficient, intensive code-switching in communication, and such dense code-switching experience further facilitated their nonverbal response inhibition efficiency.
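The correlation step itself is straightforward; a minimal sketch with SciPy, again using a hypothetical data file and column names, is shown below.

```python
# Minimal sketch of the reported correlation (hypothetical column names).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("participants.csv")  # hypothetical data file
r, p = pearsonr(df["unintended_switch_freq"], df["intrasentential_switch_freq"])
print(f"Pearson's r = {r:.2f}, p = {p:.4f}")  # reported: r = .50, p < .01
```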
Although the facilitation effect shown in the Bayesian model became more salient with increasing participant age, it is premature to conclude that age is an important factor affecting bilinguals' response inhibition efficiency, considering the limited number and relatively narrow age range of the participants in this study.
Discussion
This study aimed to investigate the effects of bilingual language use experience on domain-general cognitive control in a group of 31 Mandarin-English bilingual adults. Results revealed that participants' efficiency of cognitive shifting and response inhibition was associated with their habitual code-switching frequency. Contrary to some previous studies (De Baene, Duyck, Brass & Carreiras, 2015; Declerck, Grainger, Koch & Philipp, 2017; Prior & Gollan, 2011), this study did not find significant associations between bilinguals' language switching and nonverbal task switching performance (consistent with Branzi, Della Rosa, Canini, Costa & Abutalebi, 2016; Calabria, Branzi, Marne, Hernández & Costa, 2015; Gollan, Schotter, Gomez, Murillo & Rayner, 2014; Prior & Gollan, 2013). However, the findings showed that participants' intensive code-switching practice in daily communication facilitated their performance in the cued-language switching task (e.g., Yim & Bialystok, 2012).
Cued-language production and its relationship with habitual bilingual language experience
Results in the picture-naming task not only showed a significant mixing cost asymmetry between L1 and L2, but also revealed reversed language dominance effects on participants' language production; that is, their RTs for L1 repeated trials were significantly longer than for L2 in the mixed language blocks. These findings reflected the consequence of sustained inhibition of L1 in the mixed language condition, which lowers the proactive activation level of L1 to enable efficient switching to L2 production (Christoffels et al., 2007; Declerck, 2020). They further indicated that participants applied global and sustained inhibition to their dominant language during bilingual production, even in conditions requiring the use of both languages.
The finding of a reversed language dominance effect on Chinese-English bilinguals' cued-language switching production was consistent with some previous studies on bilinguals speaking two typologically closer languages (e.g., Dutch and German, German and English). For example, Christoffels et al. (2007) tested a group of Dutch-German bilinguals' language switching performance in a cued, mixed language condition using the picture-naming task, and their results showed that participants in the mixed language block took longer to name pictures in Dutch (L1) than in German (L2). The result was also consistent with Heikoop et al.'s (2016) study, in which they measured German (L1)-English (L2) bilinguals' reaction times in the language switch, cue switch, and cue repetition conditions of the picture-naming task. They observed that bilinguals' less dominant language could be more strongly primed by the switching cues, showing shorter L2 RTs compared to L1 RTs in these three conditions.
The similar finding observed among Chinese-English bilinguals in the current study provided evidence for the facilitating effect of proactively inhibiting L1 in bilingual contexts on L2 production. Furthermore, it suggested that such reversed language dominance effects in bilingual language production can occur in broader scenarios, regardless of the specific L1-L2 pairing or the distance between the languages; moreover, it is reasonable to associate this effect with bilinguals' unbalanced proficiency in L1 and L2, rather than with the language distance between them (Declerck, 2020).
The absence of switch cost asymmetry among the current participants, who had unbalanced proficiency in their two languages, was inconsistent with previous findings (e.g., Gollan & Ferreira, 2009; Peeters & Dijkstra, 2018; Slevc, Davey & Linck, 2016). Previous studies (e.g., Costa & Santesteban, 2004; Costa, Santesteban & Ivanova, 2006; Linck, Schwieter & Sunderman, 2012; Meuter & Allport, 1999) have argued that an asymmetrical pattern of switch costs in L1 and L2 is associated with unbalanced bilinguals' differing degrees of transient control over the two languages, whereas switch cost symmetry is assumed to be associated with balanced-proficient bilinguals, whose transient control of the two languages during bilingual language processing is comparably strong.
However, Peeters and Dijkstra (2018) indicated that switch cost symmetry in cued-language switching production exists to some extent not only among well-balanced bilinguals, but also among less balanced bilinguals. They further addressed the facilitation of sustained dominant-language inhibition on bilinguals' L2 production in bilingual co-occurrence contexts. Given that the participants in the current study are Chinese-English bilinguals residing in English-speaking countries, and most of them are university students with intensive experience of using L1 and L2 separately in different contexts (e.g., predominantly using English in the classroom and reading in English, but speaking Chinese with family members or friends), their intensive experience of using languages in single-language contexts has equipped them with relatively strong capacities for maintaining the targeted language while controlling and inhibiting interference from the competing language (Green & Abutalebi, 2013). Therefore, they efficiently sustained control of their dominant language over competing co-activated linguistic items to facilitate L2 production in the mixed language conditions. As for the relationship between participants' habitual and cued language switching performance, the study showed that bilinguals who are more frequently engaged in language switching practices or contexts (dual-language or dense code-switching contexts), rather than single-language contexts, were more efficient in reactive inhibition of linguistic interference in the cued-language switching task, which was in line with the current study's hypothesis and previous findings (e.g., Barbu et al., 2018; Prior & Gollan, 2011).
In contrast, the smaller mixing costs to L1 were closely related to unbalanced-proficient bilinguals' intensive engagement in single-language contexts, reflecting their enhanced efficiency of sustained language control during language production in such contexts. As languages are not co-used in a single-language context, bilinguals' long-term experience of sustaining control of the nontargeted language in order to use the two languages distinctively, in turn, brings advantages to their proactive control mechanism. Therefore, single-language context bilingual speakers could perform proficiently in maintaining the targeted language, especially their dominant language, driven by their efficient sustained inhibition mechanism.
However, a modulation of single-language context on the efficiency of sustained inhibition of the non-dominant language was not observed. The Bayesian model showed an interconnection between increasing mixing costs to L2 and participants' lower proficiency in code-switching. Code-switching proficiency, as discussed in this study, indicates bilinguals' verbal fluency across L1 and L2 and their familiarity with code-switching in daily interactions. It seems that single-language context bilinguals with limited code-switching frequency and proficiency did not show advantages in efficiently controlling their non-dominant language in communication.
Relationship between habitual bilingual language experience and cognitive shifting

Switch costs in the task-set switching task reflected the costs of switching between different tasks driven by participants' local control mechanisms (Kiesel, Steinhauser, Wendt, Falkenstein, Jost, Philipp & Koch, 2010; Yang et al., 2016). Regression analyses revealed that bilinguals' higher frequency of engagement in a single-language context was related to greater switch costs in the nonverbal cognitive shifting task, showing that habitually using languages separately hindered bilinguals' cognitive shifting efficiency. According to the ACH (Green & Abutalebi, 2013), bilinguals engaged in a single-language context always keep their languages apart and do not mix them during communication, which further exercises their abilities in goal maintenance and interference control rather than cognitive shifting. Higher frequency of code-switching (e.g., Barbu et al., 2018; Prior & Gollan, 2011) and engagement in code-switching contexts (e.g., Green & Abutalebi, 2013; Lai & O'Brien, 2020) have been assumed to boost bilinguals' efficiency in shifting between different mental sets. Moreover, the results indicated that bilinguals' L2 fluency was also an important factor affecting their cognitive shifting performance. Therefore, bilinguals who are fluent in L2 and have intensive code-switching practice are expected to be efficient in cognitive monitoring and shifting.
Although the results showed modulations of bilinguals' habitual language switching frequency on their cognitive shifting, a similar association was not found between their cued-language switching and cognitive shifting performance. This finding was in line with studies showing little evidence for an overlap between the mechanisms of cued-language switching and cognitive shifting (e.g., Calabria et al., 2015; Klecha, 2013; Prior & Gollan, 2013). Bilinguals in cued-language switching tasks are guided by language selection cues or pictures, which is a bottom-up cognitive mechanism; in contrast, a top-down cognitive mechanism is assumed to direct bilingual language selection when bilinguals are allowed to switch between languages voluntarily or freely (Declerck & Philipp, 2015). The modulation of task-switching efficiency by frequent habitual language switching, rather than by cued-language switching, underscores the need to examine the role of habitual bilingual language experience in bilingual cognitive control. Another reason, as Klecha (2013) noted, is that switching between languages is an inherently complex process, involving multifaceted factors related to bilingual language experience as well as executive functions; furthermore, it poses many more cognitive challenges than switching between non-linguistic schemas.
In general, the results reflected the ACH's prediction that bilinguals with intensive experience of using languages in single-language contexts are less efficient in switching between mental-set tasks. In addition, consistent with this study's hypothesis, the results showed intercorrelations between improved cognitive shifting efficiency and more balanced bilingual proficiency together with a higher frequency of using both languages concurrently in communication. Using and switching two languages concurrently requires bilinguals to efficiently distinguish stimuli from a given abstract category (i.e., either linguistic or non-linguistic categories), which can boost their language-set shifting efficiency in communication. These skills could further extend to advantages in non-linguistic shifting, contributing to behavioural outcomes in cognitive shifting.
Relationship between habitual language switching and response inhibition
In the current study, a fast-paced Go/No-go task was administered to examine the association between bilinguals' frequency of code-switching and their response inhibition efficiency. Results showed that bilinguals highly engaged in dense code-switching tended to withhold their habitual responses to no-go stimuli more successfully, which suggests dense code-switchers' advantages in both avoiding habitual but erroneous responses and resolving response conflicts (Blackburn, 2013; Bunge, Dudukovic, Thomason, Vaidya & Gabrieli, 2002). It could be that global inhibition of the untargeted language, at least at the articulatory stage (i.e., the motor level), is also employed to facilitate code-switching production, in addition to the process of interference suppression (Hofweber et al., 2020). Intensive dense code-switching practice trains bilinguals' efficiency in response inhibition because they must constantly control their ongoing language before articulation and switch to the appropriate language for production.
Although the results were not strictly in line with the predictions of the ACH, in which inhibitory advantages are not supposed to be associated with dense code-switching practices, there are relevant studies showing similar intercorrelations between dense code-switching practices and enhanced performance in response inhibition tasks (e.g., Hofweber et al., 2016, 2020). It was argued that, besides inhibitory skills, participants in the Go/No-go task also have to constantly monitor for no-go signals among go trials, which activates proactive monitoring. Participants who are intensively engaged in dense code-switching practices are relatively proficient in monitoring cross-linguistic competition, and their flexible control of two languages further modulated their efficiency in monitoring and inhibiting conflicting responses. Therefore, dense code-switchers' outperformance in the response inhibition task reflected their proficiency in monitoring, and managing the co-activation of languages during intensive code-switching practice could further benefit conflict-monitoring and inhibition performance beyond the language domain. In sum, the findings provided novel insights into the overlap between code-switching production and response inhibition processes, implying the involvement of motor control of prepotent responses in globally inhibiting the ongoing predominant language during bilingual code-switching production.
Limitations
This study has several limitations. The outbreak of COVID-19 severely affected participant recruitment, with the result that only 31 participants were finally included. The associations between bilinguals' habitual language use experience and cognitive control found here may only reflect the characteristics of this limited sample, and need to be tested with more bilingual participants in the future. Besides, participants in this study varied widely in their self-reported L2 AoA (mean = 10, SD = 4.81). Although they shared a similar L2 learning context (learning English in mainstream schools in China), the variation in L2 AoA could lead to different language experiences with regard to length of L2 exposure, language proficiency and cognitive control abilities (Gullifer, Chai, Whitford, Pivneva, Baum, Klein & Titone, 2018; Luk, De Sa & Bialystok, 2011). Participants' L2 AoA was measured through their self-reported responses to the LSBQ question asking at what age they learned English (Anderson et al., 2018). Since this is not an objective measure and participants might have different understandings of "learned from birth", their self-rated age of L2 acquisition might not perfectly reflect their actual L2 learning experience. Objective measures or calculations to quantify variables related to bilinguals' language use experience, such as language entropy (Gullifer & Titone, 2020), are needed in future research.
In addition, conducting behavioural tasks and collecting data online meant that it was not possible to control individual participants' experimental equipment and test environments. Participants from different countries completed the tasks on different computers with varying qualities of internet connection, and distractions (e.g., noise) during their participation were hard to control. These factors may have affected the study results. Nevertheless, this study represents a significant attempt in bilingualism research to conduct behavioural experiments and collect human participant data fully online during the pandemic period.
Conclusion and future directions
In conclusion, the study shows that bilinguals' high frequency of code-switching production in daily life facilitates cognitive shifting and inhibition. It provided evidence for the predictions of the ACH and CPM that bilinguals habituated to a single-language context, without frequent code-switching practice, excel in goal maintenance and interference control, whereas bilinguals with a high frequency of dense code-switching, who engage in cooperative control of their languages, are more efficient in cognitive shifting and response inhibition. In addition, this study indicates cooperation between interference control and response inhibition during code-switching production, and points out that the efficiency of response inhibition could be enhanced through intensive experience of code-switching production in daily life. Although the study used a small sample, it confirms that bilingual code-switching habits, including switching frequency and context, are crucial in shaping and modulating bilinguals' skills in cognitive flexibility and inhibition.
The study, in general, is an attempt to conduct bilingualism research and test Chinese-English bilingual participants remotely. Compared to traditional lab-based studies, running studies online could become a new trend for future research in the post-pandemic era, since it offers a more efficient and economical approach to testing participants from more diverse cultural and language communities. More studies conducted online are expected in the future, both to help improve the validity and reliability of online data collection platforms and to contribute more online-collected data for cross-comparisons and evaluation.
(2018) clustering, but excluded any questions which have been marked as measuring language home and non-home uses.
In sum, the questions for language social use in this study are those asking how bilinguals use their languages in any settings beyond home and work/study, and in any interactions with people beyond these two settings. Below is a summary of the LSBQ questions measuring the extent of L2 use in the three settings.
Appendix 4
An example of the analysis of participants' VOT in the picture-naming task
Appendix 5
Analyses of the effects of habitual code-switching experience on bilinguals' RT switch costs to Chinese in the picture-naming task.
Regression analyses of the associations between habitual code-switching experience and bilinguals' nonverbal cognitive shifting performance.
"year": 2022,
"sha1": "90ae683fb2839baa74f6f74181379d89f7288420",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/D1B7EE8C0223A085E96ACE2836E9AD2C/S1366728922000244a.pdf/div-class-title-the-effects-of-habitual-code-switching-in-bilingual-language-production-on-cognitive-control-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "77f6a2b4b3d35d04081a78dc17a1b034fe761a7f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
Characterization of Rhodococcus equi Isolates from Foals with Respiratory Problems using a Multiplex PCR for the vap Genes
Rhodococcus equi is widely associated with subacute or chronic suppurative bronchopneumonia, ulcerative lymphangitis, lymphadenitis and enteritis in foals aged 1-4 months. Identification of the virulence plasmids associated with R. equi is of great importance in ascertaining pathogenicity in equines. Virulence is associated with the presence of an 85-90 kb plasmid which encodes various virulence-associated proteins (Vap-A, -B, -C, -D, -E, -F, -G, -H). VapA is essential for virulence in foals. This study aimed to evaluate the molecular characteristics of R. equi isolates from foals with respiratory problems by a multiplex PCR assay that amplifies the vap gene family (vap-A, -B, -C, -D, -E, -F, -G, -H). Twenty-eight isolates of R. equi from foals with respiratory problems were used in this study. The PCR amplification reaction was performed using specific primers for the vap gene family, which yielded bands of 350, 400, 450, 500, 600, 650 and 700 bp. All 28 isolates were positive for vap-A, -C, -D, -E, -F, -G and -H and negative for the vap-B gene, which is characteristic of virulent R. equi isolates of equine origin. The multiplex PCR assay for the vap gene family allowed identification and characterization of virulent isolates in a simple one-step reaction. This is a significant advantage over conventional diagnostic methodologies for R. equi strains, as the assay simultaneously evaluates the presence of the vapA gene, which is responsible for virulence in foals.
INTRODUCTION
R. equi is a Gram-positive, facultative intracellular pathogen belonging to the order Actinomycetales. It is a coccobacillary, aerobic and non-motile organism formerly known as Corynebacterium equi. Infection causes subacute or chronic abscessating or suppurative bronchopneumonia, ulcerative lymphangitis and enteritis (Meijer and Prescott, 2004) in foals aged 1-4 months (Tkachuk-Saad and Prescott, 1991; Yager et al., 1991; Khurana et al., 2009; Von Bargen and Haas, 2009; Khurana, 2015a, 2015b; Khurana et al., 2015). R. equi has been reported to infect a number of domestic animals including pigs, sheep, camels and cattle, and has also been recognized as an opportunistic pathogen in both immunocompetent and immunocompromised humans, readily causing infection in HIV-positive patients (Takai et al., 1995; Mizuno et al., 2005; Napoleao et al., 2005; Khurana, 2014; Khurana et al., 2014). It is commonly isolated from soil, and from the feces and gut of healthy and sick animals. Identification of the virulence plasmids associated with R. equi is of great importance in characterizing the pathogenicity of the disease in equines (Monego et al., 2009). Virulence is associated with the ability of the bacteria to prevent phagosome-lysosome fusion and to replicate within macrophages, resisting clearance by the host's defences (Kanaly et al., 1993; Krewer et al., 2008). The virulence factor is associated with the presence of an 85-90 kb plasmid which encodes various virulence-associated antigens (Byrne et al., 2001). According to many researchers, virulent and intermediately virulent R. equi are associated with the VapA and VapB virulence antigens, respectively. The gene for the VapA antigen is found in virtually all clinical R. equi isolated from foals (Costa et al., 2006; Takai et al., 1996), and the vapB gene in isolates from pigs and humans. Earlier studies identified a pathogenicity island within the plasmid containing seven vap genes, and the discovery of further vap proteins has taken this number to nine: vap-A, -B, -C, -D, -E, -F, -G, -H and -I. Thus, the presence or absence of the vapA gene can be used as a marker for the presence or absence of the vapA-encoding virulence plasmid. Virulence plasmid-negative strains of R. equi do not express the vapA gene and are therefore incapable of causing disease (Jain et al., 2003). The multiplex PCR assay for the vap gene family allows identification and characterization of virulent isolates in a simple one-step reaction. This is a significant advantage in comparison with the diagnostic methodologies presented by Halbert et al. (2005), as it allows identification of R. equi strains while simultaneously evaluating the presence of the vapA gene.
The PCR technique has the potential to rapidly identify virulent R. equi by amplification of gene sequences that are unique to the virulence plasmids. This approach is useful not only for epidemiological investigations but also for early diagnosis programs. This study aimed to evaluate the molecular characteristics of R. equi isolates obtained from foals and adult horses from different geographical locations in India by a multiplex PCR assay that amplifies the vap gene family (vap-A, -B, -C, -D, -E, -F, -G, -H).
BACTERIAL ISOLATES
The study was conducted on 28 bacterial isolates, previously identified as R. equi by biochemical tests, obtained from foals with respiratory problems in Haryana and Rajasthan.
DNA EXTRACTION
All twenty-eight isolates were cultured on 5% sheep blood agar, then transferred to nutrient agar and incubated for 24 hours at 37°C. The colonies of all 28 isolates were then inoculated into BHI broth and incubated at 37°C for 24 hours. The genomic DNA of all isolates was extracted from the overnight BHI broth cultures by boiling and snap chilling. One ml of overnight BHI broth culture of each isolate was taken in an Eppendorf tube and centrifuged at 7000 rpm for 5 min at 4°C, and the supernatant was discarded. Another one ml of BHI culture was added to the same tube, the centrifugation step was repeated, and the supernatant was discarded. 100-200 µl of nuclease-free water (NFW) was added to the precipitate in the Eppendorf tube, which was kept in a water bath for 5 min at 95°C and then immediately placed on ice for 5 min. The resulting suspension was centrifuged at 7000 rpm for 5 min at 4°C. The supernatant was stored in a fresh Eppendorf tube and used as the DNA template for multiplex PCR. The purity of the DNA was evaluated by taking the ratio of the optical densities (OD) at 260 nm and 280 nm with a spectrophotometer (Biorad SmartSpec Plus). Samples with an OD ratio between 1.7 and 1.9 were considered to have acceptable purity and were used in subsequent experiments. For computing the concentration of DNA, it is known that one OD unit at 260 nm corresponds to 50 µg/ml of double-stranded DNA. Hence, the concentration of DNA was calculated using the following formula:

DNA concentration (µg/ml) = OD260 × 50 × dilution factor

MULTIPLEX PCR ASSAY FOR VAP GENE FAMILY

The PCR reaction was carried out as described by Monego et al. (2009) with some modifications. The virulence-associated protein (vap) family genes, viz. vap-A, -C, -D, -E, -F, -G and -H, were amplified using gene-specific primers. The details of the primers are given in Table 1.
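A minimal sketch of the purity check and concentration calculation described in the DNA extraction step above is given below; the OD readings are illustrative values, and the dilution-factor argument is an assumption for cases where the template is diluted before measurement.

```python
# Spectrophotometric DNA checks, following the relation stated above:
# 1.0 OD at 260 nm corresponds to 50 ug/ml of double-stranded DNA.
def dna_purity(od260: float, od280: float) -> float:
    """A260/A280 ratio; 1.7-1.9 was treated as acceptable purity."""
    return od260 / od280

def dna_concentration_ug_per_ml(od260: float, dilution_factor: float = 1.0) -> float:
    """Double-stranded DNA concentration in ug/ml."""
    return od260 * 50.0 * dilution_factor

ratio = dna_purity(od260=0.45, od280=0.25)  # illustrative readings (ratio = 1.8)
if 1.7 <= ratio <= 1.9:
    print(dna_concentration_ug_per_ml(0.45, dilution_factor=10.0))  # 225.0 ug/ml
```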
RESULTS
The PCR methodology was applied to analyse the 28 isolates. The PCR amplification reaction was performed using specific primers for the vap gene family, viz. vap-A, -B, -C, -D, -E, -F, -G and -H, which yielded bands of 350, 400, 450, 500, 600, 650 and 700 bp (Figures 1, 2 and 3). The lengths of the DNA fragments amplified by multiplex PCR were in accordance with the previous results of Monego et al. (2009). The accuracy of the PCR amplification of the vap gene family was verified by matching the results against the standard and negative controls used in the PCR reaction. All 28 isolates from foals and adult horses with respiratory problems were positive for vap-A, -C, -D, -E, -F, -G and -H. The amplicon specific for vap-B was absent in all isolates of R. equi.
DISCUSSION
The virulence-associated antigen VapA and the virulence plasmids are used as epidemiological markers for R. equi virulence in foals (Cohen et al., 2005; Takai et al., 1999). Two categories of plasmids, the VapA and VapB families, are mainly responsible for the pathogenesis and host tropism of R. equi (Takai, 1991a, 1991b; Oldfield et al., 2004; Ocampo-Sosa et al., 2007). R. equi isolated from clinical cases expresses only one of these two plasmids and carries the gene(s) encoded in either the vapA or vapB family (Takai et al., 1995). The vapA+/vapB− plasmid type is associated with infection in horses (Takai et al., 1995; Ocampo-Sosa et al., 2007). This unique plasmid-determined host specificity has been found only in rhodococci; however, the precise role and mechanism of the Vap antigens remain enigmatic. Monego et al. (2009) found that none of their 32 vapA-positive isolates carried the vapB gene. The presence of the vapA gene in clinical R. equi isolates and its relationship with the lethality of R. equi in susceptible foals has been reported by many researchers (Takai et al., 1995; Takai, 1997; Wada et al., 1997). In the present study, we analysed 28 R. equi isolates from different geographical regions of India for the presence of the vap genes (vap-A, -B, -C, -D, -E, -F, -G, -H) using multiplex PCR. The molecular profile of the vap gene family found in our study comprised vapA and vapC to vapH. All the isolates showed vapA, vapD and vapG, in agreement with the data of Jacks et al. (2007), which indicate that these vap genes are the most biologically relevant as they are preferentially induced during infection in the natural host. None of the 28 vapA-positive isolates showed the presence of the vapB gene.
These results agree with those of Takai et al. (1995), who reported that R. equi isolates can express either vapA or vapB but not both. In our study, all isolates were positive for the vapA gene and negative for the vapB gene, giving a vapA+/vapB− profile, which is specific for virulent R. equi isolates of equine origin.
Figure 2: Multiplex PCR amplification of the vap gene family. Lane 1: molecular ladder; Lane 2: R. equi standard; Lane 3: negative control; remaining lanes: respective R. equi isolates.
Table 1: Primers utilized in the multiplex PCR for vap gene family amplification
"year": 2015,
"sha1": "d20017c714daea013215eb488491c89f2a567b14",
"oa_license": "CCBY",
"oa_url": "http://nexusacademicpublishers.com/uploads/files/Nexus_AAVS_632_Chhabra.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d20017c714daea013215eb488491c89f2a567b14",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Longitudinal trajectory of disability in community-dwelling older adults: An observational cohort study in South Korea
Background Disability, which is considered a health-related condition, increases care demands and socioeconomic burdens for both families and communities. Inaccurate understanding of how disability changes over time can obscure care needs; to examine the trend of dynamic longitudinal changes in disability, this study explores how disability can be divided by a trajectory method, which handles time-sequenced data. Additionally, this study examines the differences in demographics, geriatric conditions, and time spent at home among the trajectory groups in community-dwelling older adults. Home time is defined as the period during which a person is not in a hospital or healthcare facility. Methods Records of 786 community-dwelling older participants were analyzed from the Aging Study of PyeongChang Rural Area, a population-based cohort study that took place over three years. Using 7 domains of activities of daily living and 10 domains of instrumental activities of daily living, participants were grouped into no dependency (0 disabled domains), mild (1 disabled domain), and severe (2 or more disabled domains) disability groups. Longitudinal trajectory groups of disability were identified using a trajectory method. Three distinct trajectory groups were identified over time: a relatively-stable group (78.5%; n = 617), a gradually-aggravated group (16.0%; n = 126), and a rapidly-deteriorated group (5.5%; n = 43). Results The average age of the 786 participants was 73.3 years (SD: 5.8), and 52.7% were female. It was found that 78.5% of participants showed relatively no dependency, while 5.5% of older adults in this rural area showed severe dependency. Applying the trajectory method, the Short Physical Performance Battery (SPPB) score at the 3rd year was 10.2 points in the relatively-stable group and 3.1 points in the rapidly-deteriorated group. Additionally, by the trajectory method, home time decreased by 3.33% in the rapidly-deteriorated group compared to the relatively-stable group. Conclusions This study shows differences in demographics and geriatric conditions (such as SPPB) across longitudinal trajectory groups of disability in community-dwelling older adults. Significant differences were also found in home time among the trajectory groups. Supplementary information accompanies this paper at 10.1186/s12877-020-01834-y.
Background
The number of countries facing an aging population is increasing worldwide [1,2]. The growth of the world's older population implies an increase not only in the prevalence of chronic diseases, but also in the burden of functional impairments and geriatric conditions (e.g., frailty and sarcopenia) [3-6]. In particular, disability, which is regarded as a health-related condition, increases care demands and socioeconomic burdens for both families and communities [7-9].
The onset of disability varies substantially among individuals of similar chronological age [10-13]. Studies have shown that individuals differ in the presence of multimorbidity, frailty, and disability. Evidence also suggests that disability can be prevented or delayed through appropriate multifactorial interventions that remove risk factors and improve functioning, making disability one of the most important outcome measures in studies targeting older populations [14].
The conventional method of assessing disability focuses on the severity of disability at the initial diagnosis; however, this does not reflect the individuality and time course of disability. The trajectory method of assessment focuses on changes over the later years of life [15]. Previous studies have shown different trajectory models based on mood, physical activity, and disability in later life [8, 15-17]. However, the trajectories and time courses of various disabilities are still not well understood. Although some studies show trajectories and subsequent mortality associated with disability [15], other studies demonstrate that not all older individuals with disability in the community end up being institutionalized in chronic hospitals or other long-term care facilities [16-18]. It is unclear whether an older individual identified with a disability has a high probability of readmission or long-term institutionalization. Furthermore, only a few studies have focused on the dynamic trajectories of disability in community-dwelling older adults [18].
Previously, trajectories of disability had not been studied in relation to patient-centered outcomes. Patient-centered outcomes are the result of a healthcare system that prioritizes a patient's needs in conjunction with healthcare professionals' medical expertise, focusing on aspects of health status that are meaningful to patients, such as quality of life, functional status, and independent living [19-21]. In recent studies, "home time" has been proposed as a patient-centered measure relevant to quality of life in older people [22-24]. Home time, meaning the number of days alive and spent at home, derives from the observation that patients want to maximize the number of days they can be at home rather than in hospitals or nursing facilities at the end of their lives. Home time reflects the values and priorities that matter to older patients and their families, and has been shown to relate to self-rated health, mobility, self-care difficulties, and limited social activity [22-24].
The objective of this study is to explore the following: (1) how disability is divided by the trajectory method in relation to time-sequenced data in a longitudinal cohort, (2) whether the demographic and geriatric conditions differ among the trajectory groups, and (3) whether home time, a patient-centered outcome, is differentiated by the trajectory groups.
Study design and sample
Records from the Aging Study of Pyeongchang Rural Area (ASPRA) were analyzed. This population-based, prospective cohort study has been established to analyze aging-related changes and major health outcomes of the older population, as part of an academic-public health collaborative model. The details of this study are described elsewhere [25]. To summarize, older Korean adults in Pyeongchang-gun who met the required criteria were enrolled beginning in November 2014. The inclusion criteria of the ASPRA cohort included: (1) being aged ≥65 years; (2) being registered in the National Healthcare Service; (3) being ambulatory with or without an assistive device; (4) living at home; and (5) being able to provide informed consent. Those who were living in a nursing home, hospitalized, or bed-ridden and receiving nursing-home-level care at the time of enrollment were excluded [25]. The cohort had a participation rate of more than 90%. A baseline study on the ASPRA population showed that demographic characteristics in this population were in accordance with those of nationwide rural-dwelling older adults [25]. The Institutional Review board of Asan Medical Center, Seoul, Korea, approved the protocol for this study (IRB No. 2015-0673).
Assessment of disability
Trained nurses assessed disability and other geriatric conditions using standardized instruments every year [7]. Disability was assessed with a 7-item activities of daily living scale (ADL; bathing, continence, dressing, eating, toileting, transferring, and washing face and hands) [5, 26] and a 10-item instrumental activities of daily living scale (IADL; food preparation, household chores, going out a short distance, grooming, handling finances, laundry, managing own medications, shopping, transportation, and using a telephone) [5, 27, 28]. Disability was defined as dependency in at least one ADL or IADL domain. The severity of disability was conventionally operationalized into three groups: no dependency (0 disabled domains), mild disability (1 disabled domain), and severe disability (2 or more disabled domains) [29, 30].
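As an illustration of this operationalization, the sketch below counts dependent domains across the 17 ADL/IADL items and maps the count to a severity group; the shorthand item keys and the record format are assumptions, not the study's actual data schema.

```python
# Conventional severity grouping by number of dependent ADL/IADL domains.
ADL = ["bathing", "continence", "dressing", "eating", "toileting",
       "transferring", "washing_face_hands"]
IADL = ["food_preparation", "household_chores", "going_out", "grooming",
        "handling_finances", "laundry", "managing_medications",
        "shopping", "transportation", "using_telephone"]

def disability_group(record: dict) -> str:
    """record maps each domain name to True if the participant is dependent."""
    n_dependent = sum(bool(record.get(domain)) for domain in ADL + IADL)
    if n_dependent == 0:
        return "no dependency"
    return "mild disability" if n_dependent == 1 else "severe disability"

print(disability_group({"bathing": True, "laundry": True}))  # severe disability
```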
Assessment of geriatric conditions
Participants' baseline demographic factors (e.g., age, sex, years of education, living alone, and receipt of medical aid) were also examined. Physician-diagnosed chronic diseases, including angina, arthritis, asthma, cancer, chronic lung disease, heart failure, diabetes mellitus, heart attack, hypertension, kidney disease, and stroke, were identified [5]. Cognitive function was assessed with the Korean version of the Mini-Mental State Examination-Dementia Screening [MMSE-DS; range from 0 (severe cognitive impairment) to 30 (no problem)] [31]. Mood status was examined with the Korean version of the Center for Epidemiologic Studies Depression scale [CES-D; range from 0 (not depressed) to 60 (severely depressed)] [32]. Nutritional status was assessed with the Mini Nutritional Assessment-Short Form [MNA-SF; range from 0 (malnutrition) to 14 (well-nourished)] [33]. Physical function was measured using the Short Physical Performance Battery [SPPB; range from 0 (worst performance) to 12 (best performance)], which covers chair stands, standing balance, and gait speed [34, 35]. The Korean version of the 5-item FRAIL scale was administered to screen frailty status [36]. Participants were interviewed about their history of falls in the past year.
Calculation of home time
Registered nurses assessed participants' hospital use, emergency room visits, and periods of institutionalization every three months. Home time was calculated as 365 days minus the days spent in hospitals and healthcare facilities [22, 23].
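A minimal sketch of this calculation is shown below, assuming admission/discharge date pairs compiled from the quarterly assessments; the function signature is illustrative.

```python
# Home time for one follow-up year: 365 days minus days spent in
# hospitals or healthcare facilities.
from datetime import date

def home_time(stays):
    """stays: iterable of (admission_date, discharge_date) pairs."""
    days_away = sum((discharge - admission).days for admission, discharge in stays)
    return 365 - days_away

print(home_time([(date(2016, 3, 1), date(2016, 3, 10))]))  # 356
```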
Statistical analysis
To identify differences in home time among groups, a one-way analysis of variance (ANOVA) was used, home time being a numeric variable. For the categorical variables, differences were examined with a chi-square test. To examine the statistical association between home time and disability group, a Poisson regression model was applied. We estimated the incidence rate ratio (IRR) and 95% confidence intervals (CI) for home time with a Poisson regression model, adjusted for sex and age, in the trajectory groups [37]. In the conventional groups, the year of measurement was additionally adjusted.
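The sketch below shows how such a Poisson model can be fitted in Python with statsmodels, with the IRRs obtained by exponentiating the coefficients; it is a generic illustration rather than the authors' original code, and the file and column names are assumptions.

```python
# Poisson regression of home time with IRRs (hypothetical data layout).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("home_time.csv")  # hypothetical data file
model = smf.glm("home_time ~ C(trajectory_group) + age + sex",
                data=df, family=sm.families.Poisson()).fit()
print(np.exp(model.params))      # incidence rate ratios (IRR)
print(np.exp(model.conf_int()))  # 95% confidence intervals
```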
Separate trajectories were identified according to the severity of disability using the Proc Traj procedure in SAS 9.4 [38]. The number of groups was chosen according to the following criteria: (a) the lowest Bayesian Information Criterion (BIC), (b) an average posterior probability of group assignment ≥0.7, and (c) a group size of no less than 5% of the study sample in each trajectory group [39]. The remaining analyses were performed in R version 3.5.3. Two-sided P values < .05 were considered statistically significant.
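For readers without SAS, the sketch below approximates the group-number selection step in Python using Gaussian mixture models over the yearly disability counts; this is not equivalent to Proc Traj, which fits polynomial trajectories of time within groups, and the input file is a hypothetical matrix (participants × years).

```python
# Approximate trajectory-group selection: lowest BIC, then check the
# average posterior probability (>= 0.7) and group size (>= 5%) criteria.
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.loadtxt("disability_counts.csv", delimiter=",")  # rows: participants; cols: years 1-3

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
gm = fits[best_k]

post = gm.predict_proba(X)                 # posterior group probabilities
labels = post.argmax(axis=1)
avg_post = [post[labels == g, g].mean() for g in range(best_k)]
sizes = np.bincount(labels, minlength=best_k) / len(X)
print(best_k, avg_post, sizes)             # compare against the three criteria
```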
Total candidates and characteristics
Of the 1,355 participants who received usual care in public health settings, 233 were excluded because their follow-up period was less than three years. A further 336 participants dropped out, for either medical reasons (n = 170) or loss to follow-up (n = 166). Among those with medical reasons, 35 participants had died, 103 were admitted to nursing homes, and 32 moved away due to health problems. Of the 166 participants lost to follow-up, 53 moved away for reasons other than health, 89 declined to participate, and 24 lost contact. Finally, 786 participants who completed the routine measurements over three years were analyzed in this study (Fig. 1). For participants with a follow-up period longer than three years, the baseline point was defined as the first measurement after enrollment.
Participants' baseline demographic factors, including age, sex, education (in years), living alone (or not), and medical aid (or not), were examined for all participants shown in Fig. 1 (Table 1). Geriatric conditions such as the number of comorbidities, MMSE-DS score, number of regular medications, FRAIL scale, SPPB score, CES-D score, MNA-SF score, and number of falls were included. The average age was 73.3 years (SD: 5.8), and 52.7% of participants were female. The average education was 5.2 years (SD: 3.3), and 15.8% of participants were living alone. Among the geriatric conditions, the baseline SPPB and MMSE-DS scores were 8.8 (SD: 2.8) and 25.7 (SD: 3.9), respectively.
Disability trajectories
Three trajectory groups were defined according to the degree of disability, measured as the number of impaired domains, from the 1st to the 3rd year (Fig. 2). The model with three trajectory groups was the best fit for our data based on the BIC, considering the proportions of each group (see Table S1 in Additional file 1). The average posterior probabilities of assignment for the three groups were 0.90, 0.82, and 0.96, respectively [39].
The "relatively-stable group" (78.5%; n = 617) was characterized by the lowest levels of disability. The "gradually-aggravated group" (16.0%; n = 126) was characterized by slightly increasing levels of disability over time. The remaining 5.5% of the population (n = 43) with high baseline disability that was also rapidly aggravating over time were categorized as the "rapidly-deteriorated" group ( Fig. 2) [40].
Comparisons of characteristics among trajectory groups
As in the previous section, baseline demographic factors and geriatric conditions were examined according to trajectory group (Table 2).
Geriatric measurements differed significantly among the three groups, except for living alone and the number of falls in the 3rd year. In the 1st year, the relatively-stable group had a mean age of 72.1 years, 45.1% were female, the mean number of comorbidities was 1.1, the number of medications was 2.2, and the mean number of falls in the previous year was 0.1. In the rapidly-deteriorated group, the mean age in the 1st year was 81.1 years (almost nine years higher than in the relatively-stable group), and 76.7% of the participants were female. This group had a mean of 2.0 comorbidities, 4.4 regular medications, and 0.8 falls in the previous year.
In terms of physical performance, the SPPB score was 9.5 points in the relatively-stable group and 3.3 points in the rapidly-deteriorated group. In the 3rd year, the difference between the relatively-stable group and the rapidly-deteriorated group was larger than in the 1st year, increasing from 6.1 to 7.2 points.
Comparison of home time between the conventional and trajectory-based groupings
Home time decreased incrementally in both the conventional and trajectory-based disability groupings (Table 3). The decreasing trend in home time continued in the 2nd and 3rd years relative to the 1st year.
In the 1st year, the home time of the severe group was shorter by 8.9 days (352.2 days vs. 343.3 days) than that of the no dependency group under conventional grouping. In contrast, the rapidly-deteriorated group had 11.7 fewer days (351.6 days vs. 339.9 days) of home time than the relatively-stable group under trajectory-based grouping in the 1st year.
In the 3rd year, the home time of the severe group was shorter by 5.5 days (350.3 days vs. 344.8 days) than that of the no dependency group under conventional grouping. Under trajectory-based grouping, the rapidly-deteriorated group stayed 8.5 fewer days at home than the relatively-stable group (350.3 days vs. 341.8 days, a 2.43% decrease).
Incidence rate ratio for home time according to conventional versus trajectory-based grouping of disability
After recognizing the differences in home time decrements between the definitions of disability phenotype (Table 3), regression models were employed to adjust for demographic factors, including age and sex, in these observations. Additionally, the year of measurement was adjusted for in the conventional grouping, since the trajectory-based definition already takes the time sequence into account. In the statistical models with adjusted variables, significant differences in home time between the conventional and trajectory-based definitions were observed in the univariate analysis (see Table S2 in Additional file 1).

Fig. 1 Participant selection flow. *Among the 170 participants who dropped out for medical reasons, 35 (20.6%) had died, 103 (60.6%) were admitted to nursing homes due to deterioration of health, and 32 (18.8%) had moved or were withdrawn due to health problems. **Among the 166 participants who dropped out due to follow-up loss, 53 (31.9%) moved due to problems other than health, 89 (53.6%) declined to participate, and 24 (14.5%) had lost contact.

The IRR for home time in the conventional and trajectory groups is shown in Fig. 3. Home time in the mild-dependent group (IRR = 0.993; 95% CI, 0.987-0.999) was shorter than in the reference (no dependency) group under conventional grouping. Similarly, the severe-dependent group had shorter home time (IRR = 0.985; 95% CI, 0.979-0.992) compared to the no dependency group.
Under trajectory-based grouping, the home time of the gradually-aggravated group was shorter (IRR = 0.992; 95% CI, 0.985-0.999) than that of the relatively-stable group. Similarly, the rapidly-deteriorated group had shorter home time (IRR = 0.978; 95% CI, 0.967-0.988) compared to the relatively-stable group.
Incidence rate ratios for home time in subgroups according to trajectory-based grouping of disability
We also conducted subgroup analyses by sex and age. For the age subgroups, we divided participants into (1) 65-74 years and (2) 75 years or older, based on [41]. According to our findings, home time in the female group was lower in the gradually-aggravated and rapidly-deteriorated groups than in the relatively-stable group (IRR, 0.989 and 0.967, respectively). In the male group, however, there were no significant associations between either classification of disability and home time. In the 65-74 year age group, the IRR for home time was 0.915 in the rapidly-deteriorated group compared to the relatively-stable group; however, the rapidly-deteriorated group comprised just 1.7% of this subgroup. In the 75 years or older group, the disability classification was appropriate, but it was not significantly associated with home time (see Table S3, Figs. S1 and S2).
Discussion
Disability is a major determinant of quality of life in older adults. In the present study, different trajectory groups were categorized according to the severity of disability over time. The conventional method of identifying disability shows only a snapshot of disability status and individual disability components. Therefore, the trajectory groups of disability that we identified demonstrated a more integrated approach toward defining disability.
The major finding of this study is that three trajectory groups with different severities of disability were identified in community-dwelling older adults. The three trajectory groups were a relatively-stable group (78.5%), a gradually-aggravated group (16.0%), and a rapidly-deteriorated group (5.5%). Previous studies had shown trajectory grouping using the number of disabilities in patients with underlying diseases such as cancer; in one study of cancer patients, the severe trajectory group comprised 21.2% of participants prior to cancer treatment [42]. Our study is unique in that we show the severe disability (rapidly-deteriorated) group to be around 5.5% in relatively healthy older adults living in rural communities. Our data may serve as a reference for future disability studies of general older populations.
Another finding is that there were differences in demographic characteristics and geriatric conditions among the trajectory groups. Most variables describing demographic and geriatric conditions differed significantly among trajectory groups, except for the number of falls and living-alone status. We confirm that age increased, and years of education decreased, from the relatively-stable to the rapidly-deteriorated group. What stands out most from this study is the change in SPPB. It is well known that the SPPB is an important measure for older adults, along with the FRAIL scale and MMSE-DS score [43]. Our results show that in the 3rd year, the SPPB score was 10.2 points (SD: 2.0) in the relatively-stable group and 3.1 points (SD: 2.2) in the rapidly-deteriorated (more severe) group. Beyond statistical significance, the SPPB score in the relatively-stable group was more than three times that in the rapidly-deteriorated group. Based on this result, we recommend performing a comprehensive geriatric assessment in clinical settings, where available, that includes measures of physical performance such as the SPPB.
Furthermore, our study contributes to the literature by showing that the trajectory method can reveal larger differences in home time than the conventional method. Home time decreased more over time when disability was severe at baseline and the level of disability increased rapidly. At the 2nd and 3rd year follow-ups, the decrease in home time was smaller than in the 1st year, but home time in the trajectory groups was still reduced compared to the conventional groups, and this difference was statistically significant. With the trajectory method, home time decreased by 3.33% in the rapidly-deteriorated group compared to the relatively-stable group, whereas under the conventional grouping, home time in the severe-dependent group decreased by 2.53% compared to the no dependency group.
The rapidly-deteriorated group had shorter home time (IRR = 0.978; 95% CI, 0.967-0.988) than the relatively-stable group under the trajectory method. This IRR was lower than that of the severe-dependent group under the conventional method (IRR = 0.985; 95% CI, 0.979-0.992).
Considering the subgroups, home time in the female group was lower in the gradually-aggravated and rapidly-deteriorated groups than in the relatively-stable group (IRR, 0.989 and 0.967), but in males and in the other age groups the disability classification was not significantly associated with home time reduction.
Finally, our results can inform public health professionals developing care models to detect trajectories of disability and build individualized intervention or rehabilitation programs and health policies based on the trajectories, for older, vulnerable populations.
The strengths of this study are that the enrollment rate exceeded 90% and that it is based on an aging cohort derived from an academic-public health collaborative model. We obtained consistent data using internationally validated geriatric assessment tools; therefore, the results reflect real-world data. Although our data come from rural communities, where some individuals have low education and are engaged in agriculture, the cohort is population-based and its sociodemographic characteristics were similar to those of representative Korean national data.
This study has several limitations. First, among the 1,122 eligible participants, 166 (15%) were lost to follow-up. This may be a limitation in constructing the trajectory model; however, this 15% loss occurred over the three years of analysis, i.e., around 5% per year, which is less than the general rate of population migration. Second, there may be recall bias in home time. Participants may not fully recall their hospital or emergency visits in previous years. To overcome this limitation, we obtained information from the Community Health Posts in Pyeongchang run by the National Healthcare Service when participants were not fully aware of their past hospital use, thereby attempting to minimize recall bias. Third, the follow-up period was relatively short. The cohort was a three-year follow-up study; therefore, individuals need to be examined over longer periods of time.

Fig. 3 Forest plot of the incidence rate ratio for the conventional versus trajectory grouping of disability. *The analysis of the trajectory groups was adjusted for sex and age. The conventional groups were additionally adjusted for the year of measurement. **The reference value of the conventional grouping is the 'no dependency group' and the reference value of the trajectory grouping is the 'relatively-stable group'.
Lastly, it is difficult to capture short- and medium-term changes in disability lasting less than a year with the methods we used. Gill et al. have suggested that the mechanisms underlying the different subtypes are likely to differ: while the presence of physical frailty increased the likelihood of developing long-term, recurrent, and unstable disability, it had only a modest effect on developing transient and short-term disability [44].
Conclusions
A longitudinal trajectory method was used to apply the time trend of disability to community-dwelling older adults. We verified that the demographic and clinical indexes differ according to the trajectory grouping, and we also examined the significant effect of the trajectory method on home time. Our observations provide public health professionals and policy makers with valuable information for setting priorities in policy making and intervention. | 2020-10-29T09:02:41.818Z | 2020-03-21T00:00:00.000 | {
"year": 2020,
"sha1": "fe17857e9f8bfe8399a801e7257446ce254097ca",
"oa_license": "CCBY",
"oa_url": "https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-020-01834-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fa43514e8b24845f6e223b638feaf1d8d4849f64",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237746727 | pes2o/s2orc | v3-fos-license | Synthesis of vertical WO3 nanoarrays with different morphologies using the same protocol for enhanced photocatalytic and photoelectrocatalytic performances
Tungsten trioxide (WO3) nanoarrays with different morphologies were successfully synthesized by a hydrothermal method on an FTO substrate. Various nanostructures of WO3 including nanoflakes, nanoplates, nanoflowers and nanorods were obtained by adjusting only the acidity of the precursor solution. XRD patterns confirmed that the as-prepared orthorhombic WO3·0.33H2O transformed to the monoclinic WO3 phase under annealing at 500 °C. UV-Vis absorbance spectroscopy indicated that the absorption edge of WO3 nanoflowers exhibited a slight red-shift compared to other morphologies of WO3. The obtained WO3 nanoflower arrays exhibit the highest photocurrent density and photocatalytic degradation activity towards methylene blue. Finally, the mechanism of the photocatalytic degradation of methylene blue by WO3 is discussed.
Introduction
Methylene blue (MB) is a water-soluble polycyclic aromatic dye that is extensively used in the dyestuff, textile and dyeing industries. 1 The dye wastewater with large chromaticity and high organic concentration discharged from the dyestuff and textile industries is considered a dangerous pollutant to the environment. Semiconductor photocatalysis (PC) and photoelectrocatalysis (PEC) have been proven to be efficient and promising for the degradation of organic contaminants. [2][3][4] It is worth noting that PEC combines photocatalysis with electrocatalysis by applying a bias voltage, which can maximize the utilization of photocatalysts and is a more attractive method for degrading organic pollutants. 5 Tungsten trioxide (WO3), a promising n-type semiconductor, has been extensively researched in the photocatalysis field due to its remarkable stability in acidic aqueous solutions, high electron mobility, resistance to photocorrosion, moderate band gap and low cost. 6 The morphology and structure of WO3 materials have a critical influence on the photocatalytic and electrocatalytic properties. 7 Extensive research efforts have been devoted to synthesizing WO3 materials with numerous morphologies via different methods, such as one-dimensional (1D) nanostructured nanorods (NRs), nanowires (NWs) and nanotubes (NTs), [8][9][10] and two-dimensional (2D) nanostructured nanoflakes (NFs) and nanoplates (NPs). 11,12 However, there are relatively few studies on the synthesis of three-dimensional (3D) materials. Also, the effects of the WO3 array crystal morphology with various dimensions on the photocatalytic properties have not been systematically studied.
At present, WO3 crystals with different morphologies are typically obtained by selecting different synthesis methods, [13][14][15][16] or by changing the operating parameters, including modifications to the precursor, structure-directing agents, surfactants and solvents. [17][18][19][20] Owing to the composition of the precursors and changes in the chemical reaction environment, the obtained WO3 crystals have shown poor reproducibility in array film quality and in photocatalytic and photoelectrocatalytic performance. 21,22 Therefore, it is necessary to develop an efficient and highly reproducible method to produce morphology-controlled WO3 crystals. The metatungstate anion [H2W12O40]6− of ammonium metatungstate (AMT), known as a Keggin anion structure with a "tetrahedral cavity" in its center, endows AMT with exceptional structural configurations. 23 This unique molecular structure makes AMT relatively stable under ambient conditions. 24 In this study, chemically stable AMT was used as the tungsten source, and a series of WO3 photocatalysts with diverse morphologies were successfully synthesized in the same reaction system. WO3 with different morphologies including nanoflakes, nanoplates, nanoflowers, and nanorods was obtained by adjusting only the acidity of the hydrothermal precursor solution. Their photoelectric and photocatalytic properties, as well as the mechanism of photocatalytic degradation, were investigated.
Preparation of WO 3 arrays with different morphologies
All reagents were of analytical grade and used without any further purification. An FTO glass substrate (2 × 5 cm2) was cleaned with acetone, absolute ethanol and deionized water, separately. 0.5 g of ammonium metatungstate hydrate ((NH4)6H2W12O40·xH2O) was dissolved in 40 mL deionized water by magnetic stirring. 1-7 mL of 3 M hydrochloric acid (HCl) was slowly added to the above solution and stirred for 5 min. 2 mL of hydrogen peroxide (H2O2, 30%) was dropped into the solution, its volume was adjusted to 50 mL with deionized water, and the mixture was stirred for 1 h. The reaction solution was then poured into a 43 mL Teflon-lined stainless steel autoclave with the FTO substrate and kept at 160 °C for 4 h. After the reaction, the autoclave was cooled down to room temperature. The product was filtered and dried at 60 °C for 10 h. Finally, the samples were annealed at 500 °C for 1 h. The as-prepared WO3 arrays are referred to as W-x (x = 1, 2, 4, 5, 6, and 7 mL), where x is the volume of HCl.
Characterization
X-ray diffraction (XRD) measurements were conducted on a Bruker D8 Advanced diffractometer with Cu Kα radiation. Field emission scanning electron microscopy (FESEM) and energy dispersive X-ray spectroscopy (EDS) were performed using JEOL JSM-7800F instruments. UV-Vis diffuse reflectance spectra (UV-Vis DRS) were recorded using a UV-2600 spectrometer. Photoluminescence (PL) spectra were obtained on a Hitachi F-280 fluorescence spectrophotometer at an excitation wavelength of 325 nm.
Photoelectrochemical measurements
The photoelectrochemical measurements of the WO3 arrays were performed on a CHI660E electrochemical workstation using a standard three-electrode cell. The as-synthesized WO3 arrays were used as the working electrodes, a platinum net was used as the counter electrode and Ag/AgCl was used as the reference electrode. An aqueous solution of 0.5 M Na2SO4 was used as the electrolyte. The illumination source was a 500 W Xe arc lamp (CEL-HXF300) with an AM 1.5 G filter. The measured potentials vs. Ag/AgCl were converted to potentials vs. the reversible hydrogen electrode (RHE) using the Nernst equation: 25 E(RHE) = E(Ag/AgCl) + 0.059 × pH + E°(Ag/AgCl), where E°(Ag/AgCl) = 0.1976 V at 25 °C (the equation body was lost in extraction; this is the standard form of the conversion).
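As a quick illustration of this conversion, the sketch below implements the standard Ag/AgCl-to-RHE relation in Python; the example potential and pH value are hypothetical and only for demonstration, since the paper does not report the electrolyte pH.

```python
E0_AG_AGCL = 0.1976  # V vs. NHE for the Ag/AgCl reference at 25 degrees C

def ag_agcl_to_rhe(e_ag_agcl: float, ph: float) -> float:
    """Convert a potential measured vs. Ag/AgCl to the RHE scale."""
    return e_ag_agcl + 0.059 * ph + E0_AG_AGCL

# Hypothetical reading of 0.6 V vs. Ag/AgCl in a near-neutral electrolyte:
print(f"{ag_agcl_to_rhe(0.6, ph=6.8):.2f} V vs. RHE")  # ~1.20 V
```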
Photocatalytic and photoelectrocatalytic degradation measurements
The degradation experiments were carried out using the same three-electrode system described above. The photocatalytic and photoelectrocatalytic activities of the WO3 arrays with different morphologies were evaluated by the degradation of methylene blue (MB). The initial concentration of MB was 10 mg L−1, and the electrolyte was 0.1 mol L−1 Na2SO4. First, magnetic stirring was carried out in the dark for 30 min to achieve adsorption-desorption equilibrium. At certain time intervals during irradiation, 3 mL of the degradation solution was taken to analyze the MB concentration using a UV-Vis spectrophotometer. The main active species in the photodegradation process were identified by adding different radical scavengers. Here, 1 mmol ammonium oxalate (AO), methanol (MT), and p-benzoquinone (p-BQ) were used as the hole (h+) scavenger, hydroxyl radical (·OH) scavenger, and superoxide radical (·O2−) scavenger, respectively.
Morphology and structure
The WO3 arrays with different morphologies after annealing were examined by FESEM, as shown in Fig. 1. It can be seen that the WO3 nanoarrays are nearly vertically aligned on the FTO substrates and exhibit strong adhesion to the substrate. When the volume of HCl was 1 mL, a uniform array of nanoflakes was obtained. The widths and thicknesses of the nanoflakes were about 2.5 µm and 150 nm, respectively (Fig. 1a). With an increase in the volume of HCl, the thicknesses of the nanoflakes increased, while the morphology changed to a nanoplate array structure (Fig. 1b-d). When the volume of HCl was 5 mL, regular uniform WO3 nanoplates were obtained, and the thickness increased to 300 nm (Fig. 1d). When the volume of HCl was increased to 6 mL, the morphology of WO3 turned into nanoflowers, each regularly composed of six nanoplates of equal size (Fig. 1e). However, when the amount of HCl was increased to 7 mL, the morphology of WO3 turned into a nanorod array structure (Fig. 1f). Fig. 2 shows the low-magnification FESEM image and the EDS pattern of the W-6 sample. As shown in Fig. 2b, there were no impurity elements except W and O, indicating that the sample prepared by the hydrothermal reaction is tungsten oxide. The preparation of WO3 with different morphologies on FTO is illustrated in Fig. 3. The pH value affects the solubility of a substance and influences crystal growth in a hydrothermal reaction. 26 Therefore, different WO3 morphologies were obtained by adjusting the pH values of the precursor solutions. Fig. 4 shows the XRD patterns of the as-prepared and annealed WO3 nanoarrays. All the as-prepared arrays obtained by the hydrothermal reaction at different pH values were orthorhombic WO3·0.33H2O, and the main diffraction peaks were indexed to the (111), (020), (002), (202) and (222) facets, respectively (Fig. 4a). After annealing, the diffraction peaks of all samples were indexed to monoclinic WO3 (JCPDS no. 43-1035), with three peaks at 23.1°, 23.5° and 24.3° corresponding to the (002), (020) and (200) facets (Fig. 4b). The intense and sharp diffraction features of the WO3 samples indicate excellent crystallinity. 27 The chemical composition of orthorhombic WO3·0.33H2O changed, transforming to stable monoclinic WO3 after annealing. 28 According to the XRD patterns, the (002), (020) and (200) facets are the main features in monoclinic WO3. The surface energy order is (002) > (020) > (200), which indicates that the (002) facet in monoclinic WO3 has the largest surface energy and is the most reactive surface. 29 Based on the peak areas of the (002), (020) and (200) facets, the proportion of the (002) facet was calculated. The calculated proportions of the exposed (002) facet of WO3 are about 6.96% (W-1), 9.77% (W-2), 9.53% (W-4), 13.06% (W-5), 14.81% (W-6), and 6.92% (W-7). Therefore, the hexagonal flower-like WO3 (W-6) exposes more (002) facets.
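A minimal sketch of this facet-proportion arithmetic is given below, assuming the proportion is the (002) peak area normalized by the sum of the three main peak areas; the peak areas themselves are hypothetical placeholders, since the paper reports only the resulting percentages.

```python
def facet_002_proportion(area_002: float, area_020: float, area_200: float) -> float:
    """Share of the (002) peak area among the three main monoclinic WO3 peaks."""
    return 100.0 * area_002 / (area_002 + area_020 + area_200)

# Hypothetical integrated peak areas (arbitrary units):
print(f"{facet_002_proportion(14.8, 45.0, 40.2):.2f}%")  # 14.80% for these inputs
```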
Optical properties
The UV-Vis absorption spectra and PL spectra of the annealed WO3 arrays are illustrated in Fig. 5. The inset in Fig. 5a shows the corresponding Tauc plots of the band gap. The four arrays with nanoflake (W-1), nanoplate (W-5), nanoflower (W-6), and nanorod (W-7) morphologies have almost identical absorption spectra, with the absorption edge at ca. 466 nm.
The optical band gap (Eg) of the WO3 samples could be calculated using the Tauc relation: 30 αhν = A(hν − Eg)n (the equation body was lost in extraction; this is the standard form), where α is the absorption coefficient, h is Planck's constant, ν is the frequency of the radiation, A is a constant, and n depends on the nature of the optical transition. The calculated Eg values of the WO3 nanoflakes (W-1), nanoplates (W-5), nanoflowers (W-6), and nanorods (W-7) are 2.66 eV, 2.65 eV, 2.63 eV, and 2.68 eV, respectively, indicating that the morphology had little effect on the band gap of WO3. All the samples are consistent with the energy gap of monoclinic WO3 (2.6-2.8 eV). 31 Moreover, the absorption edge of the WO3 nanoflowers exhibits a slight redshift in the UV-Vis spectrum.
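The sketch below shows one common way to automate this Tauc-plot extrapolation; the absorption data are synthetic, and the choice n = 2 (indirect transition, often used for monoclinic WO3) is an assumption rather than a detail given in the paper.

```python
import numpy as np

# Synthetic absorption data mimicking an indirect gap near 2.65 eV.
h_nu = np.linspace(2.4, 3.2, 200)                   # photon energy (eV)
alpha = 1e4 * np.clip(h_nu - 2.65, 0.0, None) ** 2  # absorption coefficient

n = 2                                # indirect-transition exponent (assumed)
y = (alpha * h_nu) ** (1.0 / n)

# Fit the rising linear region and extrapolate to y = 0 to estimate Eg.
mask = y > 0.2 * y.max()
slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
print(f"Eg ~ {-intercept / slope:.2f} eV")          # ~2.6-2.7 eV for these inputs
```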
The recombination of photogenerated charge carriers was evaluated via PL measurements. 32 Fig. 5b shows the PL spectra of the WO3 arrays excited at 325 nm at room temperature. It can be seen that the WO3 nanoflowers (W-6) show a lower fluorescence peak, indicating a lower recombination probability of the photo-excited electron-hole pairs. In short, the larger fraction of exposed (002) facets of the WO3 nanoflowers (W-6) is beneficial to the separation of photo-excited electrons and holes, so more photogenerated carriers participate in the photoelectrocatalytic reaction.
Photoelectrochemical performances
Linear sweep voltammograms (LSV) (Fig. 6a) and amperometric I-t curves (Fig. 6b) of the annealed WO3 were measured with the arrays as photoanode materials. The photocurrent density of the WO3 nanorod (W-7) photoanode is 0.43 mA cm−2 at 1.23 V vs. RHE, while those of the nanoflake (W-1) and nanoplate (W-5) photoanodes are 0.67 and 0.72 mA cm−2, indicating that morphology engineering of the WO3 arrays has important implications for PEC performance. Moreover, the nanoflower (W-6) exhibits the highest photocurrent density, reaching 1.10 mA cm−2 at 1.23 V vs. RHE. The photocurrent increase is mainly due to the higher fraction of exposed (002) facets, and the 3D nanoflowers exhibit unique optical properties, providing more reaction sites and enhanced charge transport.
The photocurrent density-time (I-t) curves shown in Fig. 6b were obtained to determine the photocurrent stability of the WO3 arrays. Clearly, the photocurrent densities drop close to zero when the light is off, and they increase rapidly and stabilize at a certain value under illumination. All WO3 arrays showed a fast and uniform photocurrent response. 33 The four photoanodes have similar curves and stable photocurrents, indicating that this is a good method for obtaining stable WO3 arrays from ammonium metatungstate. Fig. S1† shows the electrochemical impedance spectroscopy (EIS) of the WO3 arrays; the semicircle radius in the Nyquist curve reflects the charge transfer at the electrode/electrolyte interface. The WO3 nanoflower shows a smaller semicircle radius, which indicates a lower charge transfer resistance and a higher separation efficiency of photogenerated electron-hole pairs.
Photocatalytic and photoelectrocatalytic activity
The results of MB degradation are shown in Fig. 7. Fig. 7a and b show the photocatalytic (PC) activity (illumination without bias) of the WO3 arrays and the first-order kinetics curve fitting. It can be seen that the WO3 nanoflowers have the highest photocatalytic activity, degrading 66.39% of MB within 80 min. The degradation efficiencies of the WO3 nanoplates, nanoflakes and nanorods are 58.64%, 55.09% and 45.59%, respectively, proving that the photocatalytic activity is determined by the morphology. The photocatalytic degradation of MB by the WO3 arrays follows the linear first-order kinetics equation: 34 ln(C0/Ct) = kt (the equation body was lost in extraction; this is the standard pseudo-first-order form), where C0 and Ct are the MB concentrations at the start of irradiation and at time t, and k is the degradation reaction rate constant, which can be used to compare the performances of catalysts. The highest PC degradation reaction rate, for the WO3 nanoflowers, was 0.013 min−1, which is 1.86 times that of the WO3 nanorods (0.007 min−1). The degradation rate of methylene blue by TiO2 nanorods reached 25% within 5 h, 35 while pure ZnO decolorized 35% of MB dye after 1.5 h. 36 The photocatalytic degradation activity of WO3 towards MB under visible light was higher than that of other reported metal oxides, indicating that WO3 has great potential for MB degradation. Fig. 7c shows the degradation of MB by various techniques using the WO3 nanoflowers. After 80 min, only 6% of MB was removed by direct photolysis, showing that MB is not easily degraded under solar illumination alone. The photoelectrocatalytic (PEC) degradation rate of MB increased to 94.9% after applying an electrical bias potential of 0.8 V to the WO3 nanoflower array electrode, which can be attributed to the suppression of electron-hole recombination by the bias potential. Therefore, the synergistic effects of photocatalysis (PC) and electrooxidation (EC) can significantly improve the degradation efficiency of MB.
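As an illustration of this rate-constant extraction, the sketch below fits k from ln(C0/Ct) versus t; the concentration values are synthetic numbers chosen to be roughly consistent with the ~66% removal in 80 min reported above, not data from the paper.

```python
import numpy as np

t = np.array([0.0, 20.0, 40.0, 60.0, 80.0])   # irradiation time (min)
c = np.array([10.0, 7.7, 5.9, 4.5, 3.4])      # MB concentration (mg/L), synthetic

k, _ = np.polyfit(t, np.log(c[0] / c), 1)     # slope of ln(C0/Ct) vs. t
print(f"k ~ {k:.3f} min^-1")                  # ~0.013 min^-1 for these inputs
```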
The effects of different radical scavengers on the photocatalytic degradation of MB are shown in Fig. 7d. The photocatalytic efficiency noticeably decreased after adding ammonium oxalate, indicating that h+ was the dominant active species. The degradation efficiency of MB decreased by 17.59% (to 48.8%) after methanol addition, suggesting that ·OH also has some effect on the photocatalytic efficiency. When p-benzoquinone was added to the reaction solution, the photocatalytic activity was similar to that of the sample without scavengers, confirming that ·O2− was not an active species for MB degradation. The main reason may be that the conduction band of WO3 is more positive than E(O2/·O2−) (−0.33 eV vs. NHE), 37 so electrons cannot react with O2 to generate ·O2−. The valence band of WO3 is more positive than E(·OH/OH−) and E(·OH/H2O), so the holes in the VB of WO3 can react with OH− and H2O to generate ·OH. 38 Moreover, the photogenerated holes in WO3 can degrade MB directly.
The photoelectrocatalytic stability of the WO3 nanoflower array for MB degradation was measured by a cyclic experiment (Fig. 7e). After five cycles, the degradation rate of the WO3 array was 79%, a decrease of 16.7%. Moreover, the XRD patterns and FESEM images of WO3 after the cyclic degradation are shown in Fig. S2.† There is no difference in phase or morphology before and after catalysis. These results indicate that the WO3 arrays are stable.
Conclusions
Vertical WO3 nanoarrays with different morphologies were successfully synthesized on FTO substrates via a one-step hydrothermal process. The morphology of WO3 was sensitive to the acidity of the solution and changed from nanoflakes, through nanoplates and nanoflowers, to nanorods as the volume of HCl increased from 1 mL to 7 mL. In particular, the WO3 nanoflowers demonstrated the highest photocurrent density compared to the other morphologies. The enhanced charge transport and the specific hexagonal nanoflower structure of WO3 contribute to the significant PEC activity. Moreover, the WO3 nanoflower array exhibited better performance than the other morphologies in the photocatalytic degradation of MB, demonstrating that the morphology of WO3 plays an important role in the PEC and photocatalytic performance. The scavenger experiments on MB degradation over the WO3 nanoflowers confirm that photogenerated holes (h+) play a vital role in the reaction.
Conflicts of interest
There are no conflicts to declare. | 2021-08-27T17:03:11.194Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "6410fc4770ee34bb55043ffdb8729d2b2fccbc4f",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1c5b49b7da4b66c59c5baabfd4c50f415afe65d8",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
231705229 | pes2o/s2orc | v3-fos-license | Airborne eDNA Reflects Human Activity and Seasonal Changes on a Landscape Scale
Recent research on environmental DNA (eDNA), genetic material shed by organisms into their environment that can be used for sensitive and species-specific detection, has focused on the ability to collect airborne eDNA released by plants and carried by the wind for use in monitoring terrestrial plant populations, including detection of invasive and endangered species. Another possible application of airborne eDNA is to detect changes in plant communities in response to activity or changes on a landscape-scale. Therefore, the goal of this study was to demonstrate how honey mesquite, blue grama, and general plant airborne eDNA changes in response to human activity on a landscape-scale. We monitored airborne eDNA before, during, and after a rangeland restoration effort that included honey mesquite removal. As expected, restoration activity resulted in a massive increase in airborne honey mesquite eDNA. However, we also observed changes in abundance of airborne eDNA from the grass genus Bouteloua, which was not directly associated with the restoration project, and we attribute these changes to both human activity and seasonal trends. Overall, we demonstrate for the first time that activity and changes on a landscape-scale can be tracked using airborne eDNA collection, and we suggest that airborne eDNA has the potential to help monitor and assess ecological restoration projects, track changes due to global warming, or investigate community changes in response to encroachment by invasive species or extirpation of threatened and endangered species.
INTRODUCTION
Monitoring is a critical component of successful restoration before, during, and after management actions (Walters 1986; Lake 2001; Galatowitsch 2012). For example, conventional field-based plant community monitoring is generally accomplished via methods such as the line-point intercept, belt transect, and gap intercept, using quadrats, transects, or points (Herrick et al., 2005a; Elzinga et al., 1998). Monitoring methods can provide detailed information about a study site, but they can also be time-consuming and withdraw logistical and financial resources from low-budget projects. In addition, conventional monitoring activity typically incites elevated disturbance to target areas, which may be counterproductive to restoration, and the results can vary based on the intensity of the sampling (Herrick et al., 2005a; Herrick et al., 2005b; Garrard et al., 2008).
A novel method that could address the limitations of conventional monitoring relies upon collection and analysis of environmental DNA (eDNA), the genetic material shed by an organism into its environment and collected by researchers from environmental samples such as soil, water, or air (Thomsen et al., 2012; Barnes and Turner 2016). A primary benefit of eDNA monitoring is the ability to detect organisms without the need for a captured target specimen or direct tissue sample. Indeed, recent reviews by Ruppert et al. (2019) and Makiola et al. (2020) have predicted an increasingly large role for eDNA analysis and metabarcoding in community surveillance and in monitoring landscape changes in response to activities such as ecological restoration.
Research into eDNA has focused primarily on aquatic and sediment samples, including assessments of invasive species detection, endangered species monitoring, and the ecology of eDNA (Willerslev et al., 2007; Goldberg et al., 2011; Lodge et al., 2012; Taberlet et al., 2012; Barnes et al., 2014; Barnes and Turner 2016). Recently, Johnson et al. (2019a) demonstrated that airborne eDNA samples, genetic material collected from air, could detect both anemophilous and non-anemophilous plant species. These findings portend the utility of airborne eDNA analysis for broad conservation, management, and research applications such as whole plant community monitoring. However, the ecology of airborne eDNA, including the spatial and temporal dynamics of airborne eDNA abundance relative to seasonal changes and other activity on the landscape, remains unexplored and represents a critical gap in current understanding that should be addressed before the potential management and conservation impact of airborne eDNA monitoring can be maximized.
Like eDNA research in general, the existing research addressing how eDNA responds to biotic and abiotic landscape changes comes primarily from aquatic systems. For example, Bista et al. (2017) found that macroinvertebrate eDNA levels displayed community-level shifts in diversity throughout the year due to changes in species ecology and biotic and abiotic factors. Additionally, Buxton et al. (2018) found, when examining eDNA of the great crested newt (Triturus cristatus) in aquatic and sediment samples, that detection varied throughout the year based on habitat suitability and species ecology. Work has also been done in ocean systems, with studies ranging from examining shark diversity responses to anthropogenic disturbance to understanding how anthropogenic activities such as oil spills and development impact micro- and macro-scale coastal communities (Bakker et al., 2017; Xie et al., 2018; DiBattista et al., 2020). Sun et al. (2019) used eDNA metabarcoding to understand Diptera and other organism populations in human-made roadside stormwater ponds. Additionally, Klymus et al. (2017) examined how anthropogenic uranium containment ponds impacted the biodiversity of vertebrate species. Early applications also include monitoring of the re-introduction of the Rhine sculpin (Cottus rhenanus) into its native range (Hempel et al., 2020) and at-risk, pre-restoration coral community monitoring (Nichols and Marko 2019).
Despite the apparent focus on aquatic systems to date, similar eDNA applications will likely also benefit conservation and management in terrestrial habitats. For example, airborne eDNA analysis could be used to track the changes in plant community composition throughout a restoration project and afterward to provide information for robust adaptive restoration. Airborne eDNA analysis could also assess the impacts of climate change, assist in identifying the spread and location of rangeland invasive species, track endangered species, and detect disease within a restored site or before a restoration (Dejean et al., 2012; Huver et al., 2015; Scriver et al., 2015; Barnes and Turner 2016). With the use of airborne eDNA, tedious surveys that require large amounts of time could be replaced by a network of airborne eDNA traps. However, a critical first step in the evaluation of such methods is to examine the extent to which airborne eDNA reflects landscape changes. Human activity on a landscape-scale encompasses a large variety of activities that could influence airborne eDNA dynamics, such as building roads, farming, construction, habitat fragmentation, and invasive species introduction, to name a few. We believe that airborne eDNA (both species-specific and global) can be used to reflect landscape-scale changes from human activity.
In our study, we used ecological restoration as one example of human activity on the landscape. Our goal was to demonstrate the use of airborne eDNA as a surveillance tool during removal of honey mesquite (Prosopis glandulosa) on the Texas Tech University native short-grass prairie. Specifically, we examined whether airborne eDNA changed in response to activity on the landscape by: 1) tracking the removal of honey mesquite during a rangeland restoration project; and 2) monitoring changes within the plant community using the Bouteloua genus and global plant eDNA as surrogates. The results of this work will help quantify the feasibility of using airborne eDNA to monitor human activities such as restoration and other landscape management.
Study Site
The 130-acre Texas Tech University Native Rangeland (33.60327 N, −101.9003 W) acts as a natural area for teaching and research within the Department of Natural Resources Management ( Figure 1). The site consists of short-grass prairie, with a large variety of bunchgrasses, forbs, and cacti, and a large population of honey mesquite due to the suppression of fire and grazing (Ansley et al., 2001). This site, despite being fragmented and isolated within an urban matrix, represents a native short-grass prairie in post-climax seral stage, and has not been disked, plowed, or reseeded.
Restoration Project
A rangeland restoration project was performed by the Texas Tech Student Chapter of the Wildlife Society, students from the NRM 4309: Range Wildlife Habitat course, and Texas A&M Forest Service in November and December 2017. During the restoration, honey mesquite, a thorny shrub/tree that can re-sprout and form multi-stem thickets (Ansley et al., 1997), was targeted for removal due to its negative impact on forage production, grazing, and local grass biodiversity (Mohamed et al., 2011). The project was completed in two treatments, each lasting 3 days. The first, 1.2-hectare treatment began on November 18, 2017. This treatment consisted of cutting off the entirety of the honey mesquite aboveground biomass (i.e., main stem and adjacent minor stems), and then chipping the cut material on site and treating the stump with 25% Triclopyr and 75% diesel with a 1% blue marker dye. This first effort was completed on the eastern side of our study site (Figure 1). The second, 1.4-hectare treatment began on December 2, 2017 and was changed to a reduced treatment due to logistical constraints, where only the larger mesquite trees (i.e., main and adjacent minor stems) were targeted and removed. As with the previous treatment, the cut material was chipped, and the stump was treated with the same herbicide mix. This reduced treatment occurred directly west and adjacent to the first total treatment (Figure 1). Both treatments attempted to kill the sprout buds of the underground main stem with both cutting and herbicide (Fisher, 1950).
eDNA Collection, Extraction, and Amplification
To examine how airborne eDNA responds to human activity on a landscape-scale, we collected airborne eDNA before, during, and after the restoration project. To collect airborne eDNA, we set up three eDNA sampling locations: one south of the restoration taking place ("restoration traps"), one plot along the northwest edge of the study site ("north traps") and one plot to the east of the restoration treatments ("east traps"; Figure 1). Within each location, three BSNE trap arrays were installed; each array (Figure 2) consisted of two triangular traps 0.914 and 0.406 m above the ground (some text was lost here in extraction; this reconstruction follows the trap counts given in the statistical analysis below). Each trap has an opening at the tip of the triangular piece of metal and a metal mesh vent on top, allowing air to enter through the opening and flow out through the metal mesh on top, depositing carried material into a collection tray below (Zobeck 2006). Each triangular trap is attached to a metal sheet that acts as a sail to consistently orient the traps into the wind and maximize the amount of material collected. Traps were sampled four times between November 3 and December 8, 2017. Sampling Events I (November 10th) and II (November 17th) took place 8 and 1 day before the first total treatment restoration, respectively. Event III (November 27th) occurred 9 days after the first total treatment restoration, and Event IV (December 8th) occurred 6 days after the reduced treatment restoration. The collection of repeated samples through time allowed us to examine the response in airborne eDNA to activity associated with the restoration effort. At each sampling event, each trap was rinsed with approximately 1 L deionized water, and the water was collected into individual, sterile 1 L bottles. Since the BSNE arrays each have two collection traps, the trays at a single trap were washed and combined into a single water sample to avoid pseudoreplication (Hurlbert 1984). Rinse water samples were then transported to the laboratory within a cooler and vacuum filtered with 1 µm Isopore membrane filters. Filters were stored at −20°C for approximately 1 month. Next, DNA extractions were completed using a DNeasy PowerPlant Pro DNA Isolation Kit, which has demonstrated high efficiency in previous airborne eDNA analyses (Johnson et al., 2019b). We followed the manufacturer's protocol, except we added an extra grinding step with a sterile plastic pestle and frequent vortex agitation to ensure homogenization (Johnson et al., 2019b). Extracted genomic DNA was stored at −20°C until analysis (approximately 6 months later). To confirm that there was no contamination throughout this process, extraction blanks (i.e., sterile samples extracted alongside experimental samples as negative controls with just buffer and no filter) were processed with every extraction event. Additionally, we used sterile containers, bleached all laboratory surfaces, and used sterile gloves. Due to the nature of airborne eDNA, we could not develop confident field or filtration controls (i.e., we have not developed a method in which control samples are not exposed to the air representing our sample). As a result, we have only included extraction and PCR negative controls ("blanks").
To broadly characterize changes in airborne eDNA in response to the restoration activity, we used three different quantitative polymerase chain reaction (qPCR) assays: a honey mesquite species-specific primer, a Bouteloua genus assay, and a global plant primer. Species- and genus-specificity for the honey mesquite and Bouteloua assays, respectively, were confirmed in silico using NCBI Primer-BLAST (Ye et al., 2012) as well as by in-lab PCR confirmation using tissues of the nine most common plants found within our study site. We defined the limit of detection as the lowest amount of DNA at which any technical replicate amplified, and the limit of quantification as the lowest amount at which all technical replicates amplified. Operationally, we followed the low-copy qPCR recommendations of Ellison et al. (2006) and assigned all non-detections a value of zero rather than omitting them from the analysis, which would discard the information provided by samples in which fewer than all technical replicates amplified. We previously observed high rates of PCR inhibition in airborne eDNA samples (Johnson et al., 2019a; Johnson et al., 2019b), which led us to dilute samples in this study 1:10 with pure water. All qPCR reactions were completed on a QuantStudio 3 Real-Time qPCR machine (ThermoFisher Scientific, Foster City, California). The honey mesquite assay targeted the focal species of the restoration effort using 25 µl reactions with 1x PowerSYBR Green qPCR Master Mix, 1 µM forward and reverse primers (Johnson et al., 2019a; Table 1), and 2 µl diluted genomic DNA template. The thermocycling program for the honey mesquite assay began with an initial 95°C step for 10 min followed by 40 cycles of 15 s at 95°C and 1 min at 70.1°C, and a final melt curve analysis. The Bouteloua genus assay targeted grasses that represent the most common wind-pollinated species on our study site. Each 25 µl qPCR reaction contained 1x PowerSYBR Green qPCR Master Mix, 1 µM forward and reverse primers (Johnson et al., 2019a; Table 1), and 2 µl diluted genomic DNA template. The thermocycling program used an initial 95°C step for 10 min followed by 40 cycles of 15 s at 95°C and 1 min at 66°C, and a final melt curve analysis. Reactions in the honey mesquite and Bouteloua genus assays were quantified using five-point standard curves based on a 1:10 serial dilution of tissue-derived DNA from honey mesquite and Bouteloua gracilis, respectively. Non-detections were treated as zeros when averaged with other technical replicates (Ellison et al., 2006). For both assays, samples were run in triplicate and non-template controls were included to ensure no contamination occurred. Finally, as an indicator of changes in the overall plant community beyond the two focal groups, honey mesquite and Bouteloua spp., we amplified all plant eDNA in our samples with a global plant assay targeting the chloroplast trnL gene (Taberlet et al., 2007; Table 1). For this assay, each 25 µl qPCR reaction contained 1x PowerSYBR Green qPCR Master Mix, 1 µM forward and reverse primer concentrations, and 2 µl diluted genomic DNA template. The thermocycling program began with an initial 95°C step for 10 min followed by 32 cycles of 2 min at 94°C, 1 min at 55°C, and 30 s at 72°C, and a final extension at 72°C for 2 min (Craine et al., 2016). Since amplification with the global plant assay could be the result of a variable mix of plant eDNA sources, we could not create a standard curve for quantification.
Therefore, rather than absolute quantification of eDNA concentration in each reaction, we relied on comparison of the average number of cycles needed for the samples to display enough fluorescence to be considered positive (cycle threshold, determined using the default settings of the QuantStudio 3 Real-Time qPCR machine and abbreviated CT; Heid et al., 1996).
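For readers unfamiliar with standard-curve quantification, the sketch below shows the usual calculation: CT is regressed on log10 of the standard concentrations, the fit is inverted for unknowns, and amplification efficiency is derived from the slope. The concentrations and CT values here are hypothetical placeholders, not the assay data.

```python
import numpy as np

# Five-point 1:10 serial dilution of a tissue-derived standard (ng/uL), synthetic.
std_conc = 1e-2 * 10.0 ** -np.arange(5)
std_ct = np.array([24.1, 27.5, 30.9, 34.2, 37.6])   # synthetic CT values

slope, intercept = np.polyfit(np.log10(std_conc), std_ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0           # 1.0 would mean 100%

def ct_to_conc(ct: float) -> float:
    """Invert the standard curve to estimate starting concentration."""
    return 10.0 ** ((ct - intercept) / slope)

print(f"efficiency ~ {efficiency:.0%}; CT 32 -> {ct_to_conc(32.0):.2e} ng/uL")
```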
To analyze whether airborne eDNA changed in response to restoration activity, we completed three separate repeated-measures analyses of variance (rmANOVAs) in IBM SPSS Statistics (IBM Corp. 2017), separately analyzing the honey mesquite, Bouteloua genus, and global plant eDNA results. In each analysis, the sampling units consisted of the nine total BSNE traps, and the three replicate traps at each plot were combined into experimental units to make comparisons between locations. Sampling event represented the repeated measure in our experimental design; we interpreted Wilks' Lambda as our test statistic, and we assumed α = 0.05 for determination of statistical significance. Following significant rmANOVAs, we used Tukey-Kramer post-hoc tests to examine how the different trap locations varied from one another for each sampling event.
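A rough Python analogue of the event effect in this design is sketched below using statsmodels' AnovaRM; the eDNA values are synthetic, and the sketch covers only the within-subject (event) factor, so the between-location contrasts and the Wilks' Lambda interpretation from SPSS are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
records = [
    {"trap": t, "event": e,
     "edna": rng.gamma(2.0, 1.0 + (2.0 if e == "III" else 0.0))}
    for t in range(9)
    for e in ["I", "II", "III", "IV"]
]
res = AnovaRM(pd.DataFrame(records), depvar="edna",
              subject="trap", within=["event"]).fit()
print(res)  # F test for the repeated (event) factor
```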
When collecting airborne eDNA, it is common to see very small leaf fragments in the samples; however, occasionally large plant leaves can be collected, which in turn results in extremely high amounts of target DNA in the traps. While these detections are real, the extremely large amounts of airborne eDNA may obfuscate underlying trends or patterns. Therefore, we removed two outliers from consideration: one from the global eDNA assay during Event III in the east traps (>100x more eDNA than the other two traps at the same site), and the other from the honey mesquite assay from Event III, also in the east traps (600x more eDNA than replicate traps).
Honey Mesquite eDNA
We found during Events I and II that the average honey mesquite eDNA quantities for all three trap locations were consistently low (Table 2). After the total restoration treatment, the average quantity of honey mesquite eDNA spiked in the restoration traps but remained low in the east traps and undetected in the north traps (Table 2). At Event IV, we observed honey mesquite eDNA in high concentrations in the restoration traps, with low concentrations for the east traps and no detections for the north traps (Table 2). Overall, when comparing the amounts of airborne eDNA for each of our three sampling locations, eDNA significantly differed between trap locations (rmANOVA: F3,19 = 7.36, p = 0.0018; Figure 3). Pairwise comparisons revealed that for Events I and II, the east traps were significantly different from both the restoration (p = 0.0052 and p < 0.0001, respectively) and north trap locations (p = 0.0052 and p < 0.0001), but the north and restoration traps were not significantly different from each other for either Event (Table 3). However, after the first restoration treatment (Event III), the restoration traps were significantly different from both the north traps (p < 0.0001) and the east traps (p < 0.0001), and the north and east traps were not significantly different (p = 0.98; Table 3). Lastly, during Event IV, the restoration traps were again significantly different from both the north (p < 0.0001) and east (p < 0.0001) trap locations, while the north and east traps did not significantly differ from one another (p = 0.996; Table 3). Amplification percentage (i.e., number of samples displaying any amplification, regardless of eDNA quantification) showed a large spike for the restoration traps, rising from 33% to 100% for both Events III and IV (Figure 4A). This spike was mirrored slightly in the east traps but not in the north traps.
Across all honey mesquite qPCR plates, negative controls failed to amplify, as expected, and qPCR efficiencies averaged 67% with an average R-squared of 0.99. The limit of detection and limit of quantification for the honey mesquite primer were both 3.8 × 10−4 ng/μL.
Bouteloua Genus eDNA
The quantity of Bouteloua airborne eDNA changed over time as well. At Event I the quantity of Bouteloua eDNA varied greatly between trap locations, with the east traps having a larger quantity than the restoration and north traps. Between Events I and II the amount of Bouteloua eDNA declined (Table 2). After the first full treatment restoration event, increases in the quantity of Bouteloua DNA were detected in the east and north traps while the restoration traps stayed the same. After the reduced treatment, there were no Bouteloua detections other than by the restoration traps (Table 2). Across all four Events, we observed significant differences in the amount of Bouteloua eDNA between trap locations (rmANOVA: F3,22 = 329.85, p < 0.0001; Figure 5). At Event I the east traps were significantly different from both the restoration (p = 0.0069) and north (p = 0.0198) traps, whereas the restoration and north traps did not significantly differ (p = 0.8934; Table 3). At Event II, none of the trap locations significantly differed from one another (Table 3). At Event III, the east traps were significantly different from both the restoration (p < 0.0001) and north (p < 0.0001) traps, while the north and restoration traps did not differ from one another (Table 3). Lastly, at Event IV, the restoration traps were significantly different from both the east (p = 0.0319) and north (p = 0.0319) traps, while the east and north traps were not significantly different (Table 3). Bouteloua amplification percentages were consistently 66% or higher in Events I and II, and Event III notably had 100% amplification for all groups (Figure 4B). At Event IV, we only observed Bouteloua amplification in the restoration traps (Figure 4B).
Across all Bouteloua qPCR plates, negative controls failed to amplify, as expected, and qPCR efficiencies averaged 83% with an average R-squared of 0.99. The Bouteloua genus primer's limit of detection and limit of quantification were both 6.7 × 10−6 ng/μL.
Global Plant eDNA
For the first two sampling events, the average cycle threshold (CT; note that decreasing CT values indicate increasing eDNA concentrations) for all three trap locations was consistent and averaged between approximately 29.1 and 30.1 cycles (Table 2). After the total treatment restoration, all three trap locations detected more global plant eDNA, with a large spike in the amount of restoration trap airborne eDNA. Lastly, the average CT values for the samples taken after the reduced treatment were lower compared to those in Event III (Table 2). We found a significant effect of sampling Event on the amount of global eDNA (rmANOVA: F3,19 = 8,158.88, p < 0.0001; Figure 6). Pairwise analyses revealed that at Event I, none of the sampling locations were significantly different from each other (Table 3). At Event II, the restoration traps were significantly different from both the east (p = 0.0142) and north (p < 0.0001) traps, while the north and east traps also differed significantly (p = 0.0251; Table 3). At Event III, following the first restoration, the restoration traps were significantly different from both the north (p < 0.0001) and east (p < 0.0001) traps, while the north and east traps did not differ from one another (Table 3). Finally, at Event IV, the restoration traps were again significantly different from both the north (p < 0.0001) and east (p < 0.0001) traps, and the north and east traps remained non-significantly different (Table 3). All three trap locations amplified global plant airborne eDNA 100% of the time across all four events (Figure 4C). Across all global plant qPCR plates, negative controls failed to amplify, as expected.
DISCUSSION
Using three different qPCR assays targeting honey mesquite, Bouteloua genus, and a global plant chloroplast gene, we demonstrated that airborne eDNA is affected by human activity during a restoration event, and we argue that these changes track intuitively through time with different stages of the restoration. Notably, we found that airborne eDNA from species that were not even the target of the restoration changed over time. Collectively, our observations demonstrate that airborne eDNA reflects human activity and phenological changes on a landscape-scale and point to an expanding role that airborne eDNA surveillance may play in terrestrial conservation.
Since the restoration focused on the removal of honey mesquite, we first quantified how the amount of honey mesquite eDNA changed over the course of the restoration. We found that there was a significant difference between the restoration eDNA traps and the east and north traps. Before the total treatment restoration took place, mesquite eDNA quantity and detection were low. It is useful to consider the ecology of honey mesquite to put these results into context. Honey mesquite is insect-pollinated and flowers in the spring before losing its leaves and going dormant for the winter months (Lopez-Portillo et al., 1993; Golubov et al., 1999). At the time of our restoration effort, honey mesquite was inactive, with most of its leaves gone and no flowering or pollination occurring. Accordingly, a "typical" paucity of airborne honey mesquite eDNA was illustrated by the low concentrations and detection percentages for this species in our first two sampling Events (Figure 3 and Figure 4A). However, after the first restoration event, Event III demonstrated a large increase in the concentration of honey mesquite airborne eDNA for the restoration traps. A spike in airborne eDNA occurred again at Event IV but was not as large, likely because the second restoration was farther away and of lower intensity. Together, these results demonstrate the potential for airborne eDNA analysis to distinguish between different types or intensities of activity on a landscape-scale. Additionally, airborne eDNA analysis may reveal spatial information. The amplification percentage of honey mesquite detected for each restoration group trap jumped to 100% for both Events III and IV (Figure 4A). We simultaneously observed the amplification percentage in the east traps (i.e., away from the site of the restoration activity) increase after the restorations, though concentrations were not as high as in the restoration traps, suggesting that honey mesquite eDNA traveled downwind to the east trap grouping.
On the other hand, the concentration of Bouteloua airborne eDNA, which was not the target of restoration activities, also changed over time. In general, Bouteloua eDNA decreased throughout our study. Of the four Bouteloua species on our study site, namely blue grama (Bouteloua gracilis), buffalo grass (Bouteloua dactyloides), sideoats grama (Bouteloua curtipendula), and sixweeks grama (Bouteloua barbata), blue grama is by far the most common species. The steady decreasing trend that was observed corresponds to the ecology of blue grama, which is summer-active, releases pollen in early fall, and then goes dormant for the winter months (Riegal 1941; Anderson 2003). Johnson et al. (2019a) monitored the changes in Bouteloua genus eDNA during the early fall and showed there was a spike in Bouteloua airborne eDNA associated with the early fall pollination event. However, the results in the present study trend in the opposite direction and appear to document that Bouteloua airborne eDNA concentrations decline in alignment with approaching winter dormancy.
Therefore, if not impacted, Bouteloua airborne eDNA would likely have decreased uniformly throughout the study period. This trend was observed for the first two sampling events, especially in the eastern traps, where large amounts of blue grama grow. However, after the first total treatment restoration we observed a modest increase in Bouteloua airborne eDNA concentrations during Event III. To give further evidence that Bouteloua airborne eDNA was impacted, we can examine the concentrations in conjunction with the percentage of traps that significantly detected Bouteloua DNA (Figure 4B and Figure 5). In addition to the increase in Bouteloua eDNA concentration after the total treatment, we saw each trap location detect Bouteloua eDNA 100% of the time, which should be unlikely to occur naturally since blue grama was becoming dormant for the year at the time of this study. For Event IV, the more limited restoration activity did not promote Bouteloua detection in either north or east trap locations, but we observed 67% amplification at the restoration trap location. Again, this pattern points to a spatial interpretation of airborne eDNA results, with restoration traps closest to the restoration activity demonstrating impacts from the reduced treatment and the farther east and north traps remaining unaffected.
We observed that Bouteloua eDNA appeared to impact all three trap locations, whereas honey mesquite eDNA, the subject of the restoration, impacted only the restoration traps and, moderately, the eastern traps. This could be a result of the type of eDNA-bearing material being released into the air. The Bouteloua DNA is assumed to come primarily from blue grama, which was at the tail end of its pollination season. Any disruption would result in a plume of leftover pollen being released. Bouteloua pollen is adapted to travel on the wind, so it would be expected to travel greater distances and affect the eastern and northern trap groupings. Honey mesquite, on the other hand, is insect-pollinated and had sparse leaves available when the restoration took place. As a result, the DNA being released into the air was most likely from living wood fragments, cells, and a small amount of leaf material. This type of material is not adapted to travel on the wind, so a spike at the closest trap site with only slight increases in detection percentage elsewhere would make sense.
Finally, we observed a significant change over time in the global plant airborne eDNA in our traps closest to the restoration treatments. In this study, the global plant airborne eDNA acts as a qualitative surrogate for the plant community. Species-specific primers are still rare for most plant species, so without a metabarcoding approach, global plant eDNA allows us to monitor general patterns across all plants in addition to focusing on single species (Wallinger et al., 2012). For the first two Events, the global assay showed consistent levels of airborne eDNA across all three sites. After the total treatment restoration occurred, there was a large spike in restoration trap site eDNA and smaller increases in both the north and east traps. After the limited restoration activity, the restoration trap site showed a higher amount of airborne eDNA compared to the other two trap locations, which were unaffected. As shown in the Bouteloua analysis, the amount of Bouteloua eDNA (the most common genus on the landscape) was steadily dropping over the study period, which is reflected in the east and north traps showing lower global eDNA amounts. This again points to spatial information contained within airborne eDNA analyses as well as the fact that airborne eDNA analysis can distinguish between different activity intensities.
Overall, we have shown that airborne eDNA can assist with the evaluation of current species on a landscape-scale, and that airborne eDNA reflects human activity and seasonal changes on a landscape-scale. In a conservation or management setting, we believe airborne eDNA analysis can aid site selection and monitoring, which will prove especially valuable if it can supplement or even provide an alternative to time-consuming and potentially disruptive conventional plant community surveys. To maximize the conservation potential of airborne eDNA analysis, further examination of its ability to detect rare species on a landscape-scale is warranted. Furthermore, combining airborne eDNA analysis with metabarcoding approaches could allow airborne eDNA to act as a plant community monitoring tool, further increasing its utility for conservation, management, and research.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
MJ designed the research, collected and analyzed the data, and wrote the manuscript. RC collected the data and commented on the manuscript. BG performed the restoration, analyzed the data, and commented on the manuscript. DL performed the restoration and commented on the manuscript. MB designed the research, collected and analyzed the data, commented on the manuscript, and provided funding.
FUNDING
This work was funded by Texas Tech University as startup funding to MB. | 2021-01-26T14:09:31.659Z | 2021-01-25T00:00:00.000 | {
"year": 2021,
"sha1": "68039141b950e8f544cd783a2cdb096582fe6644",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fenvs.2020.563431/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "68039141b950e8f544cd783a2cdb096582fe6644",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
56387517 | pes2o/s2orc | v3-fos-license | The Böhler ’ s angle in population of central Serbia – a radiological study
Background/Aim. The values of the Böhler's angle (BA) are relevant parameters for the diagnosis, management and prognosis of calcaneal fractures and their outcome. The range of normal values of the BA in adults varies depending on the examined population, age, gender or ethnicity. The aim of this study was to determine the range of normal values of the Böhler's angle in the central part of Serbia. Methods. The lateral foot radiographs of 225 subjects (111 males and 114 females) without calcaneal fractures, divided into 6 age groups, were examined to determine the normal values of the Böhler's angle using the IMPAX 6.5.2.114 Enterprise software. The obtained values of the Böhler's angle were compared between genders and among age groups using appropriate statistical tests. Results. The mean Böhler's angle in the observed population was 34.06°, ranging from 25.1° to 49.5°, and was higher in males than in females. The gender difference was statistically significant. The distribution of the mean BA across the age groups showed a tendency to decrease with age, and the highest BA was found in the youngest group. Conclusion. The findings presented in this paper confirmed the existence of a wide range of BA values as well as its gender and age differences.
Introduction
The calcaneus is the most common site of tarsal bone fractures 1. These fractures have very variable patterns 2; they can be divided into intra- and extra-articular, represent the most common fractures of the tarsal bones (up to 75%), and thus account for 2% of all fractures 3,4. The posterior articular surface of the calcaneus is usually depressed as a result of the fracture. The Böhler's angle (BA) can be used to evaluate the loss of calcaneal inclination when this angle is reduced, and it indicates the degree of proximal displacement of the calcaneal tuberosity. It is a relevant parameter for the diagnosis, management and prognosis of the calcaneal fracture outcome 5. Böhler's angle is named after surgeon Dr. Lorenz Böhler 6 (1885-1973), who introduced this angle in 1931 as a radiological method in the diagnosis of compression fractures of the calcaneus. It was noted in earlier studies that there was a reduction of this angle in intra-articular but also in most extra-articular fractures of the calcaneus 7. Böhler's angle is otherwise called the tuber joint angle, calcaneal angle or salient angle. The range of normal values in adults, without the presence of fractures, is from 25° to 40°, but this value varies depending on the examined population and has been found to be in the range of 14° to 58.1° 1,5. In the first paper about the BA, angles of 30°-35° were mentioned as normal values. In some other textbooks, ranges from 20° to 45° were reported as normal values of this angle [8][9][10][11][12][13][14]. The range of normal BA varies depending on the gender, age and ethnicity of the observed population. The assessment of the BA is important in the diagnosis, choice of treatment and prognosis of calcaneal fractures, and it can be an indicator of operative treatment success 5. The measurement is usually performed on lateral foot radiographs at the intersection of two lines, or the angle can be determined using computed tomography (CT). The first line important in the construction of this angle is obtained by connecting the highest points of the anterior and posterior articular surfaces of the calcaneus. The second line connects the same point on the posterior articular surface and the most prominent (most superior) point of the tuberosity of the calcaneus. Some earlier studies have reported difficulties in the precise measurement of the BA on lateral foot radiographs and the possibility of variation of its value with increasing obliquity of the lateral fluoroscopic image 1. The aim of this study was to determine the range of normal values of the Böhler's angle in the population of the central part of Serbia.
Methods
This study included 225 randomly selected subjects (111 males and 114 females) in order to determine the normal values of the BA in the population of central Serbia. Weight-bearing lateral foot and ankle joint radiographs were examined. The recordings were made with the foot placed on a solid surface. The subjects were without calcaneal fractures. The average age in the study sample was 43 years (range 15 to 75). The subjects were divided into 6 age groups in order to examine the changes of BA values with age and the statistical significance of differences among the groups.
The exclusion criterion was any congenital or acquired deformity of the foot or arthritic change. The study was conducted in the Clinical Centre "Kragujevac" from 1st January 2014 to 31st March 2016.
The computed radiographs were obtained on a digital X-ray system (Duo Diagnost, Philips Medical Systems, the Netherlands). Images were reviewed on a Picture Archiving and Communication System (PACS), and the angles were measured using the IMPAX 6.5.2.114 Enterprise software (Agfa Healthcare, Belgium). Böhler's angle was measured at the intersection of the line passing through the most prominent points of the posterior and anterior articulating calcaneal facets and the line connecting the posterior articulating calcaneal facet to the most prominent point of the calcaneal tuberosity (Figure 1). The precision of measurement was 0.1°. All radiographs were analyzed by two independent observers. The intra-class correlation coefficient (ICC) was used to evaluate inter-observer reliability, and ICC > 0.8 was considered excellent agreement.
Fig. 1 -Measurement of Böhler's angle.
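As a computational illustration of this measurement, the sketch below computes the angle at the posterior facet landmark between the two lines described above. It is a minimal sketch, not part of the original study: the landmark coordinates are hypothetical pixel positions on a lateral radiograph, chosen only to yield a value near the reported mean.

```python
import numpy as np

def boehlers_angle(post_facet, ant_facet, tuberosity):
    # Angle (in degrees) at the posterior articular facet between the line to
    # the highest point of the anterior facet and the line to the most
    # superior point of the calcaneal tuberosity.
    v1 = np.asarray(ant_facet, float) - np.asarray(post_facet, float)
    v2 = np.asarray(tuberosity, float) - np.asarray(post_facet, float)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Hypothetical landmark coordinates (pixels); illustrative only.
print(round(boehlers_angle((100.0, 100.0), (180.0, 100.0), (158.0, 139.1)), 1))  # ~34.0
```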
The results were analyzed using a statistical program (IBM SPSS Statistics 20). The analysis included descriptive and analytical statistical methods. Normality of the data distribution was tested by the Kolmogorov-Smirnov and Shapiro-Wilk tests. The Mann-Whitney U test was applied to compare the significance of the difference between genders, because the variables in one of the individual groups (the male group) were not normally distributed (Shapiro-Wilk test: p = 0.014). A one-way ANOVA test was used to compare the different age groups. Pearson's coefficient of correlation was used to measure the correlation between age and the value of the BA. The level of statistical significance was set at 0.05.
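For readers who wish to reproduce this pipeline outside SPSS, the following sketch runs the same sequence of tests with scipy. It is only an illustration under stated assumptions: the data are synthetic draws generated from the group means and standard deviations reported below, not the study's raw measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in data based on the reported group summaries.
male = rng.normal(35.3, 3.9, 111)
female = rng.normal(32.8, 4.1, 114)
ba = np.concatenate([male, female])
age = rng.integers(15, 75, ba.size)  # hypothetical ages, 15-74

print(stats.shapiro(male))               # within-group normality check
print(stats.mannwhitneyu(male, female))  # non-parametric gender comparison
decades = [ba[(age >= lo) & (age < lo + 10)] for lo in range(15, 75, 10)]
print(stats.f_oneway(*decades))          # one-way ANOVA across six age groups
print(stats.pearsonr(age, ba))           # correlation of BA with age
```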
Results
This study included 225 participants of both genders, who were classified into different age groups. The mean Böhler's angle in the total observed population was 34.06° and ranged from 25.1° to 49.5°. The mean of this angle in the males included in our study was 35.3° ± 3.9° (ranging from 27.7° to 49.5°), while its mean value in the female participants was 32.8° ± 4.1° (ranging from 25.1° to 43.5°). The gender difference was statistically significant (U = 4174.5; p < 0.05) (Table 1). Analysis of the angle distribution showed that the highest frequency (41.33%) was in the range of angles between 30° and 34.9°, as expected and in accordance with the results of other studies. The lowest frequency (1.3%) was in the range between 45° and 49.5°.
The values of the BA were compared among the different age groups. As expected, the highest mean BA was found in the youngest group (15–24 years), due to the anatomical characteristics of the calcaneus in the pre-adolescent ages (mean: 39.8° ± 4.9°). The distribution of the mean BA across the age groups showed a tendency to decrease with age: the lowest values were found in the group between 65 and 74 years, and the highest in the youngest age group. There was a statistically significant difference among some of the age groups regarding the mean BA (ANOVA; p < 0.05) (Table 2). The correlation between the angle and age was significant and negative (Pearson correlation; r = −0.581; p < 0.01).
There were several earlier studies of the normal BA in different populations, and the mean values and ranges of BA are given in Table 3.
Discussion
The assessment of the BA is of great importance in determining the indications for operative or non-operative treatment of the fractured calcaneus 15 and for surgical restoration with minimal anatomical and functional reduction 7. According to the recommendation of the AO Foundation, conservative treatment is indicated for non-dislocated calcaneal fractures with preserved values of BA 15. Preoperative BA correlates significantly with the seriousness of the injury, and its postoperative value has a significant role in the prediction of the functional recovery of the patient and the need for further surgery (e.g., subtalar fusion) 16. It is an important prognostic factor for the outcomes of calcaneal fractures regardless of the treatment modality 17.
This angle, known as the Böhler, calcaneal, tuber joint or salient angle, is also important for anthropometry and varies among different populations 18. Although some studies did not show any difference, the majority of evaluations of normal BA showed gender, racial, territorial or age differences in its values. This angle is usually measured using lateral and axial radiographs. The variations of its normal values drew our attention to the assessment of BA in the Serbian population, since a similar study had not yet been performed and this research may contribute to a better knowledge of foot anatomy in this population. Taking this into consideration, the findings of our study may be important in the diagnosis and reconstructive surgery of calcaneal fractures and in anatomical and anthropometric studies.
Chen et al. 19 conducted a study in the population of North Carolina 18. There was no statistically significant difference between males and females, and the BA was not related to the side of the body. The reported mean BA and the lowest value of BA were lower than those observed in this research.
A radiological study conducted in the Nigerian population did not show significant gender dimorphism of BA 18. The mean BA value of the total population was lower than in our study.
The calcaneal angle in Ugandans was significantly sex dimorphic, which is similar to our findings. The authors also emphasized a statistically significant difference between the African populations of Nigeria and Uganda. There were no data about age variations 20.
The difference between the reported values of BA in Malawian males and females was not statistically significant. The BA of the majority of examined Malawian subjects was in the 30–34° class. A statistically significant difference was found between Malawians and Nigerians and between Malawians and the Ugandan population, but not between Malawians and Caucasians 21. In agreement with that study, the majority of subjects from our study were in the 30–35° class, but the lowest value of BA was significantly higher.
The mean value of Böhler's angle in the Saudi population was not significantly related to age, gender or side of the body. The highest mean value of this angle was in the 15–20 years age group (33.1°), and the lowest one was in the 21–30 age group (29.2°) 22. In comparison to this population, Serbian subjects had a higher mean and minimal BA, and the highest BA was in the youngest age group in both studies.
In the Turkish population, the highest mean value was in the 41–50 years age group (35.2°), and the lowest was in the group of 61–83 years (32.3°). There were no statistically significant gender differences, and no significance was found either among age groups or between the right and left foot. Compared to the earlier studies, there was a significant difference among the Turkish, Nigerian and Saudi populations. The value of the mean BA in the Turkish population is in agreement with our findings, and the difference was not statistically significant. Opposite to the findings of Seyahi et al. 23, a statistically significant difference was found between the mean BA of males and females, as well as among age groups, in our study. According to our results, the lowest mean BA was in the age group between 65 and 75 years, which is in agreement with the findings in the Turkish population.
In the study conducted in Egyptians, it was concluded that the values of BA were reduced with aging. The sex dimorphism of BA was not statistically significant. The side of the body, occupation, residence and body mass index were not significantly related to the value of the BA 24.
A lower mean BA was also reported in the Indian population. There was no report on gender or age variations 5.
A lower mean BA was also found in the study conducted in the Sydney Hospital among patients with and without calcaneal fracture. Böhler's angle in the group of patients with fracture of the calcaneus was significantly reduced 7.
Böhler's angle in the British population was higher than in Serbians. Opposite to our findings, there were neither significant differences between the angles in males and females nor between the left and the right foot. Age was not a significant parameter for the value of the calcaneal angle 8.
According to Schepers et al. 25, the mean BA of the uninjured foot in the population of the Netherlands was significantly higher than in the injured group. The mean BA in our study was higher, and the lowest value of the BA, important for fracture diagnosis, was equal.
The results obtained from the study conducted in the New Zealand adult population were significantly different in comparison with our findings. That study also included children between 0 and 14 years, and their mean BA was lower than in adults 26.
Previous studies showed that the BA in children is lower than in adults, but this is not of general importance. This angle increases rapidly with age until adolescence. It has its highest values at the age of six or seven, because of the rapid growth of the posterior articular facet of the calcaneus and its disproportion in relation to the calcaneal tuberosity 14. The highest BA in our study was indeed found in the youngest age group.
The results of this study revealed sex dimorphism of the BA in the examined population, with a higher mean value in males. This was in agreement with the findings in the Ugandan population 20, although the male Ugandan subjects had a lower BA than the female ones (opposite to our results). The other studies did not find statistically significant gender differences, although the mean BA was higher in males 8,18,19,21–24. The mean values of BA were also significantly different between the age groups in the Serbian population, with a negative correlation between the BA and age. This was also found in earlier studies in the Egyptian and Ugandan populations 20,24. Results reported in other studies showed the same tendency, but the difference was not significant.
Considering the interpopulation differences, the mean BA in Serbians as well as the range of this angle, our results were most consistent with the values reported for the Turkish population. In the observed population, the range of the BA value was from 25.1° to 49.5°.
The clinically important lowest value of the BA obtained in this study was similar to the minimal BA in the British, Dutch and Egyptian populations, and the minimal BA was notably lower in the USA, Malawian and Indian populations.
A limitation of this study may be the fact that measurements were not done on both feet of all the observed subjects. The reason is that the results of earlier studies, as well as our small-sample test, showed that the difference was not statistically significant.
Conclusion
The findings presented in this paper confirmed the existence of a wide range of BA values as well as its gender and age differences. These findings about Böhler's angle in the Serbian population are important for the diagnostics and reconstructive surgery of calcaneal fractures. Besides, the results obtained in this study are important for anthropometric studies and forensic medicine.
Table 3 The comparison of the mean calcaneal angles to the previous studies
Institutional and Governing Organization of the Municipality of Shkodra during the First Half of the XIV Century (According to "The Statutes of Shkodra")
This paper is dedicated to the political and institutional organization, as well as to the election of the governing bodies, of the municipality of Shkodra during the first decades of the XIV century, observed mainly through the legal provisions of the city's medieval juridical book, The Statutes of Shkodra. Modeled mainly after the configuration of the Dalmatian towns along the Adriatic, and under the partial influence of the Slavic-Byzantine world, the Statutes of Shkodra constitute a source of irreplaceable value for the "de visu" knowledge of the governing and institutional life of the city of Shkodra, the capital of northern Albania from ancient and medieval times until today. The governing bodies of the municipality of Shkodra in the first decades of the fourteenth century have similarities with the medieval municipal organization models of Italian-Dalmatian and Western European cities.
Background
By the end of the second decade of the thirteenth century, the city of Shkodra was placed under the Serbian Nemanja dynasty, a rule which lasted until the dissolution of the Serbian Empire after Stefan Dušan's death in 1355 (Jireček, 2004, p. 115; Cabanes, Chaline, Doumerci, Ducellier, Sivignon, 2005, p. 179; Xhufi, 2006, p. 288). Despite being under Serbian rule for more than a century, it should be noted that during the first half of the fourteenth century Shkodra developed as an important center and autonomous civic municipality, with a legislation, economy and institutions designed after the model of the most advanced Italian-Dalmatian cities along the eastern Adriatic.
The highest expression of the municipal organization of Shkodra at the beginning of the late Middle Ages were the statutes, which summed up the normative acts that regulated in their entirety the organization and functioning of the city, as well as the relationships between the citizens, between them and the state, and between the city itself and its surrounding environment. The existence of this document is mentioned in a 1907 Italian bibliography. The document is written on parchment and contains 40 pages. The Statutes of Shkodra are fully preserved; a copy was discovered recently in the Albanica fund of the "Correr Museum" in Venice by the Italian researcher Lucia Nadin, and was translated into Albanian in 2002 by the famous Albanian medievalist Pellumb Xhufi (Nadin & Za, 1997, p. 41–45; Statuti di Scutari, 2002). They consist of 279 chapters written in the Venetian language of the fifteenth century and, as for the date of their design, "the right year as the terminus ante quem would be 1346", according to the Swiss scholar Oliver Jens Schmitt (Historia, 2002, p. 266; Schmitt, 2007, p. 112). After this year, the juridical book of Shkodra was enriched with some amendments as well: six annexes in the years 1391–1393 and five provisions of the Venetian period, 1457–1469 (Statuti, 2002, p. 231–235).
On the basis of the Statutes of Shkodra, the institutional development of the city can be envisaged, and for the first time civic life unfolds in almost all its diverse fullness. Until now it was not possible to establish a clear idea of the functional structures of a medieval Albanian city, other than through extrapolations from the civil rights of Budva (Statuta, 1882–3). To resolve this issue, the Shkodra statutes serve as a precise guide.
As the founding legal document of the administration and institutional organization of the city, the statutes had to exist in two copies: one to be kept in the treasury of the municipality, while the other was to be kept at the court in order to be used at trial (Statuti, 2002, p. 175). All people, whether residents of the municipality or foreigners coming to town, were obliged to obey the provisions of this statute (Statuti, 2002, p. 176).
Governmental Institutions
The sovereign rights over the city of Shkodra in this period belonged to the Serbian king, Stefan Dušan (1331–1355) (Statuti, 2002, p. 175, 181, 186, 193, 194, 196, 222). His royal authority was represented by the Earl, an office attested not only during the kingdom and the Serbian Empire, but also during the reign of the Balshaj in northern Albania, which sheds light on the continuity of administration in the state political formations that followed the medieval Serbian Empire (Statuti, 2002, p. 231–232).
The Earl was a key political and institutional figure whose moral and physical integrity was protected by law (Statuti, 2002, p. 223, 228). As the representative of the ruler of the country, the Earl would often keep half of a fine imposed by the municipality on lawbreakers, while the other half would go to the municipality or to the damaged person (Statuti, 2002); only in a single exceptional case would he keep a quarter of the fine (Statuti, 2002, p. 217–218). Apart from being the beneficiary of these payments, the Earl hardly appears elsewhere in the statutes. He was simply limited to representing Serbian rule; for the rest, he had to respect the self-administration of the municipality.
Another high governing body was the popular assembly, held under the supervision of the bishop and the most important nobles of the municipality. The popular assembly gathered in the main square of the city, with the participation of all adult male citizens, regardless of the social layer to which they belonged (Historia, 2005, p. 111). The most important powers of the popular assembly were the periodic election of the city's governing bodies, the approval or modification of the statute, which played the role of the basic law for city administration, and the resolution of particularly important issues which required the consent of all citizens.
In Shkodra, the meeting of citizens took place on St. Mark's day, April 25. On that day the bells of St. Stephen invited the people to gather in the square in front of the cathedral, where, in the presence of the bishop and the nobles of the city, the selection of three judges, eight councilors and two municipal accountants would be held (Statuti, 2002, p. 194). Each of those elected had to assume his functions within three days (Statuti, 2002, p. 194). Their mandate in the highest bodies of the municipality was one year (Statuti, 2002, p. 207).
The judges, councilors and financiers, together with the most distinguished citizens (boni homines), were members of a narrower assembly, the Municipal Council. The Municipal Council discussed matters pertaining to the loyalty and moral integrity of the citizens and municipal employees, as well as the election of another category of employees, such as the notaries, clerks of court (cancellarius) and bailiffs (semecio; otargo) (Statuti, 2002, p. 193, 197, 212–213). The latter were exempt from taxes and were protected by law in cases of physical violence against their person while in municipal service (Statuti, 2002, p. 186, 223–224).
The Panel of Judges together with the Council constituted the government of the city, with decisions approved by consensus and by majority vote (Statuti, 2002, p. 196). The approval of the king constituted the legal and political basis of Shkodra's civil constitution (Statuti, 2002, p. 175). He had granted the municipality, and specifically its judges, extended jurisdiction over the citizens, the Slavic residents and ethnic Arbërs of the surrounding provinces, as well as foreigners, with the exception of four court cases that were judged by the king himself: these had to do with adultery, murder, the right over handmaids, and the killing of horses (Statuti, 2002, p. 175). These were reserves of the old Serbian Crown (Schmitt, 2007, p. 115).
Across the pyramidal structure of the municipal government of Shkodra, the judges had very great importance, which appeared in their duties and powers to control many aspects of city management. They took part in the measurement of lands and supervised the construction regulations; they gave permission for the cutting of trees planted outside the rules; by their authority they could establish lower fines; they appointed the controller of measures, weights and lengths; they controlled the butchers who would not pay the custom of the slaughterhouse; they gave permission for the sale of meat; together with the Council they gave permission for the export of food and grain; together with the Council they commanded the guards of the city's public safety; each of the judges carried one of the keys of the municipal ark, where the seals and privileges of the city were kept; together with the Council they had to welcome with appropriate honors, and accommodate with the money of the municipality, the messengers or courtiers of the king; together with the Council they took part in the selection of the notary and the clerk of the court; they were the direct superiors of the clerk of the court; together with the Council, the judges oversaw the municipal financiers; they gave instructions to the enforcement officers; they controlled the people charged with the collection of taxes by the municipality; they forced debtors to pay the municipal arrears, and they also informed the defendants in court through the courier (Statuti, 2002, p. 177–204).
Regarding the professional activity of the judges in the city of Shkodra and the principles they had to follow in carrying out their judiciary functions, the statutes say: "While on duty, the judge has to offer his services to all those who submit their plights and to judge on the basis of the city statutes, honestly and accurately, without being seduced by friendships. During his service as a judge, he should not assume the role of advocate for anyone. His task is to hear both parties, to record through the secretary the explanations of each party, and to give a right decision according to the statutes and customs of the city" (Statuti, 2002). The duration of their duty as judges was one year, and insults or the use of violence against them were forbidden. They were paid on the basis of the court tax "sudebina" (Statuti, 2002, p. 197). Taking bribes or involvement in corruption cases would be severely punished and fatal to a judge's professional career (Statuti, 2002, p. 208). The same was true for notaries and other municipal officials.
In the course of their work and in the conduct of a fair trial, they were assisted by the court clerk. The latter was chosen by the judge and had to take the gospel oath of loyalty and institutional obedience to every order and decision (Statuti, 2002, p. 197). The main task of the court clerk was to take notes, in the most complete and accurate way, of the evidence of both parties, as well as of the decision given by the judge. In a trial, besides the court clerk or chancellor, the semeci and otargu also had to be present, who "should take the oath and obey the orders of the judges during the day, the night or any hour, as well as faithfully perform the municipal services without any trick. They should inform every man who is called to trial and stand there together with the judges and hear what is said while they are doing their job" (Statuti, 2002, p. 197–198). They also performed the function of bailiffs, seizing the properties or belongings of those who lost the trial; they made the announcements of the sale of properties of different citizens, and they dealt with the sale and pricing of items that were left as pledges (Statuti, 2002, p. 197, 212, 213).
On Sundays and religious holidays, judges were explicitly forbidden to conduct litigation between two citizens of the municipality, between two foreigners, or between a citizen and a foreigner. The suspension of court activity during important religious holidays and the feast of the patron saint of the city was approved by royal authority (Statuti, 2002, p. 175).
The importance of a trial and the time of its conduct depended on the financial amount at issue. A panel would be formed only with the consent of both parties to the conflict, in the presence of two judges, and only when the amount at issue exceeded the sum of two hyperpers (Statuti, 2002, p. 199). A judge could be recused during a trial only in those cases where he had kinship or nepotistic relations with one of the parties in conflict (Statuti, 2002, p. 199).
The most important moments in the conduct of a trial were the presentation of evidence by both parties and the summoning of witnesses and guarantors, who had to swear on the Gospel that they would tell only the truth. Witnesses were summoned to court by the courier and the vatak, who were required to repeat this three times. If the witness did not appear even after the third time, or appeared but refused to testify, he was obliged to pay all the damages to the party which lost the trial. Also, if the witness, be it man or woman, was in a bad physical condition, "the judge along with the secretary would go to the patient's house and invite him to swear on the Gospel that he/she would speak only the truth" (Statuti, 2002, p. 204). Rather heavy fines were stipulated for false witnesses, as well as for the persons who recommended and presented them in court. A fine was also stipulated for those who wanted to participate in a trial and would make noise during its conduct (Statuti, 2002, p. 229).
The party which did not appear or refused to answer in court would lose the cause for which the lawsuit had been filed. According to the Statutes: "No one can be punished without hearing the testimony of two or three persons" (Statuti, 2002, p. 201). The judges were required to declare the decision of the court within fifteen days after the closing of the case.
However, if the judges faced a legal case that was not foreseen in the statutes of the city, they were not to come to a decision without the trusted support of three or four nobles of the city (Statuti, 2002, p. 230). Once the judges had made their decision in collaboration with the most prominent nobles of the municipality, the latter had to record it in the legal book of the city, so that if the same issue was presented at trial again, they could proceed normally on the basis of the statutes.
Also, in the Statutes of Shkodra there were cases where the judge had to try a lawsuit together with his bishop. That was the case when a clergyman sued a layman. The judge had no right to compel a clergyman summoned as a witness to swear before him; it was enough to swear before the religious authorities, even in the presence of the judges (Statuti, 2002, p. 206). The situation was similar when church properties were concerned.
From all the above, we can conclude that the judges of the city of Shkodra were not only the leaders of the legal system, but also held executive authority, with broad powers over the militia, the economy and the administration.
A close collaborator of the College of Judges in the municipal government was the Council (Statuti, 2002, p. 188–189; 193–194; 196–197). "The Council ("conseglio") of Shkodra, with only eight members ("conseglieri"), had a modest composition compared to the councils of the Dalmatian towns in the first half of the fourteenth century" (Schmitt, 2007, p. 121). It did not represent, as in Venice and in a number of Dalmatian municipalities, an institution which comprised only nobles as a politically and economically privileged closed group set against the non-nobles (Lane, 1991, p. 114, 131, 134–135; Diehl, 2004, p. 83–84).
In many activities, the members of the Council of Shkodra appeared as an executive authority together with the judges (Statuti, 2002, p. 188–189; 193–194; 196–197). Their numerical superiority in joint decisions privileged them to some extent, but in any case it is clearly evident in the Statutes that the corps of judges enjoyed a higher status and that the judges, for example in the militia and in representing the municipality abroad, had greater powers than the members of the Council. However, the judges rarely made any decision without the formal approval of the Council members.
Nowhere did the Council have powers of its own; it always worked together with the judges. Only three statutory provisions treated the Council extensively; these dealt with ways of proceeding, such as the obligation not to disclose secrets while exercising their duty, or protection against reprisals in response to a useful request of a member of the Council (Statuti, 2002, p. 196).
The third political body in Shkodra was represented by two finance managers, who recorded all income and expenditure of the municipality and had to report to the Council and the College of Judges every three months (Statuti, 2002, p. 197). The incomes, coming mainly from the collection of duties, the customs, the units of measurement and weight, fines, the confiscation of traitors' properties, the assets of citizens who had no heirs, etc., were accumulated and kept in the treasury of the municipality (Statuti, 2002, p. 175–234). People of the municipality were charged with the collection of fiscal obligations and had to work honestly and without subterfuge (Statuti, 2002, p. 198). For those citizens who did not repay their obligations on time or were indebted to the municipality, the coercive authority of the judges would be invoked. A part of the incomes was used by the municipality for its internal needs: for necessary adjustments and repairs of the castles and city walls, for occasionally sending ambassadors to other municipalities of the time, within and outside the country, to solve particular problems in their relations, to pay the salaries of notaries or other specialists engaged from outside for municipal needs, etc. (Malltezi, 1988, p. 34).
Besides the popular assembly, the nobles, the bishop, the judges, the council and the accountants, there is another power factor in Shkodra's civic life which should be considered: the notaries. The statutes were an expression of the written codification of civic and legal life. For almost all legal actions they required the issuance of a document by a notary. In this way the notary, who often also served as a school teacher, was closely linked to the economic, social and political life of the municipality (Schmitt, 2007, p. 139).
His special role and importance, as well as his responsibilities, are indicated in a provision of the Statutes of Shkodra, where he, as the only such officer, was to be elected by the people and the nobles of the city along with the judges, councilors and financiers (Statuti, 2002, p. 195).
The court clerk, who was elected by the Council and the judges and was accountable to the latter, depended on the notary. When the citizens of Shkodra presented their documents to the court, drew up testaments, asked to sell their properties, or had other civil matters of this nature, they definitely needed an authorization written and stamped by the notary's own hand (Statuti, 2002, p. 200, 202, 215, 230; Acta Albaniae I, 2002, doc. 744). In many cases notaries of foreign nationality were preferred, in order to increase the degree of their reliability (Historia, 2002, p. 265). For this purpose, notaries were often recruited from the ranks of the clergy.
In addition to the notary, a large number of other civic clerks also depended on the above political authorities, such as the semeci, the otarg, the courier, the vatak, the obligation collectors, the gastalds (civil servants) and the referees (Statuti, 2002, p. 197–198, 200, 204). The latter two dealt with issues of sales, exchange and donation, and with problems of a civil nature.
All officers had to serve the municipality with high dedication and fidelity. If they suffered any physical harm while on duty, such as injuries, characteristic of the medieval society and environment, all the costs of their treatment would be paid by the municipality (Statuti, 2002, p. 225).
Besides the judge and the notary, a rather significant place in the ranks of the judiciary was occupied by the figure of the lawyer (Statuti, 2002, p. 198, 200, 202, 207). Unlike the other municipal officials, the lawyer's mandate lasted more than a year: "No other official can stay on task more than a year, with the exception of the attorney..." (Statuti, 2002, p. 207). This is probably due to the fact that he was not part of the executive bodies of the city government.
Also in the municipal service, for maintaining and ensuring public order and the safety of the city, were the guards, who were tasked to stay on duty all night until the morning (Statuti, 2002, p. 192). The vigilance of the guards of the Shkodra municipality was also mentioned in the letters of the Ragusa Senate (Acta Albaniae I, 2002, doc. 676).
They were commanded by captains, who had to stay on alert and under the orders of the Council and the College of Judges day and night (Statuti, 2002, p. 193). If the captains of the city took action on their own or without an order from the judges, they would be removed from office and punished with a monetary penalty (Statuti, 2002, p. 193). This careful and constant protection of the gates and walls of the municipality, day and night, through compulsory guard service, averted external risk and preserved the peace within the city from criminals, drunkards, thieves, vagrants, murderers and hooligans.
Meanwhile, a significant indicator testifying to the force of law and of the political and executive bodies of the municipality was the presence of a correctional or penitentiary institution, the prison. In Shkodra, this institution is evidenced not only at the time the statutes were drafted, but also at the beginning of the second half of the fifteenth century. This topic was treated in an additional provision of the statutes, dated 25 April 1461, a period when the city of Shkodra was under Venetian rule (Statuti, 2002, p. 235).
In addition to the guards' garrison and the prison as institutions of obligation and compulsion, in cases of attack or aggression from outside, or even as a result of internal conflicts of serious proportions, the municipality had its own army, which would be mobilized for war only by the king's order (Acta Albaniae II, 2002, doc. 62; Statuti, 2002, p. 193). Its soldiers were salaried and led by military captains, who were paid more than double their subordinates. Desertion from the ranks of the army was sanctioned with a fine of high monetary value (Statuti, 2002, p. 193).
With its own police and military forces, the municipality tried to protect its jurisdiction, which included a number of surrounding villages, from the continuous danger posed by neighboring municipalities.
Besides its internal political and executive governing bodies, the municipality also had its own structures for external relations. This instance was represented by the figure of the ambassador, who had to obey the orders of the municipality and could not refuse to perform a diplomatic mission abroad (Statuti, 2002, p. 194). Ambassadors represented their sovereign and spoke on his behalf. Any insult to them was directed against the sovereign; any honor to the ambassadors was addressed to their sovereign.
Parallel to the municipal governing institutions were the symbols of civic sovereignty, including the coins, the seals, and the units of measurement and weight. The minting of coins was one of the most important symbols of civic sovereignty. The coins bore the name of the city, an important fact showing the high stage of development of this autonomous municipality in the first half of the fourteenth century. The coins were usually made of bronze and bore on one side the patron saint of the city and on the other the name of the municipality.
The issuance of coins in Shkodra during the first half of the fourteenth century took place during the short reign of the Serbian king Constantine (1321–1322), the son of King Stefan Uroš II Milutin (Malltezi, 1988, p. 77; Jireček, 2004, p. 118).
The seal of the city was guarded with great care as the emblem of municipal autonomy (Schmitt, 2007, p. 147). In the city of Shkodra, severe sanctions were provided against its falsification (Statuti, 2002, p. 226). It was to be kept, together with the evidence of privileges, in the municipal coffers, which had three keys, one held by each of the judges (Statuti, 2002, p. 194). A document stamped with the seal of the municipality had the value of a notarial act (Statuti, 2002, p. 195).
In addition to the municipal seal, the Statutes of Shkodra also recognized the legal authority of the summoning seal of the Serbian king, by which citizens were summoned to court, as well as the episcopal seal (Statuti, 2002, p. 175, 207). Seizure of the civic seal was a serious violation of municipal autonomy.
Other emblems of sovereignty were the units of measurement and weight, especially for agricultural and livestock products (Statuti, 2002, p. 188). The controllers had to hold a mandate from the municipality and be as honest as possible. For breaches of the standards of the measurement and weight units by traders or other persons, the provisions of the statute prescribed a monetary fine. In this way, they also represented a source of income for the municipality of Shkodra.
As for the official language used by the institutions and administration of the city of Shkodra during the first half of the fourteenth century, we can say that it was the Shkodra Dalmatian influenced by medieval Venetian, because the legal book of this civic municipality is itself written in this language. However, administrative acts of the civic municipality of Shkodra must also have existed in Latin and Serbian. In support of this view, the Czech albanologist of the early twentieth century, Konstantin Jireček, noted: "In the city secretariat the acts were written only in Latin. In 1330 a certain 'Clemens filius Gini, notarius communis Scutari' was mentioned as the scribe for documents in the Serbian language" (Jireček, 2004, p. 118).
The governing bodies of the city of Shkodra in the first decades of the fourteenth century, described above, indicate good organization, good functioning and harmony between the institutional links of the municipality, as a territorial and political unit with the right of self-government and self-administration. The overall picture of these institutions broadly resembled the medieval municipal organization models of the Italian-Dalmatian cities and those of Western Europe. All this shows that the city of Shkodra, located between East and West, was seeking to integrate and to keep pace with the rhythm of the time, which in that period was oriented towards the northern part of the western Adriatic.
Conclusion
The Statutes of Shkodra of the early decades of the 14th century yield important data on the development and institutional organization of the urban commune of Shkodra in the Middle Ages. The statutes existed in two copies, and all local inhabitants, as well as foreigners entering the city, were obliged to act upon them.
The sovereign rights over the city of Shkodra during this period belonged to the Serbian monarchy, whose authority was represented by the Earl, an important figure who respected the communal self-administration. Another important governing body of the commune of Shkodra was the citizens' assembly, which was summoned on the day of Saint Mark, the 25th of April, in the presence of the bishop and the local nobility, and during which three judges, eight councilors and the two financiers of the city were selected. All of them formed the Commune's Council, and the mandate for the governing of the city lasted one year. The College of Judges and the Commune's Council comprised the government of the city, where all decisions were approved and implemented by consensus and a majority of votes. In the whole pyramidal structure of the communal government of Shkodra, the judges had an important role. Parallel to the governing institutions of the commune were the symbols of civic sovereignty, among them the coin, the seal, and the units of measurement and weight. All these governing institutions testify that the city of Shkodra, situated between East and West, was aiming to integrate and to follow the rhythm of the time, which during this period was oriented toward the northern part of the western Adriatic.
"year": 2014,
"sha1": "f3b0e1393011ecb03175ce121166a3531d343a46",
"oa_license": "CCBYNC",
"oa_url": "https://www.richtmann.org/journal/index.php/mjss/article/download/1891/1890",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f3b0e1393011ecb03175ce121166a3531d343a46",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Investigating the Impact of Financial Inclusion Drivers, Financial Literacy and Financial Initiatives in Fostering Sustainable Growth in North India
The present study examines how successful we are in achieving financial inclusiveness by investigating the influence of the drivers of financial inclusion (FI), financial literacy, and financial initiatives on sustainable growth. The drivers of FI considered are digitalization, technology, and usage. This study proceeds with a difference and investigates the impact of the drivers on sustainable growth through the mediation of financial literacy. The basic purpose is to understand whether mediation assists in enhancing the impact of the drivers of FI on sustainable growth. Sustainable growth is measured by capturing customers' perceptions regarding the success of FI through the achievement of the SDGs, viz., SDGs 1, 3, 5, 8, 9, 10, 11
Introduction
Across the globe, there is an increased emphasis on FI, especially in emerging economies, with the motive to enhance economic growth and decrease poverty [1]. However, widespread disparities exist worldwide with regard to access to financial services [2]. Many researchers have also highlighted how financial exclusion can hinder people from leading a normal life. According to Carbo, Gardener and Molyneux [3], financial access has a robust causal association with social exclusion. Claessens [4] backed this viewpoint on social exclusion. Further, Basu and Srivastava [5] found that 70% of rural marginal farmers lacked access to bank accounts and 87% lacked access to loans. This prevails despite researchers' consensus that financial inclusiveness is a basic pillar of sustainable growth. To tackle the disparity in the reach of financial services to weaker sections and unbanked areas and sectors, many countries are focusing on microfinance agencies [6]. Owing to deficient infrastructure and poor economic conditions, the rural poor in developing economies end up having lower access to financial services [7]. Bhanot et al. [8] highlighted region-wise disparity and pointed to the low level of FI in the northeast region of India. They pointed to the vital role that could be played by self-help groups (SHGs) and education in improving inclusion. As suggested by Gwalani and Parkhi [9], due to the prevailing diversity and diversification, India needs a more innovative and developed model for growth. Sharma [10] indicated bank branch penetration, availability, and the affordability of financial/banking services as the main dimensions of FI. Liu and Walheer [11] stress the importance of catching-up effects for countries with lower levels of FI. The authors also claim that governments have improved the climate for FI in the majority of countries. Despite this, the magnitude is relatively small; hence, more efforts are needed. Accordingly, the sustainable development goals (SDGs) have been introduced to achieve financial inclusiveness and sustainable growth in society.
According to a number of studies, poverty and a lack of knowledge about financial services are the major barriers to access to formal financial services. Financial literacy is the possession of knowledge of fundamental financial concepts needed to manage financial resources [12,13]. Financial literacy assists in the acquisition of skills essential for financial efficiency. However, it is financial knowledge, along with financial competencies, which helps to provide not only the "ability to act" but also an "opportunity to act" (Huang et al. [14]). There is a need to examine how financial literacy can be related to achieving FI and sustainable growth. Many financial initiatives and policy change programs have been undertaken in India to enhance FI and the economy's growth. In 2014, the government of India commenced the Pradhan Mantri Jan Dhan Yojana (PMJDY) for attaining effective FI. As indicated by Poonam and Chaudhry [15], the attainment of FI has improved in many states. Despite this, a large part of the country's populace is still excluded from the formal financial system [16]. Thus, in view of this, it is important to gauge the perception of bank customers in order to analyze how they relate the success of these initiatives and policies to sustainable growth.
Thus, how FI is linked with sustainable growth is a crucial question demanding the attention of researchers, and our research attempts to answer it. A few researchers investigating the relationship have suggested a strong association between financial development and economic growth [17]. Researchers such as Klapper et al. [18] indicate that FI enhances accessibility to credit, encourages investment facilitation along with the entry of new firms, and thus improves economic growth. In the long run, FI can generate employment opportunities and ensure economic and financial stability [19]. Wang and Guan [20] highlighted the need for a sound financial system and considered financial literacy and communication technology as important determinants of FI. Greater FI may help to promote inclusive and sustainable economic development, which may result in poverty alleviation along with the economic and social growth of the economy [21].
The current FI argument is based on the belief that inclusive financial institutions help people escape poverty by stimulating economic development in their societies [22]. Therefore, to overcome the issue of poverty, the Indian government, with the support of the Reserve Bank of India, prepared the National Financial Inclusion Strategy (NFIS). The Pradhan Mantri Jan Dhan Yojana (PMJDY) plan was also launched in 2014 to empower under-banked/unbanked people [23]. The United Nations sustainable development goals (SDGs) indicate FI as a crucial facilitator of sustainable growth. The United Nations SDGs policy has 17 significant objectives. SDGs 1, 2, 5, 8, and 9 are directly related to FI. SDG 1 stresses that the more inclusive a country's financial institutions are, the more capable its poorer sections will be of achieving their economic aspirations, such as establishing new enterprises and improving their children's non-cognitive and cognitive development [24]. SDG 2 indicates that financially included farmers can make more investments to achieve higher yields and better food security. FI assists in providing them with insurance to defend their assets from external shocks. SDG 5 covers gender equality, and it is also entwined with FI, as FI will result in women's socio-economic development. This will reduce their risk of exploitation in the informal sector and enable them to engage in productive economic activities. With financial constraints and the inability to provide collateral, women often cannot procure loans [25]; FI will assist with potential financial development possibilities [26]. This will improve household well-being and enable women to invest in the health and education of their children, too [27].
SDG 8 promotes long-term, inclusive, and sustainable economic development; full and productive employment; and decent work for all people, regardless of their background. Therefore, formal financial institutions around the world are taking many significant steps to provide full finance to the needy, small entrepreneurs, and the unbanked. Microfinance institutions (MFIs) have been set up and supported by many development agencies all over the world so that unbanked customers can get financial help [28]. MFIs have contributed significantly to the development of a self-sustaining financial system for the poor, increased entrepreneurial talent [29], and fostered socio-economic development [30–35]. SDG 8 focuses on fostering sustainable economic growth and full and productive employment, and SDG 9 focuses on supporting innovation and sustainable industrialization.
The fundamental purpose of the current research is to examine the prevailing research on FI and sustainable growth and suggest answers to the following research questions: RQ1: What are the significant themes of research in this domain? RQ2: Which drivers have greater influence in achieving financial inclusiveness? RQ3: How can the drivers of FI, with the mediation of financial literacy, influence sustainable growth? RQ4: How are financial initiatives related to sustainable growth?
To find answers to these pertinent questions, the present research was undertaken. Using a survey technique with inputs from customers using bank services, the study examines the major drivers of FI. It attempts to understand how the drivers of FI, through the mediation of financial literacy (FL), influence sustainable growth. It also attempts to investigate the direct impact of financial initiatives on sustainable growth. The study uses a Partial Least Squares-Structural Equation Modeling (PLS-SEM) technique to relate the drivers of FI, FL, and financial initiatives with sustainable growth measured through the achievement of the SDGs.
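To make the structural logic of this model concrete, the sketch below approximates the hypothesized paths with ordinary least squares on composite construct scores. It is only an illustrative approximation, not the PLS-SEM estimation used in the study: the respondent data are synthetic, the construct names are placeholders, and a proper analysis would use dedicated PLS-SEM software.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400  # hypothetical number of survey respondents
# Composite scores (e.g., item means) for each construct -- synthetic here.
usage, digital, fintech = rng.normal(size=(3, n))
fin_literacy = 0.3 * usage + 0.3 * digital + 0.2 * fintech + rng.normal(scale=0.8, size=n)
initiatives = rng.normal(size=n)
growth = 0.4 * fin_literacy + 0.25 * initiatives + rng.normal(scale=0.8, size=n)

# Structural paths estimated equation by equation.
drivers = sm.add_constant(np.column_stack([usage, digital, fintech]))
m1 = sm.OLS(fin_literacy, drivers).fit()  # drivers -> financial literacy
m2 = sm.OLS(growth, sm.add_constant(np.column_stack([fin_literacy, initiatives]))).fit()  # FL, initiatives -> growth
print(m1.params)
print(m2.params)
```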
The related research objectives are: O1: To identify the impact of the drivers of FI on sustainable growth. O2: To analyze financial literacy's mediation effect between the FI drivers and sustainable growth. O3: To investigate the impact of financial initiatives on sustainable growth. O4: To design a model relating the drivers of FI, financial literacy, and financial initiatives with sustainable growth.
Section 1 introduces the concepts of FI, financial literacy, and financial initiatives in relation to sustainable growth and, based on the need for the study, raises the research questions. Section 2 examines FI from the perspective of its drivers, such as technology, usage, and digitalization; this section also reviews the financial initiatives covering financial programs and policy. Section 3 highlights the research design and methods used to achieve the objectives. Section 4 presents the measurement and structural models; two control variables were used, and the results are reflected through two models, the second of which includes the control variables. The study designs a PLS-SEM model to examine the impact of the FI drivers, through financial literacy, and of financial initiatives on sustainable growth. Section 5 covers the discussion and conclusions, reporting the new findings and a comparison with research in a similar area. The last section suggests the implications, limitations, and areas for future research.
Review of Literature and Hypothesis Development
The study covers a comprehensive review to lay the foundation for the conceptual model. The review of the literature in the current research has been classified under the following headings.
Drivers of FI
Usage
Swamy [36] applauded the FI efforts of India's government, especially from 1991 to 2005, to make banks reach out to rural areas. Bassant [37] highlighted that, in order to achieve growth with equity, commercial banks must opt for cost-effective technology, such as zero-balance bank accounts, points of sale, mobile banking, and ATMs. Subsequently, Camara and Tuesta [38] covered three dimensions of FI: usage, access, and barriers. Usage covers having a financial product, a savings account, and a loan. Access covers the approachability of ATMs, the number of bank branches, and financial products and services. Barriers include affordability, documentation, distance, and trust. Gine and Townsend [39] revealed a positive linkage between economic development and geographic outreach. Beck et al. [40] considered outreach through the access and usage dimensions, and they concluded that usage plays the most prominent role: it enables customers to facilitate payments through a debit card and a savings account, and it allows for asset purchasing, owning a home, educating children, and maintaining reserves for retirement. Allen et al. [41] consider FI through the usage of formal deposit accounts. A stream of thought has focused on the usage of and access to formal financial services [42–44]. In light of these, it is pertinent to consider usage in the present study. Therefore, the related hypothesis is: H1a. The usage indicator is positively associated with financial inclusion.
Digitalization
The introduction of information and communication technologies (ICTs) and m-banking has given a new face to digitalization [45,46]. Similarly, Demombynes and Thegeya [47] concluded that m-banking with the latest financial services helped transform the lives of the Kenyan population. Many countries have initiated digitalization through ICTs to provide fast, cheap, and accessible financial services. There are many examples of countries using ICT as a medium for mobile money: CELPAY in Zambia, M-PESA in Kenya, and WIZZIT in South Africa. In India, we have the facility of cash transfer through Aadhaar (UIDAI) and the Unified Payments Interface (UPI). Thus, it is evident that digitalization is an essential driver of FI. GPFI [48] reported that digitalization encourages users to access digital services and financial products efficiently. The ease of access through digitalization will remove the barriers to FI. Ghosh [49] has reaffirmed that the Aadhaar biometric identification system, with its linkage to bank accounts and other financial services, has a positive influence on FI.
Similarly, Onaolapo [50] suggested that FI can be delivered smoothly in a country through information and communication technology (ICT). Thus, the literature indicates that digitalization plays an essential role in establishing a financial network in society. Financial technology, including digital payments and mobile money accounts, has helped boost FI [51,52]. Therefore, we have taken digitalization as one of the drivers of FI. Hence, we hypothesize: H1b. Digitalization is positively associated with financial inclusion.
FinTech
Financial technology (FinTech) is the new technology used to improve and automate the delivery and use of financial services. The first wave of FinTech ushered in innovation across all phases of the customer life cycle; however, its reach was limited to the affluent sections of society. Thus, it becomes evident that without considering FinTech as a driver of FI, the research would not be complete. Point-of-sale devices and networks enable communication between the post office agent, the retail agent, and the financial service provider. FinTech, along with fund transfer and the payment of bills, also facilitates online trading and mutual fund investment [53]. Though massive efforts are being made to push digital payments, the picture is rather gloomy, as only 2% of merchants have enabled point-of-sale-based cashless payments [54]. Thus, as technology changes very fast, we sought to understand, from the customer's perspective, how relevant FinTech was in inducing a change in FI. As the target population included a rural segment too, it was pertinent to include their opinion and draw a unified perception of urban and rural customers.
Moreover, the focus of FinTech is changing from facilitating e-payments or transactions to building a relationship. Based on these views, in the current study FinTech was considered a separate driver of FI, and digitalization was taken to capture customers' perceptions regarding digital financial services. Kass-Hanna et al. [55] suggest that national FI strategies continue to lean toward digital finance with the FinTech movement.
Therefore, we hypothesize that: H1c. FinTech is positively associated with financial inclusion.
Thus, the first hypothesis is: H1. Usage, digitalization, and FinTech are positively associated with FI.
After reviewing the drivers of financial inclusion, the following section deals with financial literacy.
Review of Financial Literacy
Financial literacy (FL) enables financial planning and also assists in making effective financial decisions [56]. In the view of Lusardi and Mitchell [57], financially sound people were more effective in financial planning and debt management. Lusardi et al. [12] opined that financially literate individuals have better knowledge of how to generate, spend, invest, and save money. Similarly, Grohmann et al. [58] related the expansion of bank branches in rural and urban areas to improved financial literacy and enhanced FI. Researchers across the globe believe that FI can be achieved through financial competencies by improving financial literacy. However, Atkinson and Messy [59] considered a low level of financial skill and knowledge as the major reason for lower levels of FI in any economy. They recommended that policymakers induce banks and financial institutions to conduct training programs to improve the FI level. Ramakrishna and Trivedi [60] recognized that technology positively influences FI; this was also echoed by Rastogi and Ragabiruntha [61]. Innovation and technology, through literacy, can intensify FI, because they can circumvent prevailing structural and infrastructural challenges and directly reach the needy [62]. This is the reason we have taken financial literacy as a mediating variable. Okello et al. [63] have also used financial literacy as a mediator, between social networks and FI. Both the direct and indirect effects of FL on FI emerge as significant, which indicates the important role played by FL in FI. Taking this as a pointer for future research, we examine in this study the mediation effect of FL between the drivers and sustainable growth. The drivers of FI, with the mediation of FL, should lead to sustainable growth. Hence, we hypothesize that: H2. Financial literacy mediates between the drivers of financial inclusion and sustainable growth.
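One common way to test a mediation hypothesis of this kind is a percentile bootstrap of the indirect effect (the product of the driver-to-mediator and mediator-to-outcome paths). The sketch below is a minimal, hypothetical illustration on synthetic composite scores; the variable names and effect sizes are assumptions, not the study's estimates.

```python
import numpy as np

def boot_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b of x on y through m."""
    rng = np.random.default_rng(seed)
    n, est = len(x), []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)
        xs, ms, ys = x[i], m[i], y[i]
        a = np.polyfit(xs, ms, 1)[0]  # path a: driver -> mediator
        # Path b: mediator -> outcome, controlling for the driver.
        b = np.linalg.lstsq(np.column_stack([ms, xs, np.ones(n)]), ys, rcond=None)[0][0]
        est.append(a * b)
    return np.percentile(est, [2.5, 97.5])

rng = np.random.default_rng(3)
driver = rng.normal(size=300)                    # e.g., a digitalization score
literacy = 0.5 * driver + rng.normal(size=300)   # mediator
growth = 0.4 * literacy + rng.normal(size=300)   # outcome
print(boot_indirect(driver, literacy, growth, 2000))  # CI excluding 0 -> mediation supported
```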
Next, the research examines the relation of financial initiatives to sustainable growth.
Financial Initiatives
Financial initiatives may play a critical part in the development of FI by making financial services accessible to all people in the nation. In this study, financial schemes and policies have been examined as means of providing an enabling, financially sound environment. The literature related to financial policy and financial schemes is presented in this section.
Financial Schemes
Many national and international institutions are leading major policy initiatives and schemes to bridge the FI gap. Around 35 countries have adopted a National Financial Inclusion Strategy (NFIS) to accelerate sustainable growth. Some countries have modified and restructured their NFIS [64]. In India, major steps have been initiated by the RBI under the Basel-III norms. Along with increased regulation and supervision of financial institutions, there is a need for the expansion of bank branches in unbanked/rural areas. Policy changes are being introduced for safer banking, risk management, and for accelerating liquidity [65].
Moreover, Italy is an example where poverty levels have been reduced through various schemes [66]. Other schemes related to easy access to financial services and zero-balance savings accounts, offered by the Nepal government to female heads of households, led to around 84% of these women opening bank accounts [67]. Similarly, the Indian government has initiated several programs like Pradhan Mantri Jan Dhan Yojna. Therefore, we have analyzed whether the financial initiatives taken in India have been helpful in achieving FI and sustainable growth.
Kaboski and Townsend [68] indicated that the Thai government took the initiative to provide micro-credit loans to rural areas by introducing the "Village Fund Program". The Reserve Bank of India, in 2006, permitted banks to use intermediaries as business facilitators (BFs) or business correspondents (BCs) for delivering financial/banking services. Joshi [69] has highlighted the significant role played by financial intermediaries in FI. As indicated by Dugyala [70], reinforcing the initiatives of financial intermediaries such as microfinance institutions and banks is needed. In 2015, the RBI initiated the Chiller bank program to encourage children to open and operate savings bank accounts independently.
Financial Policy
FI has been widely accepted by policymakers throughout the world as a goal for the financial sector and economic growth over the last several years. Cohen [71] opined that government and financial institutions should make effective policies, especially on FL in rural and urban areas, for financial intermediaries' involvement. Financial intermediaries and banking channels can deliver financial literacy programs effectively [72]. The Reserve Bank of India focuses on unique programs and policies to successfully achieve FI in the country. It employs a bank-led approach, with measures such as Basic Savings Bank Deposit (BSBD) accounts for the economically disadvantaged, simple Know Your Client (KYC) norms, and directions to open more bank branches in rural areas. Common service centers (CSCs) have been set up in rural areas, providing electronic commercial services and e-governance to rural residents. Therefore, financial policies play an essential role in attaining FI and fostering sustainable growth.
The related hypotheses are: H3. Financial schemes and financial policy have a positive relation and are sub-dimensions of financial initiatives.
H4. There is a positive relation between financial initiatives and sustainable growth.
The current research has used sustainable growth, measured through SDGs 1, 3, 5, 8, 9, 10, 11, and 17, as the dependent variable. Thus, it is essential to examine the existing literature on the sustainable development goals and how the drivers of FI, financial literacy, and financial initiatives are related to sustainable growth.
Sustainable Growth
The basic purpose of any economy is to have sustainable growth, which offers basic financial services to unbanked and rural areas and reduces disparities. SDG-1 focuses on eliminating extreme poverty. It also states that the poor and the vulnerable should have equal rights to access financial services, including microfinance. Similarly, SDG-5 is about promoting gender equality. Access to financial services, such as credit, helps women assert their economic power [25]. We would also like to refer to SDG-9, promoting innovation and sustainable industrialization. Sustainable growth advocates equitable opportunities for people during economic growth. It ensures benefits for all income groups.
Examining researchers' perspectives on economic development in relation to sustainable and inclusive growth is necessary. McKinnon and Shaw [73] concluded that expanding bank branches in rural/urban areas positively affects economic growth. Levine [74] and Beck et al. [40] also found a well-established financial system to be positively linked with the economy's growth. Khan [75] supported the view that a well-defined financial system encourages investment and promotes growth. Indeed, Bertram et al. [76] concluded that FI served as a prerequisite for inclusive economic development in Nigeria. Hariharan and Marktanner [77] supported the impact of FI on economic growth and development, as they observed a high positive correlation between FI and total factor productivity (TFP), which translates to growth. The same thoughts were reverberated by Kim et al. [78] in their research on 55 member countries of the Organization of Islamic Cooperation (OIC), where a positive relation of FI with economic growth was observed. Park and Mercado [79] found FI to be positively correlated with per capita income. Ibor et al. [80], in a study on Bangladesh, concluded that financial inclusiveness has helped in alleviating poverty and improving living standards.
However, Zins and Weill [81] used a probit model on 37 African countries and found that educated, richer, and older individuals are more financially included. Access to formal financial services in an economy provides new and equal investment opportunities for individuals and businesses [82]. Increased FI improves indicators such as income, the standard of living, health, education, and poverty reduction [83]. Thus, it becomes essential to find out how FI drivers and financial initiatives have helped in achieving sustainable growth. Sustainable growth has been measured by the customer's perception regarding how FI helps in achieving dimensions covering aspects from reducing inequalities and enhancing health to fostering growth and innovation through SDGs 1, 3, 4, 5, 8, 9, 10, 11, and 17. The SDGs were adopted in 2015 by the United Nations (UN) with the aim of ending human poverty in all of its forms in the world. Access to formal financing assists in achieving broader goals, such as ending poverty (SDG-1), improving health and education (SDGs 3 and 4), reducing gender inequality (SDG-5), and improving entrepreneurial activity, innovation, and growth (SDGs 8 and 9) [84-86]. SDG-10 is about reducing inequality; SDG-11 is about making cities and other places where people live safe, resilient, and sustainable; and SDG-17 is about reinvigorating global cooperation for sustainable development by strengthening the implementation mechanisms. The study will also be able to identify which SDG has a higher loading in sustainable growth as per the customer's perception. There are SLR studies covering FI and the SDGs; however, such a study covering a survey-based analysis has not been undertaken. Thus, the related hypothesis is: H5. Sustainable growth is measured through the consumer's perception of how FI helps in achieving dimensions covering aspects from reducing inequalities and enhancing health to fostering growth and innovation through various SDGs, viz., SDGs 1, 3, 5, 8, 9, 10, 11, and 17.
H6. Drivers of FI with the mediation of financial literacy and financial initiatives positively influence sustainable growth.
Research Design and Methodology
The research design and methodology section covers the research framework, data, research methodology, and operationalization.
Research Framework
Existing studies offer no synthesis of FI drivers, financial literacy, financial initiatives, and sustainable growth. The reported outcomes of the drivers of FI and their impact on sustainable growth vary significantly, which was the prime reason for undertaking the current study. We theorize that the drivers of FI are positively related to sustainable growth. This relation is strengthened through the mediation effect of financial literacy. We also theorize that there is a positive relation between financial initiatives and sustainable growth. The research framework is presented in Figure 1.
Data
A cross-sectional design was used for this research and data were collected through a structured questionnaire from customers. The data collection timeline was from August 2019 to October 2020. The population for this research was drawn from customers in different north Indian states: Haryana, Punjab, Himachal Pradesh, New Delhi, Chandigarh, and Uttarakhand. The study used a 5-point Likert scale survey. To ensure the questionnaire's content validity, it was first delivered to a convenience sample of 90 persons. Academicians and business professionals were included in this pilot group. The feedback from the pilot group was used to improve the questionnaire. The pilot group also recommended adding a few items to the drivers of FI to cover developing nations still in their development stage. We distributed 1993 surveys and received 1325 replies, resulting in a response rate of 66.4 percent. To represent the overall population, the sample included both urban and rural locations, both genders, graduates and postgraduates, service-class persons, and self-employed people. Revisits were made to increase sample participation. This was possible because the researcher personally visited banks to collect customer data, and no third party was employed.
Table 1 summarizes the characteristics of the customers surveyed. Of the total 1325 users, 51% were male and 49% were female. Among the respondents, 37% were from rural and 63% from urban sectors. Regarding age group, respondents above 51 years were fewer. The majority of respondents were from private sector banks. There was a dominance of urban respondents in the sample. However, the sample is representative as per the north India statistics, where the urban and male populations dominate.
Methodology
The current research has used variance-based Partial Least Squares-Structural Equation Modeling (PLS-SEM). It is a multivariate analysis method based on a series of ordinary least squares regressions and has higher levels of statistical power than covariance-based SEM (e.g., SPSS AMOS) [87]. The study uses SmartPLS 3.2.0 [88] to compute the path model. Bootstrapping has further been used to examine the significance of the loadings. The next section discusses the results. Initial results are based on factor analysis. This is followed by a model designed using PLS-SEM.
Operationalization
The study used structured questionnaires to collect data from respondents. The survey was conducted in three rounds: August 2019 to December 2019, February 2020 to August 2020, and November 2020 to December 2021. To check for non-response bias, the mean differences in critical variables between early (n = 714) and late respondents (n = 611) were tested. There were no significant differences between the two samples, indicating no non-response bias.
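As a minimal sketch of this check, the early/late comparison can be run as an independent-samples t-test per critical variable. The responses below are hypothetical placeholders; only the two group sizes are taken from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 5-point Likert responses for one critical variable;
# only the group sizes (early n = 714, late n = 611) come from the study.
early = rng.integers(1, 6, size=714)
late = rng.integers(1, 6, size=611)

# Welch's t-test avoids assuming equal variances across the two groups.
t_stat, p_value = stats.ttest_ind(early, late, equal_var=False)

# p > 0.05 would indicate no significant mean difference, i.e. no
# evidence of non-response bias on this variable.
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```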
Normal distribution plots, skewness, and kurtosis were used to evaluate the assumption of normal distribution (Table 2). For a normal distribution, skewness should be near zero; a negative value indicates skewness toward the left. Similarly, the kurtosis values are less than 3; thus, the data fulfill the criteria for normal distribution.
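A brief sketch of how these descriptive checks might be computed is given below; the item scores are simulated stand-ins for the questionnaire data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
item_scores = rng.normal(loc=3.5, scale=0.8, size=1325)  # hypothetical item scores

# fisher=False reports "raw" kurtosis, where a normal distribution scores 3,
# matching the "< 3" criterion used in the text (fisher=True would center at 0).
skew = stats.skew(item_scores)
kurt = stats.kurtosis(item_scores, fisher=False)

print(f"skewness = {skew:.3f} (near 0 expected under normality)")
print(f"kurtosis = {kurt:.3f} (compared against the threshold of 3)")
```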
Data Analysis and Results
The data analysis process is divided into two sections. The first confirms the factor structure of the measurement items of the drivers of FI, financial literacy, financial initiatives, and sustainable growth. The second stage investigates the relative importance of FI, financial literacy, and financial initiatives in explaining sustainable growth. The measurement model helps to establish the properties of the scales, and the structural model establishes the relationships among the variables.
Measurement Model
The results are represented through a measurement model to check reliability and validity in Section 4.1. This is followed by the structural model highlighting the results in Section 4.2. The measurement model is examined through construct reliability, convergent validity, and discriminant validity.
As depicted in Table 3, the composite reliability (CR) values are greater than the recommended threshold of 0.70 [89]. The Cronbach alpha values for all constructs are between 0.770 and 0.893. The composite reliability values range from 0.881 to 0.948 (Table 3). This highlights that the construct validity and the reliability of the model are good and acceptable. According to Fornell and Larcker [90], the convergent validity of the constructs is examined through factor loadings and the average variance extracted (AVE). The values of the factor loadings and the AVE should exceed the minimum requirement of 0.50 [91] for the explained variance to be greater than the measurement error. In the current study, the factor loadings range from 0.611 to 0.914, and the AVE lies between 0.502 and 0.813. This condition is also satisfied. The indicators in the reflective measurement model show satisfactory levels of indicator reliability. As shown in Table 3, the outer loadings are greater than 0.70 for most of the items. However, the factor loading is 0.644 for SDG-3, 0.648 for SDG-5, and 0.611 for SDG-9. As these items are important for the research, and some researchers have suggested retaining items with values greater than 0.60, we have retained them for further analysis.
An average variance extracted (AVE) greater than 0.50 supports the measures' convergent validity. Discriminant validity [90] was assessed by comparing the values of the square root of the AVE. It is recommended that the square root of the AVE be larger than the inter-construct correlations (Table 4). The results confirm that the reflective constructs exhibit discriminant validity. The next step was to check the outer and inner variance inflation factors (VIF). The VIF values are presented in Table 5. As highlighted, the outer and inner VIF values are less than 3 and within the acceptable range [92]. Thus, collinearity is low, as indicated by VIF values lower than 3, and no indicator was removed.
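For illustration, the reliability and convergent-validity statistics above can be computed directly from standardized outer loadings. The loadings below are hypothetical; the formulas (CR, AVE, Fornell-Larcker) are the standard ones the text refers to.

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each standardized indicator's error variance is 1 - loading^2.
    l = np.asarray(loadings)
    errors = 1.0 - l**2
    return l.sum()**2 / (l.sum()**2 + errors.sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    l = np.asarray(loadings)
    return float(np.mean(l**2))

# Hypothetical loadings for one construct; the study's loadings
# ranged from 0.611 to 0.914.
loadings = [0.82, 0.79, 0.88, 0.74]

cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"CR  = {cr:.3f} (threshold: 0.70)")
print(f"AVE = {ave:.3f} (threshold: 0.50)")

# Fornell-Larcker criterion: sqrt(AVE) must exceed the construct's
# correlations with every other construct for discriminant validity.
print(f"sqrt(AVE) = {np.sqrt(ave):.3f}")
```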
Structural Model
The results of the measurement model highlight that the construct reliability, convergent validity, and discriminant validity are all within the acceptable range. Once the measurement model had been verified, the relationships between the model dimensions and sustainable growth were examined. The structural model results, as depicted in Figure 2, show that the beta value between the drivers of FI and financial literacy is 0.877, and between financial literacy and sustainable growth it is 0.370. The indirect effect is 0.324 (0.877 × 0.370), while the direct effect of the drivers of FI on sustainable growth is 0.152. Further, financial initiatives are positively and directly related to sustainable growth, with a beta value of 0.472. The results indicate that with the mediation of FL, the impact of the drivers on sustainable growth improved and was significant too. Figure 2, along with Table 6, helps clarify the status of the hypotheses. The outer loading of usage is 0.860 and is the highest amongst the drivers of FI. Hence, we accept H1a, that usage is positively associated with FI. The outer loading of digitalization is 0.893; thus, we accept H1b: Digitalization is positively associated with FI. The outer loading of FinTech is 0.840; thus, H1c: FinTech is positively associated with FI has also been accepted. Accordingly, the first hypothesis, H1: Usage, digitalization, and FinTech are positively associated with FI, has been accepted, as all the dimensions have high outer loadings. The next hypothesis was H2: Financial literacy mediates between the drivers of FI and sustainable growth. Financial awareness and financial competency had outer loadings greater than 0.850. Hence, it can be inferred that financial literacy comprises FL awareness and FL competency. The literature suggests that financial literacy will have a positive impact on sustainable growth. This study analyzes whether financial literacy mediates between the drivers of FI and sustainable growth. For this, we need to assess the direct path of FI's influence on sustainable growth and the indirect path through financial literacy as a mediator. The results indicate that the FI drivers influence the economy's sustainable growth. The direct path coefficient is 0.152 (t-statistic 32.490) and is significant (p < 0.001). The indirect path coefficient is 0.324 (0.877 × 0.370) and the t-statistic is also significant (p < 0.001). The strength of the relationship improved with the mediation of financial literacy. Thus, H2: Financial literacy mediates between drivers of financial inclusion and sustainable growth has been empirically validated.
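The indirect effect and its significance can be reproduced in outline as follows. The path coefficients are those reported above; the respondent-level scores and the percentile bootstrap are hypothetical stand-ins (a simple OLS-based mediation bootstrap, not the PLS algorithm SmartPLS runs internally).

```python
import numpy as np

# Path coefficients reported above (drivers -> FL, FL -> growth).
a, b = 0.877, 0.370
print(f"indirect effect a*b = {a * b:.3f}")  # 0.324, as reported

# Minimal percentile-bootstrap sketch for the indirect effect,
# on simulated respondent-level scores (n matches the sample size).
rng = np.random.default_rng(2)
n = 1325
drivers = rng.normal(size=n)
fl = a * drivers + rng.normal(scale=0.5, size=n)
growth = b * fl + 0.152 * drivers + rng.normal(scale=0.5, size=n)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    x, m, y = drivers[idx], fl[idx], growth[idx]
    a_hat = np.polyfit(x, m, 1)[0]  # slope of m ~ x
    # slope of y ~ m, controlling for x (first coefficient):
    X = np.column_stack([m, x, np.ones(n)])
    b_hat = np.linalg.lstsq(X, y, rcond=None)[0][0]
    boot.append(a_hat * b_hat)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for a*b: [{lo:.3f}, {hi:.3f}]")  # excluding 0 -> significant
```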
The next hypothesis is H3: Financial schemes and financial policy have a positive relation and are sub-dimensions of financial initiatives. As the loadings of both dimensions, financial policy (0.889) and financial schemes (0.914), are high, we accept H3. It is then important to examine the relation between financial initiatives and sustainable growth. A beta value of 0.472 and a t-value of 11.763 (p < 0.001) support the acceptance of the hypothesis, viz., H4: There is a positive relation between financial initiatives and sustainable growth.
The results of the present study highlight that the drivers of FI, financial literacy, and financial initiatives influence sustainable growth. These three predictors explain 78.6 percent of the variation in sustainable growth. These results indicate that all the predictors considered in the study influenced sustainable growth, although the degree of influence varied. The results confirm H5: Sustainable growth is measured through the consumer's perception of how FI helps in achieving dimensions covering aspects from reducing inequalities and enhancing health to fostering growth and innovation through various SDGs, viz., SDGs 1, 3, 5, 8, 9, 10, 11, and 17, as the outer loadings are high for all the SDGs considered. The findings highlight that the drivers of FI, with the mediation of financial literacy, emerge as an important predictor. An important finding is that financial initiatives also significantly impact sustainable growth. This lends support to H6: Drivers of FI with the mediation of financial literacy and financial initiatives positively influence sustainable growth.
Structural Model with Control Variables
In the next stage, we introduced the control variables and checked the structural model results again (Figure 3). Region and gender were introduced as the control variables. The results were very similar. The beta value between financial initiatives and sustainable growth (SDG) was 0.472. The values were significant for the relations between the drivers of FI and financial literacy, and between financial literacy and sustainable growth (SDG). The results were also significant for financial initiatives and sustainable growth (SDG). The model also shows that the results were not significant for gender and sustainable growth (SDG), nor for region and sustainable growth. Furthermore, the beta value for gender is −0.018 (p-value: 0.379), indicating that the results favour males rather than females. Similarly, the beta value for region is 0.016 (p-value: 0.430), indicating a positive relation with the urban rather than the rural sector. Women with access to financial services may control personal and productive expenditures [93]. Thus, we accept H7: Gender and region are control variables and do not influence the endogenous variable, viz., sustainable growth. However, the results of the current study highlight an advantage for males. This may be taken as a lacuna, and FL may be provided to females so that they can avail themselves of the advantages of financial inclusiveness and its transmission to sustainable growth.
Discussion and Conclusions
The aggregate results of the study, in terms of the status of the hypotheses, are shared in Table 7. The results indicate that all the hypotheses have been accepted. Starting with the drivers of FI, viz., usage, digitalization, and FinTech, the results suggest that these are significant drivers that positively influence FI. Bhandari [94] has also highlighted penetration and usage as important dimensions of FI. An earlier study by Gu, Lee, and Suh [95] emphasized trust and usage as important indicators inducing m-banking in emerging economies. The results of the current study show that digitalization emerges as the most important driver, followed by usage and FinTech. The empirical findings of Duncombe and Boateng [96] and Barbu et al. [97] reveal that technological innovations, viz., connectivity, improve public access to financial products. This is also reflected in the current research, where FinTech emerges as a significant driver. The findings of Kim et al. [78] support the FI-sustainable development goal nexus for the Organization of Islamic Cooperation (OIC) economies, and similar results were reverberated by Sharma [10]. FI is related to sustainable growth [75,98,99]. Ryu and Ko [100] note customers' hesitancy to adopt FinTech, suggesting that effort is needed to promote it.
Chithra and Selvam [101] supported a positive relation of deposit and credit penetration with FI in India. Financial initiatives help boost FI. The present study highlights a positive relation of financial initiatives with sustainable growth. This has been indicated in earlier studies by Sarma and Pais [102] and Fungáčová and Weill [103]. The present research depicts a holistic picture by relating the drivers of FI, financial literacy, and financial initiatives to sustainable growth, measured through customers' perceptions regarding the success of FI in achieving the SDGs considered in the model. The strategic collaboration of FI and financial education leads to the financial stability of society and the economy [104]. The present study underlines the importance of the FI drivers which, with the mediation of financial literacy, enhance sustainable growth.
A strategy toward meaningful FI is needed to unlock the potential for reducing gender inequalities and for dynamizing and sustaining growth. Recent works on FI underscore that, through access to financial services and products, the marginalized population can also manage income in a better and more conducive manner [105,106]. This will diminish poverty [107] and enhance economic activity [108]. However, as indicated by Bateman and Chang [109], caution should be exercised against undue reliance on a traditional model of MFI. This, along with reliance on financial initiatives, is essential for sustainable growth. It also underlines the importance of financial literacy: with literacy, the essence of the FI drivers can be realized, which is an important step toward sustainable growth.
Hence, from the above analysis, it can be concluded that the drivers of FI, viz., usage, digitalization, and FinTech, are positively associated with financial inclusion and, with the mediation of financial literacy, positively influence sustainable growth. Sustainable growth has been measured through customers' perceptions regarding the success of FI in achieving the selected SDGs, viz., SDGs 1, 3, 5, 8, 9, 10, 11, and 17. Further, it can be concluded that there is also a positive relation between financial initiatives and sustainable growth. The study has added importance as it considers gender and region as control variables and builds a model taking all predictors along with the control variables.
Implications of the Study
The empirical findings of the present study offer valuable implications for practitioners. Understanding the constructs in the proposed research model is crucial for bankers in India and in other emerging economies who seek to promote financial inclusiveness. To enhance financial inclusiveness and its transmission to sustainable growth, there is a need to keep informing customers about changes in digitalization and FinTech. This study examines the impact of the drivers on sustainable growth through the mediation of financial literacy. The research has empirically corroborated the significant and positive impact of the drivers of FI, with the mediation of FL, on achieving sustainable growth as measured through the impact on various SDGs. In addition, the study makes a rich contribution by depicting a positive effect of financial initiatives on sustainable growth. This will help other economies to design appropriate initiatives for enhancing growth through financial policy and schemes. The results also highlight the importance of using the mentioned FI drivers, financial literacy, and financial initiatives for achieving financial inclusiveness and sustainable growth.
The major purpose of the research was to assess the impact of FI on sustainable growth. Sustainable growth was measured by asking for customers' perceptions about the success of FI in achieving the mentioned SDGs. This study moves beyond the systematic literature covering FI and the SDGs and empirically validates the relevance of FI for attaining sustainable growth. The results reflect that customers considered that FI helped in achieving sustainable growth with respect to SDG-8, i.e., improving entrepreneurial activity, innovation, and growth, which had the highest loading, followed by SDG-17, strengthening the means of implementation and revitalizing the global partnership for sustainable development. SDG-10, reducing inequalities, was the next priority for consumers. The results were also good for SDG-1: ending poverty. However, there is a need for improvement in terms of SDG-3, improving health and education, and SDG-5, reducing gender inequality.
Further, there is a need to focus on the drivers of FI to enhance the success of FI. These implications highlight the interdependence of the drivers of FI and financial literacy in achieving sustainable growth. The relations highlighted by the findings support the impact on sustainable growth. Thus, segregated policy needs to be intertwined with a dose of financial literacy to enhance financial inclusiveness and sustainable growth.
Figure 2. PLS-SEM bootstrapping model relating drivers of FI, financial literacy, and financial initiatives with sustainable growth. Source: Author's calculation with the help of PLS-SEM.
Figure 3. PLS-SEM model with control variables. Source: Author's calculation with the help of PLS-SEM.
Source: Self-calculated through SPSS.
Table 5. Outer and inner VIF.
Table 6. Structural model analysis with control variables.
Table 7. Status of hypotheses. | 2022-09-15T17:01:35.666Z | 2022-09-05T00:00:00.000 | {
"year": 2022,
"sha1": "0c34564092c2a56b1f43667cf3eaba9fa0ffa2c9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/14/17/11061/pdf?version=1662371124",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3c859426f744b39312091afb555d977a80efc9a5",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": []
} |
2565716 | pes2o/s2orc | v3-fos-license | Disruption of the microtubule network alters cellulose deposition and causes major changes in pectin distribution in the cell wall of the green alga, Penium margaritaceum
Application of the dintroaniline compound, oryzalin, which inhibits microtubule formation, to the unicellular green alga Penium margaritaceum caused major perturbations to its cell morphology, such as swelling at the wall expansion zone in the central isthmus region. Cell wall structure was also notably altered, including a thinning of the inner cellulosic wall layer and a major disruption of the homogalacturonan (HG)-rich outer wall layer lattice. Polysaccharide microarray analysis indicated that the oryzalin treatment resulted in an increase in HG abundance in treated cells but a decrease in other cell wall components, specifically the pectin rhamnogalacturonan I (RG-I) and arabinogalactan proteins (AGPs). The ring of microtubules that characterizes the cortical area of the cell isthmus zone was significantly disrupted by oryzalin, as was the extensive peripheral network of actin microfilaments. It is proposed that the disruption of the microtubule network altered cellulose production, the main load-bearing component of the cell wall, which in turn affected the incorporation of HG in the two outer wall layers, suggesting coordinated mechanisms of wall polymer deposition.
Introduction
Plant cell walls are composites of polymers that are assembled and organized into intricate structures that surround the protoplast, where they serve multiple roles including defence, turgor resistance and controlled cell growth, water and mineral uptake, and communication (Baskin et al., 2004; Cosgrove, 2005; Sarkar et al., 2009; Keegstra, 2010; Fry, 2011). Cell wall architecture is highly dynamic, and synthesis, assembly, and any subsequent remodelling require precisely coordinated interactions between the cell endomembrane system, cytoskeletal network, plasma membrane, and multiple cross-talking signal transduction pathways. Cell wall production and maintenance therefore involve not just a substantial amount of the total photosynthate, but also a major portion of the genetic repertoire (Popper et al., 2011).
The structural and developmental characteristics and functional competency of the plant wall are also fundamentally affected by complex multipolymeric associations. The nature of these interactions, especially during development and in response to environmental stresses, is poorly understood and only recently has this been the focal point of detailed study. For example, cellulose microfibrils are generally described as being tethered by xyloglucan and other hemicellulosic (cross-linking glycan) polymers, and these have been proposed to influence microfibril slippage during wall and cell expansion (Fry, 2005, 2008; Fry, 2011); the nature, extent, and significance of this cross-linking have recently been discussed (Cosgrove and Jarvis, 2012; Park and Cosgrove, 2012). There is also recent evidence that the neutral sugar side chains (e.g. arabinans and galactans) of the pectin class rhamnogalacturonan-I (RG-I) may be directly bound to cellulose (Zykwinska et al., 2005, 2007). Yoneda et al. (2010) further suggested that pectin cross-bridges support and maintain the direction of cellulose microfibril orientation and slippage during cell expansion. However, there are doubtless many other interpolymeric associations that are critical for wall architecture and function, but that have yet to be recognized and characterized.
Evaluating such interactions within the context of multicellular plants is very challenging, and the extraction of cell wall polymeric complexes inevitably disrupts or abolishes a number of the molecular associations. Moreover, the physical restriction of specific polymer probes in dense tissues and the inability to use live material in many labelling and analytical protocols effectively further limit dissection of interpolymeric interactions. In contrast, the identification and use of a unicellular plant system, particularly one with clearly defined cell wall polymer domains, would significantly enhance such studies.
A unicellular taxon of the Charophycean green algae (CGA or Streptophyta; i.e. the group of green algae most closely related to land plants; Lewis and McCourt, 2004; Wodniok et al., 2011), Penium margaritaceum, has a number of characteristics that suggest it would provide a potentially valuable model system for the study of cell wall development, including interpolymeric associations. First, Penium only produces a permanent primary cell wall, comprising two prominent polymeric domains that are easily identified by microscopy: a pectic domain primarily consisting of homogalacturonan (HG) organized into a lattice-like network in the outer layer of the wall; and an inner domain consisting mostly of cellulose, together with smaller amounts of other glycan classes (Sørensen et al., 2010; Domozych et al., 2011). Secondly, the focal point of HG secretion, which in Penium appears to drive cell wall growth and cell development, is a clearly defined narrow band located at the cell centre or isthmus, the isthmus band (Domozych et al., 2009b). This facilitates visualization of wall polymer secretion in a spatially well-defined area. Thirdly, Penium can be grown in large, fast-growing cultures, enabling extraction of substantial amounts of cell wall material for biochemical and immuno-based screening (Møller et al., 2007). Fourthly, wall polymer dynamics can be conveniently monitored by live cell labelling utilizing probes such as monoclonal antibodies (mAbs) directed against higher plant wall polymers or carbohydrate-binding modules (CBMs; Domozych et al., 2011). Finally, the cell cultures can be readily treated with agents that promote or disrupt cellular processes, including enzymes and pharmacological inhibitors, at precise concentrations and over controlled time periods.
In this study, the structural and developmental dynamics of the pectin and cellulose domains during Penium cell wall expansion and cell morphogenesis following treatment with the dinitroaniline herbicide, oryzalin, were analysed. This compound blocks microtubule polymerization and consequently inhibits cell wall development and anisotropic growth (Hugdahl and Morejohn, 1993). A combination of high resolution microscopy, polysaccharide microarray analysis, and experimental manipulation was used to study oryzalin-induced changes to the cell wall. Distinct effects of oryzalin on the pectin and cellulose domains of the cell wall and concurrent alterations to the cytoskeletal system are described, and the implications of the results for the control and coordination of cell wall disassembly are discussed.
Materials and Methods

General
Penium margaritaceum ('Skd-8' clone, Skidmore College Algal Culture Collection) was grown in liquid Woods Hole medium (WHM; Domozych et al., 2007) under the following conditions: 5400 lux of cool white fluorescent light, 18 ± 1 °C, 16 h light/8 h dark photocycle. Subcultures were made every 2 weeks and cells used for experiments were collected after 5-7 d in culture. Cells were harvested and washed as previously described (Domozych et al., 2007).
Oryzalin was obtained from AccuStandard (New Haven, CT, USA) and the final concentration chosen for experimental procedures was 280 nM, as this concentration produced the most evident phenotypes. Specific experiments were conducted in 5 ml aliquots of culture medium, each containing 500 cells ml⁻¹. After the addition of oryzalin (from a stock solubilized in methanol), the cells were cultured under the conditions described above. Control experiments included growing cells in 0.01% methanol. Reversibility experiments entailed harvesting oryzalin-treated cells at various time intervals, washing five times in fresh WHM, and culturing in fresh WHM. Washed cells were then monitored via microscopy over the next 72 h. Total reversibility of effects could be visualized in cells incubated in oryzalin for ≤96 h. Cells were also treated with isoxaben (10 μM) or 2,6-dichlorobenzonitrile (DCB; 0.2 μM; Sigma Chemical, St Louis, MO, USA) and monitored after 24 h.
Live cell labelling
Treated and untreated cells were harvested, washed with WHM, and labelled with the following mAbs, as previously described (Domozych et al., 2007): JIM5 [specificity for HG with a relatively low degree of esterification (DE); Clausen et al., 2003]; JIM7 (specificity for relatively high DE HG; Clausen et al., 2003); and INRA-RU2 [specificity for the RG-I backbone, (1→2)-α-L-rhamnose (Rha)-(1→4)-α-D-galacturonic acid (GalA), with at least two Rha-GalA repeats; Ralet et al., 2010]. All primary antibodies were obtained from Plant Probes (Leeds, UK), with the exception of INRA-RU2, which was a generous gift from Dr M.-C. Ralet (INRA Nantes, France). Secondary antibodies for immunofluorescence studies included anti-rat or anti-mouse antibodies conjugated with tetramethylrhodamine isothiocyanate (TRITC) or fluorescein isothiocyanate (FITC) (Sigma). For labelling with CBM3a (specificity for crystalline cellulose), the protocols recommended by the supplier (Plant Probes; also see Blake et al., 2006) were employed, with the modification that WHM was used as the labelling buffer. Labelled cells were either viewed via light microscopy (LM) or washed with WHM and placed back into culture. Aliquots of cells were subsequently removed at various time intervals and viewed via fluorescence light microscopy (FLM), either on an Olympus BX-60 LM (NY-NJ Scientific, New Jersey, USA) equipped with fluorescence optics, or an Olympus BX-61 LM equipped with a Fluoview 300 confocal laser scanning microscopy (CLSM) system. In order to ascertain mAb labelling patterns relative to the chloroplast, which fills most of the protoplast, some mAb-labelled cells were first imaged to assess wall labelling and then with the argon laser, which provided the autofluorescence exhibited by the chloroplast in the background. The image stacks from each were superimposed to yield dual-labelled images. Similarly, some cells were first labelled with JIM5, placed back in culture, and then labelled with JIM7. For general morphological studies, cells were observed with differential interference contrast light microscopy (DIC-LM).
Enzyme pre-treatment

Aliquots of washed cells were treated for 24 h or 48 h with pectate lyase (PL) (Megazyme, IR; E-PECLY, final concentration 1.2 U) or cellulase (Sigma Chemical; #0615; 500 μg ml⁻¹). The cells were then collected and resuspended in 280 nM oryzalin in either the PL or cellulase solutions for 24-48 h. Cells were collected and viewed with DIC-LM, or labelled with JIM5 and observed with FLM or CLSM.
Microtubule and actin labelling
Immunolocalization of microtubules was performed using the freeze shatter technique of Wasteneys et al. (1997). Rhodamine-phalloidin labelling was performed using the method described by Holzinger et al. (2002).
Quantitative measurements
The surface area (SA) of a cell covered by new cell wall, as recognized by new HG, in relation to whole-cell SA was calculated for JIM5-labelled cells incubated in oryzalin for 48 h or 72 h, or in control cultures. The cylindrical morphology of Penium and the constant cell width (17 μm) of each cell allow SA measurements to be obtained using the standard formula for the SA of a cylinder: SA = 2πr² + 2πrL, where r is the radius of the cell and L is the length of the designated area (i.e. the length of the cell or the length of the cell area with newly deposited HG). For L, the length of specific areas with new cell wall was calculated as the non-fluorescent zones produced after initial JIM5 labelling. Measurements were made using standard Cell B software (Olympus). Triplicate samples of 100 cells each were measured and a 0.98 (SA) curvature factor employed to account for the blunt rounding of the cells at the poles. For calculating the SA of the swollen, spherical isthmus regions of oryzalin-treated cells, the diameter of the central, spherical, swollen zone was measured, in addition to the adjacent cylindrical polar regions. The SA of the spherical regions was determined using the standard formula for a sphere: SA = 4πr², where r is the radius of the sphere. This SA was added to the surface areas of the cylindrical regions at the poles to determine the whole-cell SA of treated cells.
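A short sketch of these calculations is given below; the cell dimensions fed into it are hypothetical examples, while the constants (17 μm width, 0.98 curvature factor) come from the text.

```python
import math

CELL_RADIUS = 17 / 2        # constant cell width of 17 um -> radius in um
CURVATURE_FACTOR = 0.98     # accounts for blunt rounding at the poles

def cylinder_sa(length, r=CELL_RADIUS):
    """Whole-cell SA of a cylindrical cell: SA = 2*pi*r^2 + 2*pi*r*L."""
    return 2 * math.pi * r**2 + 2 * math.pi * r * length

def treated_cell_sa(sphere_diameter, polar_lengths, r=CELL_RADIUS):
    """Swollen isthmus (sphere, SA = 4*pi*r^2) plus the cylindrical polar regions."""
    sphere = 4 * math.pi * (sphere_diameter / 2) ** 2
    poles = sum(2 * math.pi * r * L for L in polar_lengths)  # lateral surfaces
    return sphere + poles

# Hypothetical measurements (um), for illustration only:
print(f"untreated cell SA: {CURVATURE_FACTOR * cylinder_sa(180):.0f} um^2")
print(f"treated cell SA:   {treated_cell_sa(30, (70, 75)):.0f} um^2")
```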
Polysaccharide microarray analysis
Polysaccharide microarray analysis was performed as described by Møller et al. (2007). Supernatants of extracted cell wall material were spotted in three replicates and three dilutions, and three independent analyses were carried out. Mean spot signals from the three experiments are presented as a heatmap created using the online tool available at http://bar.utoronto.ca/ntools/cgi-bin/ntools_heatmapper.cgi, with the values normalized to the highest value (set to equal 100). A cut-off of 5% of the highest mean signal value was imposed and values below this are represented as 0. Antibodies and CBM3a were obtained from PlantProbes, CCRC (University of Georgia, Athens, GA, USA), Dr M.-C. Ralet (INRA, Nantes, France), or BioSupplies (Parkville, Australia).
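The heatmap normalization described here is straightforward to reproduce; the signal values below are invented placeholders, while the normalization to 100 and the 5% cut-off follow the text.

```python
import numpy as np

# Hypothetical mean spot signals (rows: mAb/CBM probes, columns: extracts).
signals = np.array([
    [812.0,  55.0,  10.0],
    [430.0, 390.0,   2.0],
    [ 90.0,  21.0, 700.0],
])

# Normalize to the highest mean signal, which is set to 100.
normalized = signals / signals.max() * 100

# Impose the 5% cut-off: values below 5% of the maximum become 0.
normalized[normalized < 5] = 0

print(np.round(normalized, 1))
```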
Transmission electron microscopy (TEM)
For TEM analyses, cells were harvested, washed, and spray-frozen in liquid propane cooled with liquid nitrogen (see Domozych et al., 2009b). Samples were then freeze substituted at -80 °C for 72 h in 0.5% glutaraldehyde/1% osmium tetroxide (EMS, Ft. Washington, PA, USA). The samples were then slowly warmed to room temperature over 8 h, washed with acetone, and infiltrated/embedded between two plastic Aclar (EMS) sheets in Spurr's low-viscosity epoxy plastic (EMS). After polymerization of the plastic, the Aclar was removed, and individual cells were selected, excised from the thin plastic sheet with a razor blade, and mounted with super glue onto a blank plastic mould. Sections (60-80 nm) were cut on a Reichert Ultracut ultramicrotome (MOC, Valley Cottage, NY, USA), stained with conventional uranyl acetate/lead citrate, and viewed with either a JEOL 1010 transmission electron microscope (Peabody, MA, USA) or a Zeiss Libra 120 transmission electron microscope (Peabody). In order to enhance HG imaging, some cells were treated prior to fixation with the chelator cyclohexanediaminetetraacetic acid (CDTA; 2 h; room temperature). For immunogold labelling, the protocol of Domozych et al. (2009a) was employed. For enhancement of general ultrastructural and immunogold labelling, some sections were analysed with darkfield optics.
Variable pressure scanning electron microscopy (VPSEM)

Cells were harvested, washed, and 100 μl aliquots of dense cell suspensions from the pellet were placed on circular 0.8 cm diameter nitrocellulose sheets. The cells were allowed to settle on the membrane and excess growth medium was removed with filter paper. Each sheet was plunge-frozen in liquid nitrogen and then placed on a JEOL cryostub, which had been pre-cooled with liquid nitrogen. Cells were viewed on a JEOL 6480 variable pressure scanning electron microscope under the following conditions: 30 Pa, 10 kV, and 60 spot size.
Field emission scanning electron microscopy (FESEM)
Harvested and washed cells were frozen in liquid nitrogen, freeze dried, and placed on stubs coated with double-sided sticky tape. Cells were sputter coated with gold/palladium and imaged using a Zeiss Neon-40 EsB FIB-B scanning electron microscope.
Results

Morphology and immunolabelling patterns of cultured Penium
Under normal growth conditions, P. margaritaceum is an elongate cylinder with rounded poles. Each cell is ~17 μm wide and cell length varies from 150 μm to 220 μm (Figs 1A, 2A). Live cells may be labelled with mAbs with specificity for epitopes present in land plant cell wall polymers (see also Domozych et al., 2011) or CBMs, and placed back into growth medium, where they continue division/expansion and retain the label for 10 d or more, depending on the polymer in question. JIM5, an mAb that recognizes HGs with a relatively low DE, labels a lattice-like structure over most of the cell surface (Fig. 1A; Domozych et al., 2011) except for a narrow, non-labelled band in the isthmus region (Fig. 1B). This band represents the major HG secretion zone during pre-division (i.e. the isthmus band; or 'HGSB', Domozych et al., 2009b) and is labelled by the mAb JIM7, which recognizes HG with relatively high DE (Fig. 1C, D). RG-I was also identified in the cell wall using INRA-RU2 (Fig. 1E) but, unlike JIM5, INRA-RU2 localized in a layer below the outer wall lattice and in a more homogeneous labelling that was interrupted by dark puncta. This was determined by CLSM optical sectioning. CBM3a, a CBM with specificity toward crystalline cellulose, also labelled most of the cell wall except for the isthmus band (Fig. 1F). CLSM-based optical sectioning through the wall layers revealed CBM3a labelling at the innermost region of the wall that is generally uniform, but interrupted by unlabelled puncta (Fig. 1F). Approximately 10-12 of these puncta were found per square micrometre, matching the number and pattern of the outer wall layer lattice projections observed in the JIM5-labelled cell walls. These unlabelled puncta were interpreted as the shadows of the unlabelled HG of the outer and medial layers embedded in the cellulose domain. Control labelling experiments for the initial immunocytochemical screening included elimination of the primary mAb (Fig. 1G) or CBM3a (Fig. 1H).
Morphology and growth dynamics of oryzalin-treated Penium
Penium has a uniform cylindrical morphology consisting of two 'equal' sized semi-cells attached at the central isthmus.
During most of the cell cycle, the nucleus resides at the isthmus and is flanked by two chloroplasts housed within each semi-cell (Fig. 2A). After 24 h of treatment with 280 nM oryzalin, noticeable swelling occurred at the isthmus zone (Fig. 2B), which increased further after 36 h (Fig. 2C). The nucleus remained in the swollen isthmus and became ensheathed by the chloroplast filling this zone. After 48 h, the swelling increased dramatically (Fig. 2D), creating a large spherical central zone within the cell that was sandwiched between the two cylindrical polar zones, suggesting that oryzalin affects the new but not the pre-existing cell wall. Cells did not divide when treated with oryzalin but remained alive for up to 96 h, after which the protoplast and cell wall often ruptured at the isthmus. Recovery experiments, involving removal of oryzalin and transfer of the cells to fresh WHM, resulted in a return to the cylindrical morphology. The time taken for recovery was dependent on the time taken for elimination of already internalized oryzalin (Sampathkumar et al., 2011) and, after 12 h of recovery, new expansion yielded a narrow, cylindrical morphology arising at the isthmus flanked by the swollen regions that arose during incubation in oryzalin (Fig. 2E). Cells were also able to divide during recovery, yielding products that have a narrow cylindrical morphology at the poles (i.e. formed during pre- and post-oryzalin treatments) and a swollen central region (i.e. formed while incubated in oryzalin; Fig. 2F).
Figure 1 caption (fragment): The unlabelled puncta (arrows) most probably represent shadows of HG of the outer layer. Scale bar=3 μm. (F) CBM3a (specificity toward crystalline cellulose) labelling of the cellulosic layer (small arrows) revealing the thin unlabelled isthmus band at the isthmus (arrow). Scale bar=3.0 μm. (G) mAb control labelling where primary antibody was eliminated from the labelling process. Scale bar=12 μm. (H) CBM3a control where the CBM3a was left out of the labelling process. Scale bar=12 μm. All images were taken using CLSM.
VPSEM was used to analyse the severe morphological changes to the cell and alterations to the cell wall surface resulting from oryzalin treatment (Fig. 3A, B). More specifically, isthmus-based swelling occurred from 6 h to 24 h after the treatment (Fig. 3C-E), during which time the HG lattice of the outer wall became disrupted, before ultimately disappearing at ~48 h of treatment (Fig. 3F). During the recovery experiments, a narrowing of the isthmus region became apparent (Fig. 3G), similar to that observed by LM (Fig. 2E). Cell division reinitiated during recovery (Fig. 3H) and the outer HG lattice reappeared in the growing zone at the polar tip of the expanding daughter cell (Fig. 3I).
Wall compositional modifications induced by oryzalin treatment
Polysaccharide microarray analysis was performed in order to compare the relative amounts of epitopes of wall polymers in untreated cells with those in cells treated with oryzalin for 48 h (Fig. 4). This semi-quantitative technique has been used successfully with several CGA species and involves sequential extraction of cell wall polysaccharides using CDTA, followed by sodium hydroxide, and cadoxen (diaminoethane and cadmium oxide), prior to spotting onto a nitrocellulose membrane and probing with mAbs or CBMs with specificity for a range of cell wall epitopes. Differences in the mean spot signal intensities were observed after probing with several mAbs, but one particularly striking change was an increase in oryzalin-treated cells of the relative levels of HG epitopes, as recognized by the mAbs LM18 and LM19, which are new mAbs that label HG epitopes in a similar fashion to JIM5 (Verhertbruggen et al., 2009). Conversely, the relative levels of the RG-I backbone epitope, recognized by the mAbs INRA-RU1 and INRA-RU2, the AGP epitopes, recognized by the mAbs JIM8 and LM16, and the extensin epitopes, recognized by the mAbs JIM20 and LM1, all decreased in oryzalin-treated cells. Another notable finding was that the RG-I epitopes were almost as abundant in material extracted with NaOH as in material extracted with CDTA. It should be noted that equal volumes of extract are used for each spot, which allows comparisons between treatments (i.e. with or without oryzalin) but not between extracts (i.e. CDTA, NaOH, and cadoxen), and so the extractability of the epitope-bearing polymers is not considered here. The results of this microarray study were used as a guide for choosing specific targeted polymers for subsequent labelling.
Immunocytochemical examination of alterations to the HG lattice in oryzalin-treated cells
JIM5 was used as a marker to monitor lower DE HG during wall development. This antibody has previously been successfully employed for live cell immunolabelling of Penium (Domozych et al., 2009b). Oryzalin treatment initially (~2 h) resulted in a narrow region of disruption to the HG lattice of the outer wall layer at the isthmus band (Fig. 5A). After 24 h of treatment (Fig. 5B), distinct breaks in the HG lattice were visible and these progressively expanded over 36 h and 48 h of treatment (Fig. 5C, D), at which point the lattice was severely disrupted. When cells were allowed to recover for 12 h, the HG lattice and cylindrical shape reappeared at the isthmus band (Fig. 5E) and at the expanding polar zone of recently divided daughter cells (Fig. 5F). Quantitative analysis of SA coverage by the HG lattice showed that the percentages of cell SA covered by new cell wall in cells treated for 48 h and 72 h and in untreated cells were approximately equal (Table 1). However, in untreated cells, new wall material was found primarily in new cylindrical growth at the isthmus zone, whereas in treated cells the majority of new wall material was present in the spherically swollen isthmus. Experiments were carried out to determine whether cellulose alteration caused by treatment with the known cellulose synthesis-disrupting agents, isoxaben and DCB, affected the expansion zone and cell shape. When treated with isoxaben (10 μM; Fig. 5G) or DCB (0.2 μM; Fig. 5H), similar swelling of the isthmus was observed. These effects were also reversible by extensive washing of the cells and removal of the disrupting agent.
Figure 2 caption (fragment): (E) After 12 h of recovery, the cell returned to its typical cylindrical morphology. This is seen with a narrowing of the cell at the isthmus band (arrow). Two older swollen regions remaining from oryzalin treatment flank the isthmus. Scale bar=14 μm. All images were taken using DIC.
FESEM imaging of HG lattice alteration
FESEM was employed to provide detailed surface imaging of the cells. During cell swelling at the isthmus band upon 24 h of treatment with oryzalin, the HG lattice began to tear apart (Fig. 6A). The lattice of the pre-existing cell wall consisted of an inner fibre-based network and outward-extending projections (Fig. 6B). The wall of the swollen isthmus was highlighted by irregular patterns of the HG fibres interspersed with wall regions possessing no lattice (Fig. 6C). These observations corresponded well with JIM5-labelled cells displaying lattice alterations (Fig. 5B-D).
Figure 5 caption (fragment): At 12 h after recovery, the typical HG lattice begins to regenerate at the isthmus (small arrows) while the disrupted lattice of the oryzalin-induced swollen zones remains (large arrows). Scale bar=4.0 μm. (F) A recently divided daughter cell, 24 h after recovery, initiating a return to the cylindrical shape. The typical HG lattice at the expanding pole (small arrow) is apparent, as is the remnant of the disrupted swollen zone (large arrow) and the original wall of the cell prior to oryzalin treatment (*). Scale bar=15 μm. All images taken with CLSM. (G) Twenty-four hours of treatment with 10 μM isoxaben results in swelling of the isthmus region and disruption of the HG lattice (arrow). Scale bar=15 μm. (H) Twenty-four hours of treatment with 0.2 μM DCB results in swelling of the isthmus region and disruption of the HG lattice (arrow). Scale bar=15 μm.
Effects of oryzalin on enzymatically treated cells
In order to elucidate further the effects of oryzalin treatment on the pectin and cellulosic domains, cells were treated with different wall-degrading enzymes prior to oryzalin treatment. When cells were treated with PL for 24 h and then incubated for 24 h with oryzalin and PL, the isthmus-based swelling was still apparent (Fig. 7A) and the lattice of the outer wall layer, as labelled by JIM5, was disrupted (Fig. 7B). When cells were pre-treated with PL for 48 h (twice the pre-treatment time), only small strips of lattice remained (Fig. 7C) and cell morphology was similar to that seen when cells were treated with oryzalin alone. However, when cells were pre-treated with cellulase for 24 h followed by oryzalin/cellulase treatment, the isthmus-based swellings became highly pronounced (Fig. 7D). The pectin lattice was disrupted (Fig. 7E) and ~20% of these cells ruptured at the isthmus under the pressure of the coverslip. Cells returned to normal morphology after recovery via washing (not shown).
Immunocytochemical analysis of highly esterified HG, RG-I, and cellulose during oryzalin treatment
Other types of labelling were performed in order to obtain a more complete picture of wall alterations. Labelling of treated cells with JIM7 revealed that oryzalin also affected the distribution of pectins with a higher DE. After 12 h of treatment, the typical distribution of label in the narrow isthmus band was observed, even in the swollen isthmus zone (Fig. 8A) but, after 36 h, the signal became more diffuse and was irregularly displaced over the central part of the swollen zone (Fig. 8B). Other aspects of the pectin network were disrupted by oryzalin, as evidenced by INRA-RU2 labelling of the RG-I backbone. This showed intensely labelled striations after 12 h (Fig. 8C) and then a highly irregular pattern in the swollen isthmus after 36 h (Fig. 8D). The spatial distribution of crystalline cellulose, as detected using CBM3a, was similarly perturbed by the oryzalin treatment (Fig. 8E), resulting in a shredded appearance at the swollen isthmus region.
Cytoskeletal changes induced by oryzalin treatment
Oryzalin has previously been demonstrated to be a potent microtubule-affecting agent in land plants (Hugdahl and Morejohn, 1993; Morrissette et al., 2004). Likewise, cortical microtubule and actin microfilament networks have been shown to be closely associated with cell wall synthesis, secretion, and development (Mutwil et al., 2008; Paredez et al., 2008). In this study, tubulin immunolabelling and rhodamine-phalloidin labelling of actin cables were used to observe the two cytoskeletal networks in order to elucidate any changes to these cytoskeletal components upon treatment with oryzalin. In untreated cells, the cortical microtubule network was highlighted by distinct rings of microtubules aligned perpendicular to the long axis of the cell (Fig. 9A). The isthmus region contained the largest ring, consisting of a network of 10-20 parallel-aligned microtubules (Fig. 9B). After 36 h of oryzalin treatment, the microtubular network became disorganized and no ring was apparent in the swollen isthmus region (Fig. 9C). Upon recovery, the microtubule band of the isthmus region reappeared within 4 h (data not shown). The actin microfilament network of Penium consists of parallel arrays of microfilament bundles in the sub-plasma membrane cortical region running parallel to the longitudinal axis (Fig. 9D). At the isthmus zone, parts of these microfilament bundles converged inward to form a ring at the same location as the microtubular band, corresponding to the JIM7-labelled region (Fig. 9E). After 36 h of oryzalin treatment, the isthmus-based microfilament band became highly disorganized (Fig. 9F) but the parallel alignment of microfilament bundles in the unaltered regions of the cell remained. After 12 h of recovery, the normal distribution of microfilaments returned (Fig. 9G).
Ultrastructural effects of oryzalin treatment
The effects of oryzalin on cell wall ultrastructure were assessed using TEM. After 48 h of treatment, noticeable alterations to the wall were observed (Fig. 10A). In addition to alterations in the HG lattice, the wall was thinner, and little, if any, lattice was apparent. The Penium cell wall consists of three layers, an outer layer containing the HG lattice, an inner fibrous layer of cellulose, and a middle 'interface' layer where the HG of the outer layer embeds in the cellulose (Fig. 10B). Treatment of cells with oryzalin for 24 h (Fig. 10C) resulted in a notable disruption of the wall architecture with a sharp interface between altered and unaltered regions of the wall. All three wall layers were present in the region formed before oryzalin treatment. However, in wall formed during oryzalin treatment (i.e. the swollen zone), little of the HG lattice remained. After longer treatments (36 h), the medial layer appeared as multiple linear 'streaks' positioned nearly perpendicular to the long axis of the wall (Fig. 10D). These micrographs were taken from sections of cells embedded in plastic sheets to enable observation of their longitudinal wall profiles. After 48 h (Fig. 10E), the cell wall of the swollen region became notably thinner and contained remnants of the medial layer components located at the outermost region of the inner layer. In a comparison of 50 micrographs of the cell walls of treated and untreated cells, oryzalin treatment resulted in a decrease of 25% (±4%) of the inner/medial wall layer thickness.
Oryzalin induces a loss of wall biomechanical strength at the primary site of wall deposition
The cell wall of Penium consists of two major domains that are arranged in three recognizable layers. One domain consists of HG (Domozych et al., 2007) that binds with Ca2+ to form the distinctive lattice that constitutes the outer layer of the wall. The second domain is cellulose based and makes up the inner cell wall layer. Aggregates of HG fibrils emerging from the base of the outer layer embed in the microfibrillar infrastructure of the inner cellulose-rich layer and form the medial layer. This layer contains both HG and RG-I, and represents the zone where pectin and cellulose physically intersect. This architectural design of the cell wall supports the elongate cylindrical shape of the cell and resists the pressures of internal turgor. Treatment of Penium with oryzalin compromises this cylindrical design and causes distinct swelling at the isthmus zone. This swelling is accompanied by significant alterations to both wall domains and the overall structural architecture of the wall. The isthmus is the site of the isthmus band during pre-division expansion where HG is secreted and incorporated into the wall, where cellulose microfibrils are synthesized, and where the pectic and cellulosic domains most probably become interconnected (also see Domozych et al., 2009b). Consequently, the developing cell wall at the isthmus band is more elastic than at other parts of the cell and is more susceptible to the pressure of internal turgor if its structural integrity is compromised. This would explain why oryzalin-induced swelling occurs here. Additionally, oryzalin does not affect pre-existing wall, suggesting that the mature wall is not significantly remodelled after it forms, or that any post-synthesis remodelling is not affected by application of oryzalin. The mechanism of highly focused wall expansion and the oryzalin-induced swelling at the isthmus band in Penium have some notable similarities to that observed in other anisotropically growing plant cells. For example, in expanding pollen tubes, the focal point of wall expansion is also a narrow band, specifically the apical zone located at the tube tip (Geitmann and Steer, 2006; Geitmann and Ortega, 2009; Geitmann, 2010; Cai et al., 2011). Oryzalin treatment also causes swelling at this apical tip (Anderhag et al., 2000). In the pollen tube apex, high DE HG secretion and callose/cellulose synthesis produce an elastic wall zone capable of regulating turgor-driven expansion. Immediately beyond the apex, pectin methylesterase (PME) remodelling of the HG followed by Ca2+ cross-linking creates a rigid gel which strengthens the wall that will surround the long tube shank. In Penium, the isthmus band is functionally equivalent to the expanding apical tip of the pollen tube; that is, where HG secretion/modelling and cellulose microfibril synthesis actively occur. However, the Penium wall synthesis mechanism differs from that of pollen tubes in that although there is a single wall expansion zone (the isthmus band), wall expansion is bi-directional. This predicates the presence of a currently undescribed mechanism that allows for both PME processing of secreted HG and displacement of this HG toward both poles of the cell.
Oryzalin disrupts Penium microtubular dynamics and wall deposition but not cell expansion
In previous studies, oryzalin has been shown to affect microtubule dynamics in plants and some protists directly by sequestering tubulin dimers (Hugdahl and Morejohn, 1993; Morrissette et al., 2004). In land plants, this leads to changes in cell wall infrastructure and subsequent cell swelling (Nakamura et al., 2004; Bannigan et al., 2006; Paradez et al., 2006; Corson et al., 2009) similar to that observed in this study. In Penium, it was shown that parallel bands of cortical microtubules aligned perpendicular to the cell's longitudinal axis are found in the central region of the cell, the largest and most prominent of which resides at the isthmus band. More importantly, this microtubule band was dramatically altered during oryzalin treatment, resulting in a random display of microtubules dispersed throughout the cytoplasm of the isthmus. This corresponded to alterations to the cell wall and the swelling at the isthmus region. What might be the link between the cortical microtubular cytoskeleton and the wall expansion dynamics occurring at the isthmus band? Throughout the past half-century of cell wall research, close associations of cortical microtubules with cellulose microfibril orientation have been noted in many plant cells (Smith and Oppenheimer, 2005; Mutwil et al., 2008; Lloyd and Chan, 2008; Anderson et al., 2010; Chan et al., 2010; Endler and Persson, 2011). Recently, live cell imaging using fluorescent protein fusions with cellulose-synthesizing enzymes (e.g. cellulose synthase, or CesA complexes) has further demonstrated the dynamic interaction between the cellulose synthetic machinery residing on the plasma membrane and the underlying layer of cortical microtubules (Paradez et al., 2006). It is widely believed that cortical microtubules serve as guides that direct the movement of cellulose synthase complexes on the plasma membrane and, in turn, the production of cellulose microfibrils in specific orientations in the cell wall. According to this model, perturbation of the cortical microtubular network by an agent such as oryzalin would affect the synthesis of the cellulose microfibrillar network. The results of this study also suggest that alteration of the cortical microtubule network by oryzalin in the active wall expansion zone, the isthmus band, directly affects both the cellulose synthesis machinery and the wall microarchitecture. For example, TEM imaging demonstrated that the cellulose-based inner wall layer was reduced in thickness by 25% at the oryzalin-induced swollen zones. It is possible that the microtubule disruption at the isthmus band slows or alters cellulose microfibril synthesis, yielding a thin cellulosic layer. Turgor pressure at this thin zone would then cause deformation of cell shape. The cellulosic framework here would still be sufficient to keep the cell from bursting but would be unable to maintain the narrow cylindrical shape at the isthmus (i.e. swelling occurs). It is also possible that oryzalin-induced alteration of the cellulose synthesis machinery causes an increased stretching in the cellulosic layer (i.e. increased sliding of microfibrils) which then contributes to the thinning of the inner layer and subsequent perturbation of the HG lattice. The link of oryzalin treatment to cellulose domain disruption is further strengthened by observations from this study whereby cellulose-affecting agents (e.g. isoxaben or DCB) also cause swelling at the isthmus.
Interestingly, the present study also showed that in oryzalin-treated cells, the percentage of surface area covered by new wall material in relation to the whole cell was approximately the same as in untreated cells. This suggests that while structural changes occurred in the wall following oryzalin treatment, the rate of wall expansion is not noticeably altered. The geometry of expansion changes from linear (cylinder) to spherical (swollen isthmus) but not the amount of cellular expansion.
Evidence for coordinated deposition and interaction of the pectin and cellulose cell domains
The thinning of the cellulosic inner layer during oryzalin treatment also results in distinct alterations to the pectin domain of both the medial and outer layers. Proper formation of the cellulosic layer probably serves as the framework for the deposition and anchoring of the HG-based outer layer. When formation of this cellulosic layer becomes compromised by oryzalin, alteration of the HG lattice also occurs (Fig. 11). This result exemplifies a complex structural interaction between two polymer domains that must be developmentally coordinated and adds to the growing evidence supporting pectin-cellulose interactions (Zykwinska et al., 2005, 2007; Peaucelle et al., 2012). In Penium, the identification of RG-I in the medial layer suggests that this polymer may also be involved in this interaction, although its relative abundance appears to be relatively low compared with that of land plants.
Previous research has shown that pectins and pectin-modulating enzymes such as PME (Mohnen, 2008; Bosch and Hepler, 2005; Tian et al., 2006), as well as the cellulose synthesis machinery (e.g. CesA; Mutwil et al., 2008; Petrasek and Schwarzerova, 2009; Endler and Persson, 2011), are produced in the Golgi apparatus (GA) and transported by GA-derived vesicles via actin-mediated movement to the cell surface (Bove et al., 2008; Cheung et al., 2008; Daher and Geitmann, 2011). In this study, rhodamine-phalloidin labelling revealed an extensive network of actin microfilaments found in the peripheral cytoplasm where it aligned parallel to the longitudinal axis of the cell. Some of this network converges inward at the nucleus that resides in the same location as the isthmus band. Penium, like most desmids, also displays active cytoplasmic streaming in the peripheral cytoplasm that is directed along the longitudinal axis of the cell. These observations led to the presumption that actin-mediated cytoplasmic streaming is also a major mechanism for transporting Golgi-derived secretory vesicles in Penium, including those carrying wall polymers or polymer-biosynthetic/modulating enzymes to a wall expansion site at the cell surface, namely the isthmus band. It was also shown that oryzalin treatment results in a localized disorganization of the actin microfilament network in the affected region of the cell, the swollen isthmus. These observations led to the belief that oryzalin-induced alterations in cell wall development in Penium may be due to perturbation of the actin network that subsequently disables or significantly alters delivery of CesA complexes being transported from the GA to the plasma membrane at the isthmus band. This would, in turn, affect cellulose microfibril production at the isthmus band and initiate a cascade whereby the wall at this band would no longer maintain its normal tensile strength which is responsible for restricting turgor-driven pressure, thus leading to the observed swelling. The alteration of the actin network might also directly affect pectin secretion at this wall expansion site (e.g. compromised delivery of HG-carrying vesicles). Though oryzalin is a microtubule poison, it has been previously demonstrated that there is a close association of microtubules with microfilaments (Szymanski and Cosgrove, 2009; Sampathkumar et al., 2011). In Penium, the cortical cytoplasm is highlighted by both microtubule and microfilament bands. If the microtubule network is disrupted by oryzalin, subsequent disruption of the actin network may also occur, leading to the aforementioned alterations in wall development and structure.

Fig. 11. Schematic diagram of the structural changes that occur to the cell wall during oryzalin treatment. The cell consists of an inner layer of cellulose, an outer layer of HG that forms the distinct lattice, and a medial layer that consists of RG-I and HG. It is at this layer where the pectin and cellulose are connected. Upon oryzalin treatment, cellulose synthesis is altered, resulting in a thinner cellulosic layer. Subsequently, this perturbs the formation of the RG-I-containing medial layer that in turn disrupts the formation of the HG lattice. (This figure is available in colour at JXB online.)
The pectin and cellulose domains: physically interacting but distinct functions?
This study has shown that oryzalin affects the architecture of both the cellulose and pectin domains of the cell wall and manifests in a major change to cell shape. These observations led to the question of which cell wall polymer and/or domain is primarily responsible for maintaining the structural integrity of the wall at the isthmus band and the cylindrical morphology of the cell. First, in untreated cells, the cell wall of the isthmus band consists primarily of the cellulose-rich inner layer (i.e. no HG lattice) and the typical cylindrical cell shape is maintained here. In oryzalin-treated cells, the cellulose layer at the isthmus band thinned, and swelling of the isthmus region occurred. Secondly, in cells pre-treated with cellulase and then treated with oryzalin and cellulase, the swelling at the isthmus zone became even more pronounced and, in some cases, led to wall rupture. These observations indicate that the cellulosic inner layer is most important in resisting the inner turgor pressure driving expansion and in maintaining the cylindrical cell shape. If this cellulose-based infrastructure is compromised, as it is with oryzalin treatment, cell wall integrity and its tensile resistance to turgor-driven pressure are also compromised. This observation closely corresponds to other studies that show that if cellulose infrastructure is altered at an expansion site, the tensile resistance of the cell wall and/or cell shape may be severely altered (Aouer et al., 2009).
What then is the role of the HG, and particularly the prominent HG lattice that covers most of the cell surface, in the structural mechanics of the cell wall? First, the present polysaccharide microarray analysis showed that levels of HG epitopes notably increased following oryzalin treatment. It may be the case that if, as suggested, HG is required for maintaining wall integrity in expanding Penium cells, then the indirect disruption of cellulose synthesis and/or orientation caused by oryzalin treatment led to a compensatory increase in HG synthesis and/or deposition, as has been suggested to occur in land plants (Burton et al., 2000; Bischoff et al., 2008). If so, this is reminiscent of the effect of the cellulose inhibitor isoxaben on cell cultures, which causes a disruption of the cellulose crystallinity and a, presumably compensatory, increase in HG. In this regard, it is probably significant that the polysaccharide microarray analysis showed a change in the binding of 2F4, which suggests that the effect of oryzalin is not just to induce production of HG per se, but rather the production of HG with sufficient contiguous non-methyl-esterified GalA residues to participate in the structurally important process of Ca2+ cross-linking. It should be noted that oryzalin may also exert a direct effect on the activity of PME, similar to its action on other wall enzymes (Vissenberg et al., 2005), which, in turn, affects its remodelling of secreted HG. However, while oryzalin treatment causes significant alteration to the HG production levels and the lattice infrastructure is disrupted at the swollen isthmus region, comparable experiments with PL pre-treatment followed by oryzalin treatment do not result in further cell swelling or rupture, as observed with cellulase pre-treatment (Fig. 7). This indicates that the HG is not primarily responsible for maintaining the structural integrity of the wall or cell shape at the isthmus band.
While further work is needed to resolve the role of the HG, it is suggested that the HG lattice may represent a network of reinforcing struts that are needed to support the large expanse of the elongate cylindrical shape of Penium. Struts are mechanical devices often organized in regular networks that are embedded in the external edifice of a structure, functioning to reinforce the integrity of structures that have large longitudinal axes (e.g. cylinders). While not serving as the main structural framework, they nonetheless help maintain elongate structures. Further biomechanical studies will be needed to confirm the role of the HG lattice and elucidate the tensile strength of the cellulose domain. It is also possible that the HG lattice does not affect wall rigidity but may function in cell adhesion. An interesting area of future research will be to determine whether these domains, as well as other wall components such as RG-I, consistently show common organizations and functions in the walls of CGA and land plants. | 2017-04-14T13:47:05.360Z | 2013-11-27T00:00:00.000 | {
"year": 2013,
"sha1": "c6893294d8018f90be747639acfbc0c6096fb8cf",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jxb/article-pdf/65/2/465/18044058/ert390.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c6893294d8018f90be747639acfbc0c6096fb8cf",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
237693008 | pes2o/s2orc | v3-fos-license | Memristive electromagnetic induction effects on Hopfield neural network
Due to the existence of membrane potential differences, electromagnetic induction flows can be induced in the interconnected neurons of a Hopfield neural network (HNN). To express the induction flows, this paper presents a unified memristive HNN model using hyperbolic-type memristors to link neurons. By employing theoretical analysis along with multiple numerical methods, we explore the electromagnetic induction effects on the memristive HNN with three neurons. Three cases are classified and discussed. When using one memristor to link two neurons bidirectionally, the coexisting bifurcation behaviors and extreme events are disclosed with respect to the memristor coupling strength. When using two memristors to link three neurons, the antimonotonicity phenomena of periodic and chaotic bubbles are yielded, and the initial-related extreme events emerge. When using three memristors to link three neurons end to end, the extreme events exhibiting prominent riddled basins of attraction are demonstrated. In addition, we develop the printed circuit board (PCB)-based hardware experiments by synthesizing the memristive HNN, and the experimental results well confirm the memristive electromagnetic induction effects. Certainly, the PCB-based implementation will benefit the integrated circuit design for large-scale Hopfield neural networks in the future.
Introduction
Memristor, a known nonlinear circuit element, is defined by Leon O. Chua for describing the relationship between flux and charge [1]. By virtue of the quasi-static expansion of Maxwell's equations, an electromagnetic field interpretation of this unique relationship has been presented. To date, the memristor has been applied in wide scientific domains due to its distinct natures, such as its nanoscale dimension [2], nonlinearity [3,4], and synaptic plasticity effect [5]. In neuroscience, from the point of view of electricity, we know that one neuron can be seen as a multichannel input- and output-signal processor, or a non-autonomous nonlinear system that can modulate the external stimulus and thereby respond to it [6,7]. Also, a synapse can be seen as a two-port memristive device that connects two systems so as to realize the complex memory transmission characteristic [8]. Thus, memristor-based neurons or neural networks are now playing a vital role in neuromorphic computation and brain-like applications [9][10][11].
In the past few years, availing of the memristor to express the electromagnetic induction induced by membrane potential or electromagnetic radiation has been a hot topic. In [11], Ma et al. suggested that due to the transformation of intercellular and extracellular ion concentrations or the differences in the spatial distribution of ions, the membrane potential of a neuron would fluctuate, thereby inducing time-varying electromagnetic flows, the effects of which could be imitated by a flux-controlled memristor coupled with a neuron. On this account, some memristive neuron models were proposed, from which mode transition or selection [6,12], synchronous behaviors [13,14], spatiotemporal patterns [15,16], and coexisting modes [17] are uncovered profoundly.
For example, in [6], to describe the membrane potential of the Hindmarsh-Rose (HR) neuron model under the electromagnetic induction, Lv et al. constructed a memristive HR neuron model, where different electric modes of bifurcation, spiking, and chaotic bursting states were observed. In [16], based on a FitzHugh-Nagumo (FHN) neuron model, Takembo et al. constructed an n-neuron FHN chain network model under electromagnetic radiation, and the dynamical simulations proved that the function of the brain may be impaired when it is driven by external electromagnetic environments with strong radiation intensities.
Many researchers not only concentrate on the dynamical effects of a single neuron but also explore interactions of neurons in a network. In a neural network, electromagnetic induction flows can be induced when membrane potential differences exist between each two interconnected neurons, whose effects are equivalent to the bi-directional induced currents generated by a flux-controlled memristor linking each two neurons [8,18,19]. Accordingly, paying attention to the electromagnetic induction effects on a unified network is a pressing question.
Manifold dynamics of biological neurons and neural networks are concerned for further understanding the complex nonlinear structures and functional behaviors of the brain [11,20,21]. Different from biological neurons, the conductance-independent artificial neural network has received more and more attention for its high degree of flexibility and practicability. The Hopfield neural network (HNN) is a classical neural network possessing a simple algebraic expression but capable of displaying complex dynamical states, and it has been widely applied in numerous domains [22][23][24]. Because the dynamical behaviors are closely related to its applications, over the past years, a number of modified HNN models have been proposed, including fractional-order HNN models [23], time-delayed HNN models [25], and hidden HNN models [26], and multiple dynamical characterizations have been revealed accordingly.
By contrast, the memristive HNN model brings some new views for cognizing the brain, and it has received long-term attention from scholars. Because some properties of the memristor bear striking resemblance to the synaptic plasticity of neurons, by replacing the resistive weight with the memristive synaptic weight, some memristive HNN models were established to achieve variable connection weights for neurons [26,27]. Subsequently, in recent years, considering the complex electromagnetic environment, neural networks under electromagnetic radiation were reported [28,29]. When considering the membrane potential difference between two interconnected neurons in HNN, a memristive HNN model with the electromagnetic induction was proposed. Moreover, the authors of [30,31] discussed a memristive HNN model with two neurons under the action of electromagnetic induction, where coexisting behaviors triggered by different initial conditions were revealed and validated by hardware experiments. Subsequently, the authors of [32] focused on the initial-sensitive dynamics in a memristive HNN model with three neurons when only considering the electromagnetic induction flows induced by the membrane potential difference between two neurons. Based on commercially available discrete components, analog implementations have been developed for the above-mentioned memristive HNN models [26][27][28][29][30][31][32]. Of course, based on digital circuit platforms such as DSP [33] and FPGA [34,35], the memristive HNN model can also be fabricated physically to verify the numerical simulations.
Nevertheless, little is known about the electromagnetic induction on HNN induced by the membrane potential differences of multiple neurons. Accordingly, it is necessary to establish a unified memristive HNN model to express the electromagnetic induction effects, which has not been reported until now.
In this paper, based on the hyperbolic-type memristor, a unified memristive HNN model is presented. For simplicity, a classical tri-neuron HNN model is taken as an example, based on which memristor-coupled HNN model with three cases is considered in succession. Interesting dynamical effects and intricate dynamical evolutions are uncovered. The main contributions for this paper are threefold. (1) A unified memristive HNN model is presented, and its boundedness is proved theoretically. (2) Multiple dynamical methods are employed to numerically reveal the bifurcations and coexisting attractors' behaviors, which is helpful to mimic the real dynamical behaviors of collective neurons and to cognize the brain. (3) PCB-based memristive HNN circuit experiments are developed, and the results well confirm these dynamical effects.
The remaining contents are listed as follows. In Sect. 2, a unified memristive HNN model is established, and its boundedness is proved. In Sect. 3, memristive electromagnetic induction effects on HNN are numerically revealed. In Sect. 4, an electronic neuron circuit platform is built, and the dynamical effects are validated. And lastly, we summarize our work in Sect. 5.
Memristor-coupled HNN model
In this section, availing of an example of the HNN model and a threshold hyperbolic-type memristor model, we construct a unified memristive HNN model to express the electromagnetic induction effects. Besides, the uniform boundedness of the presented model is proved theoretically.
An example of the HNN model
The mathematical model of a Hopfield neural network (HNN) with n neurons is generally described as

dX/dt = -X + W tanh(X) + I, (1)

where X = [x1, x2, ..., xn]^T represents the n-neuron membrane potentials, W is an n × n synaptic weight matrix, and I = [i1, i2, ..., in]^T is an external current matrix.
The tri-neuron HNN has been widely studied. Thus, an example of HNN, with a 3 × 3 asymmetric synaptic weight matrix, can be referred to [36]. Herein, to facilitate the following analysis and calculation, two minute values, including the inter-connection weight w22 and self-connection weight w33, are neglected, and the weight w21 is adjusted as 2.8. A simplified synaptic weight matrix is thereby denoted as in (2). Utilizing the weight matrix (2) and taking no account of the external currents, the numerical simulations of the HNN model are shown in Fig. 1. As can be seen from Fig. 1a, the phase portraits of two symmetric period-1 limit cycles initiated from the initial conditions (-0.01, 0, 0) (red) and (0.01, 0, 0) (blue) are coexisting in the x1-x2-x3 phase space. Besides, as shown in Fig. 1b, two attracting domains depicted by local attraction basins are located in the x1(0)-x2(0) initial plane, where 'LP1' and 'UP1' represent the lower period-1 and upper period-1 behaviors, respectively. As a result, this HNN model takes on bistable period-1 behaviors.
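The coexisting period-1 behaviors can be checked with a minimal numerical sketch, given below. It integrates the classical HNN equation (1) from the two symmetric initial conditions. Note that only w21 = 2.8, w31 = -6.6, w32 = 1.3, and w22 = w33 = 0 are fixed by the text; the first-row entries of W and w23 used below are assumed placeholders, so the sketch illustrates the procedure rather than reproducing Fig. 1 exactly.

```python
import numpy as np
from scipy.integrate import solve_ivp

# w21, w31, w32 and the zero entries follow the text; the first row and w23
# are ASSUMED placeholder values for illustration only.
W = np.array([[ 3.8, -1.9,  0.7],
              [ 2.8,  0.0,  1.0],
              [-6.6,  1.3,  0.0]])

def hnn(t, x):
    # dX/dt = -X + W tanh(X), external currents neglected as in the text
    return -x + W @ np.tanh(x)

for x0 in ([-0.01, 0.0, 0.0], [0.01, 0.0, 0.0]):
    sol = solve_ivp(hnn, (0.0, 500.0), x0, max_step=0.01)
    tail = sol.y[:, sol.t > 400.0]           # discard the transient
    print(x0, "-> x1 on attractor in [%.3f, %.3f]" % (tail[0].min(), tail[0].max()))
```

With a weight matrix that actually produces bistability, the two runs settle onto mirror-image limit cycles whose x1 ranges do not coincide, which is the numerical signature of the LP1/UP1 pair.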
Unified memristive HNN model
Referring to [18], a monotone, differentiable, and threshold memristor is used to express the electromagnetic induction, whose mathematical form is written as

I_M = kG(u)V_M = k tanh(u) V_M,

where k, G(u) = tanh(u), V_M, and I_M stand for the memristor coupling coefficient, memductance function, input voltage of memristor, and output current of memristor, respectively. In this expression, V_M and I_M stand for the membrane potential difference between two interconnected neurons and the induced current flowing through the memristor.
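As a quick sanity check on the element itself, the sketch below drives the hyperbolic-type memristor with a sinusoidal voltage and traces its current-voltage loop; the pinched hysteresis through the origin is the memristive fingerprint. The inner-state equation du/dt = V_M - u used here is an assumption inferred from the equilibrium relations of the network model quoted later in the text, and the drive parameters are arbitrary illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

k, A, f = 0.5, 2.0, 0.2                       # illustrative drive parameters (assumed)
vM = lambda t: A * np.sin(2 * np.pi * f * t)  # sinusoidal membrane-potential difference

# Inner flux state: du/dt = vM - u (ASSUMED form, see lead-in)
sol = solve_ivp(lambda t, u: [vM(t) - u[0]], (0.0, 30.0), [0.0], max_step=1e-3)
t, u = sol.t, sol.y[0]
iM = k * np.tanh(u) * vM(t)                   # I_M = k G(u) V_M with G(u) = tanh(u)

plt.plot(vM(t), iM)                           # loop pinched at the origin
plt.xlabel("v_M"); plt.ylabel("i_M")
plt.show()
```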
Using one flux-controlled memristor model to link each two neurons bidirectionally and taking no account of the external currents, a unified memristive HNN mathematical model with n neurons and n memristor arrays can be established as

dX/dt = -X + W tanh(X) + K V_M tanh(U),
dU/dt = V_M - U, (4)

where K and V_M denote the memristor coupling strength matrix and the membrane potential difference matrix. For n = 3, the membrane potential difference matrix is V_M = (x1 - x2, x2 - x3, x3 - x1)^T and K collects the three memristor coupling strengths k1, k2, and k3, as given in (5). Note that, the hyperbolic-type memristor is used to express the electromagnetic induction induced by the membrane potential difference between two interconnected neurons. Thus, in (4), the KV_M tanh(U) term can be regarded as the induction current of the memristor. Besides, U = (u1, u2, u3)^T represents a magnetic flux matrix in the memristor array.
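A minimal numerical sketch of the three-neuron form of model (4) follows. The flux equation du_i/dt = v_Mi - u_i is inferred from the equilibrium relations g_u1 = g1 - g2, etc., quoted later in the text, while the +/- routing of each induction current into its two neurons, the first row of W, and w23 are assumptions made for illustration only.

```python
import numpy as np

# Fixed weights per the text; first row and w23 are ASSUMED placeholders.
W = np.array([[ 3.8, -1.9,  0.7],
              [ 2.8,  0.0,  1.0],
              [-6.6,  1.3,  0.0]])

def memristive_hnn(t, y, k=(0.12, 0.0, 0.0)):
    """Right-hand side of the assumed three-neuron form of model (4)."""
    x, u = y[:3], y[3:]
    vm = np.array([x[0] - x[1], x[1] - x[2], x[2] - x[0]])  # V_M entries
    im = np.asarray(k) * vm * np.tanh(u)                    # induction currents
    dx = -x + W @ np.tanh(x)
    # two-way currents: each memristor injects +/- i_M into its two neurons (sign ASSUMED)
    dx += np.array([-im[0] + im[2], im[0] - im[1], im[1] - im[2]])
    return np.concatenate([dx, vm - u])                     # dU/dt = V_M - U
```

The function signature accepts the coupling strengths as a tuple so that the three cases discussed below (k2 = k3 = 0; k3 = 0; k1 = k2 = k3) can all be simulated with the same right-hand side.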
To intuitively express the electromagnetic induction flows induced by potential differences between the interconnected neurons, the abridged general view of the connection topology for the memristive HNN model with three neurons is depicted in Fig. 2, where the two-way induction currents are flowing through one memristor to mimic the electromagnetic induction flows. Therefore, each memristor is used to link two neurons bidirectionally, and six induction currents ± I Mi (i = 1, 2, 3) are yielded thereby.
In this paper, memristor is used to express the electromagnetic induction induced by membrane potential difference between two neurons. In neuroscience, as a matter of fact, a memristor can also be employed to express the synaptic plasticity of neurons, i.e., to replace the resistive weight with the memristive synaptic weight, and to express the electromagnetic induction induced by the external electromagnetic radiation or the inner membrane potential of neurons. To sum up, several examples of three expressions of the memristive HNN models are listed in Table 1.
As can be seen from Table 1, due to the different nonlinear properties of the memductance function, various memristor models can be adopted to construct the memristive HNN model. Besides, when using a memristor to express the synaptic plasticity effect, the memristive synaptic weight is a scalar, whereas when using a memristor to express the electromagnetic induction effect, the induced current acts as a directional coupling between neurons. In this paper, we use the nonideal memristors to connect neurons end to end; thus, the induction currents are the bi-directional vectors. In summary, the connection topologies for the six memristive HNN models involved in [26][27][28][29][30][31] are drawn in Fig. 3, where the memristors in the memristive HNN models are connected in different ways.
Model uniform boundedness
Boundedness is a vital property of a nonlinear dynamical system. In this paper, for n = 3, uniform boundedness of the model (4) is deduced in theory, proving that all the motions, including chaotic motions, are trapped into a bounded region.
1. Basic Definition of Uniform Boundedness: Consider a general nonlinear dynamical system

dY/dt = h(t, Y),

where h: R+ × B → R^n is continuous, and B ⊂ R^n is a domain that contains the origin.
2. Uniform Boundedness Analysis: Denote Y = [X, U]^T and take

A = [-I, 0; B, -I],

where I is a unit matrix, A is the linearized matrix of (4), and B can be regarded as the linearized matrix of the state equation of U against the state variable X, which is a 3 × 3 matrix denoted as

B = [1, -1, 0; 0, 1, -1; -1, 0, 1].

Then, the memristive HNN model (4) is rewritten by

dY/dt = AY + g(Y). (10)

For the initial condition Y(t0), by the variation of parameters formula, any solution Y(t) of system (10) can be written as

Y(t) = exp(A(t - t0)) Y(t0) + ∫ from t0 to t of exp(A(t - s)) g(Y(s)) ds.

It is easy to know that all the characteristic roots of the constant matrix A have negative real parts, so there exist positive constants L and a with ||exp(A(t - t0))|| ≤ L exp(-a(t - t0)). If there is a constant D with |g(Y)| ≤ D, then for t ≥ t0 we have

||Y(t)|| ≤ L exp(-a(t - t0)) ||Y(t0)|| + LD/a.

Therefore, it can be concluded that the tri-neuron memristor-coupled HNN model (4) is uniformly bounded, i.e., all of its motions are trapped into a bounded region.

Case I: one memristor linking two neurons

Firstly, one memristor M1 that connects neurons 1 and 2 is taken into account; the memristor coupling strength matrix in this case keeps only k1 as a nonzero coupling strength (k2 = k3 = 0), where k1 stands for the memristor coupling strength between neurons 1 and 2. The detailed expressions of the memristive HNN model for Case I follow from (4) accordingly. The connection topology for Case I is depicted in Fig. 4, and the two-way induction current flowing through M1 is used to mimic the electromagnetic induction flow induced by neurons 1 and 2.
Stability of the equilibrium point is also a vital property for the HNN dynamical system, which is closely related to its application [38]. Setting the left side of (4) to zero and configuring the equilibrium points as P (g1, g2, g3, gu1, gu2, gu3), we obtain g3 = -6.6 tanh(g1) + 1.3 tanh(g2), gu1 = g1 - g2, gu2 = g2 - g3, and gu3 = g3 - g1.
Due to the difficulty of obtaining arithmetic solutions, a graphic analysis method is employed to obtain the analytical solutions of (15) using the MATLAB platform. When the memristor coupling strength k1 is set as 0.12 and 0.18, respectively, the equilibrium points P can be determined by examining the intersections of functions, as shown in Fig. 5.
As observed from Fig. 5, when g1 and g2 are in the regions [-0.6, 0.6] and [-0.9, 0.7], the black H1 curve with its determined function remains unchanged, but the red and blue H2 curves involve two different values of k1. Therefore, three examined intersections, including one zero equilibrium point P0 as well as two nonzero equilibrium points P1 and P2, can be precisely calculated.
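The graphical intersection analysis can also be reproduced numerically. The sketch below substitutes g3 and the equilibrium flux u1 = g1 - g2 into the neuron-1 and neuron-2 balance equations and solves the reduced two-dimensional system with a multi-start root search. As before, the first row of W, w23, and the sign routing of the induced current are assumptions, so the recovered equilibria are illustrative of the procedure rather than a reproduction of Fig. 5.

```python
import numpy as np
from scipy.optimize import fsolve

W = np.array([[3.8, -1.9, 0.7], [2.8, 0.0, 1.0], [-6.6, 1.3, 0.0]])  # row 1, w23 assumed

def reduced(g, k1):
    g1, g2 = g
    g3 = -6.6 * np.tanh(g1) + 1.3 * np.tanh(g2)   # neuron-3 balance, as in the text
    u1 = g1 - g2                                  # equilibrium flux of M1: g_u1 = g1 - g2
    iM = k1 * (g1 - g2) * np.tanh(u1)             # induced current at equilibrium
    th = np.tanh([g1, g2, g3])
    return [-g1 + W[0] @ th - iM,                 # neuron-1 balance (coupling sign assumed)
            -g2 + W[1] @ th + iM]                 # neuron-2 balance

rng = np.random.default_rng(1)
for k1 in (0.12, 0.18):
    roots = set()
    for guess in rng.uniform(-1.0, 1.0, size=(200, 2)):   # multi-start root search
        sol, _, ier, _ = fsolve(reduced, guess, args=(k1,), full_output=True)
        if ier == 1:
            roots.add(tuple(np.round(sol, 4)))
    print(f"k1 = {k1}: candidate equilibria (g1, g2) = {sorted(roots)}")
```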
In Fig. 6a, with the increase in the memristor coupling strength k1 in the region [0.08, 0.18], the memristive HNN model exhibits globally periodic states when selecting two sets of initial conditions (± 0.4, 0, 0, 0, 0, 0). By contrast, the memristive HNN model has a reverse period-doubling bifurcation route to chaos when considering two other sets of initial conditions. Taking the memristive HNN model with the initial conditions (0.5, 0, 0, 0, 0, 0) as an example, its orbit starts with period-1, enters into chaos at k1 = 0.109 via a chaos crisis, and then degrades into period-6 at k1 = 0.144 and period-3 at k1 = 0.179 successively via reverse period-doubling bifurcations. In addition, some periodic windows and chaos crisis scenarios can also be found in the chaotic regions. Furthermore, there are at least three different attractors' states coexisting in the memristive HNN model for a determined memristor coupling strength, demonstrating that multistable patterns appear in Case I [30]. For k1 = 0.12, the phase portraits initiated by four sets of initial conditions are plotted in Fig. 7. The results show that multiple attractors with different locations and topological structures coexist in the phase space, including a lower period-1 limit cycle, an upper period-1 limit cycle, a period-8 limit cycle, and a spiral chaotic attractor. The local attraction basins can be used to better explore the influences of initial conditions on the model (4) in Case I. Two representative examples for k1 = 0.12 and k1 = 0.18 with (x3(0), x4(0), x5(0), x6(0)) = (0, 0, 0, 0) are shown in Fig. 8a and b, where seven colors represent different types of attractors. Here, LP1, UP1, P02, P03, P04, P08, and CH represent lower period-1, upper period-1, period-2, period-3, period-4, period-8, and chaos, respectively, indicating the coexisting multistable patterns. As observed from Fig. 8a, four types of attractors are revealed when k1 = 0.12 is fixed, and orange, blue, and banded yellow regions dominate the initial plane.
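A one-parameter scan of the kind shown in Fig. 6a can be sketched as below: sweep k1, integrate from fixed initial conditions, and plot the local maxima of x1 past the transient. The right-hand side carries the same assumptions (W's first row, w23, sign routing) as the earlier sketches, so the bifurcation points will not match the quoted values exactly.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import argrelmax
import matplotlib.pyplot as plt

W = np.array([[3.8, -1.9, 0.7], [2.8, 0.0, 1.0], [-6.6, 1.3, 0.0]])  # row 1, w23 assumed

def rhs(t, y, k1):
    x, u = y[:3], y[3:]
    vm = np.array([x[0] - x[1], x[1] - x[2], x[2] - x[0]])
    im = np.array([k1, 0.0, 0.0]) * vm * np.tanh(u)   # only M1 active in Case I
    dx = -x + W @ np.tanh(x) + np.array([-im[0] + im[2], im[0] - im[1], im[1] - im[2]])
    return np.concatenate([dx, vm - u])

for k1 in np.linspace(0.08, 0.18, 101):
    sol = solve_ivp(rhs, (0.0, 900.0), [0.5, 0, 0, 0, 0, 0], args=(k1,), max_step=0.02)
    x1 = sol.y[0, sol.t > 700.0]                      # discard the transient
    peaks = x1[argrelmax(x1)[0]]                      # sample the local maxima
    plt.plot(np.full_like(peaks, k1), peaks, ",k")
plt.xlabel("k1"); plt.ylabel("local maxima of x1")
plt.show()
```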
In addition, the riddled basins of attraction are displayed in small regions, implying that the model in this case is sensitive to the initial conditions and the extreme events are yielded [40][41][42][43][44]. As shown in Fig. 8b, when k1 increases to 0.18, complex stability evolutions happen, such that period-2 with a riddled domain is embedded in period-4. As a result, the memristive HNN model displays coexisting multistable patterns related to the initial conditions.
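A local attraction basin like Fig. 8 can be approximated by classifying the attractor reached from each point on a grid of (x1(0), x2(0)) values. The classifier below is a crude proxy that counts distinct peak heights of x1; a proper classification (as used in the paper's basins) would track the full state-space periodicity. Same assumed right-hand side as above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import argrelmax

W = np.array([[3.8, -1.9, 0.7], [2.8, 0.0, 1.0], [-6.6, 1.3, 0.0]])  # row 1, w23 assumed

def rhs(t, y, k=(0.12, 0.0, 0.0)):
    x, u = y[:3], y[3:]
    vm = np.array([x[0] - x[1], x[1] - x[2], x[2] - x[0]])
    im = np.asarray(k) * vm * np.tanh(u)
    dx = -x + W @ np.tanh(x) + np.array([-im[0] + im[2], im[0] - im[1], im[1] - im[2]])
    return np.concatenate([dx, vm - u])

def periodicity(x10, x20, tol=1e-2):
    """Crude attractor label: 0 = point, n = n distinct peak heights, -1 = chaos."""
    sol = solve_ivp(rhs, (0.0, 1000.0), [x10, x20, 0, 0, 0, 0], max_step=0.02)
    x1 = sol.y[0, sol.t > 800.0]
    peaks = x1[argrelmax(x1)[0]]
    if peaks.size == 0:
        return 0
    distinct = np.unique(np.round(peaks / tol))
    return int(distinct.size) if distinct.size <= 8 else -1

grid = np.linspace(-1.0, 1.0, 41)       # coarse and slow; refine to see riddled regions
basin = np.array([[periodicity(a, b) for a in grid] for b in grid])
print(basin)
```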
It should be noted that the extreme events are related to many contexts such as tsunamis, earthquakes, tornadoes, market crashes, and human brain seizures [40]. For a dynamical system, an extreme event can be defined as a recurrent and rare event on which an appropriate variable exhibits an unusual behavior [41]. Therefore, an extreme event can be effectively exhibited by the riddled basin of attraction in such a dynamical system [42][43][44].

Case II: two memristors linking three neurons

Two memristors M1 and M2 used to link three neurons are taken into account in Case II. Hence, the memristor coupling strength matrix keeps k1 and k2 as its nonzero entries (k3 = 0), where k1 and k2 are two parameters representing two different memristor coupling strengths. The detailed expressions of the memristive HNN model for Case II follow from (4) accordingly. The connection topology for Case II is depicted in Fig. 9, and the two-way induction currents flowing through M1 and M2 are used to mimic the electromagnetic induction flows; the two-dimensional bifurcation behaviors in the k1-k2 parameter plane are shown in Fig. 10a. When setting four representative values of k1 as 0.08, 0.11, 0.15, and 0.18, respectively, the one-dimensional bifurcation diagrams are plotted with respect to k2, as shown in Fig. 10b. Notably, the two-dimensional parameter plane is based on the ODE23 (built-in MATLAB) algorithm and painted by different colors according to the periodicities of the membrane potential x3 [17,45].
As glanced from Fig. 10b, the phenomena of antimonotonicity appear distinctly. For fixed k1 = 0.08, when increasing k2 in the region [-0.005, 0.065], the orbit of the memristive HNN model in Case II begins with period-1, goes into period-2 and period-4 via the forward period-doubling bifurcations, then degrades into period-2 and period-1 via the reverse period-doubling bifurcations, and finally settles down to a stable point at k2 = 0.0614. When increasing k1 from 0.08 to 0.18, the period-4, period-8, period-12, and chaos bubbles are formed, respectively. Thus, the forward and reverse period-doubling bifurcation routes are obviously visible in the bifurcation processes for the model (4), which are affected by the two memristor coupling strengths.
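The bubble structure can be traced with a short sweep that prints a periodicity proxy (distinct x3 peak heights) as k2 increases at fixed k1. Under the same assumptions as the earlier sketches, a forward-then-reverse period-doubling bubble reads as a symmetric sequence such as 1-2-4-2-1 followed by 0 at the stable point.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import argrelmax

W = np.array([[3.8, -1.9, 0.7], [2.8, 0.0, 1.0], [-6.6, 1.3, 0.0]])  # row 1, w23 assumed

def rhs(t, y, k):
    x, u = y[:3], y[3:]
    vm = np.array([x[0] - x[1], x[1] - x[2], x[2] - x[0]])
    im = np.asarray(k) * vm * np.tanh(u)
    dx = -x + W @ np.tanh(x) + np.array([-im[0] + im[2], im[0] - im[1], im[1] - im[2]])
    return np.concatenate([dx, vm - u])

k1 = 0.08
for k2 in np.linspace(-0.005, 0.065, 15):
    sol = solve_ivp(rhs, (0.0, 900.0), [0.5, 0, 0, 0, 0, 0],
                    args=((k1, k2, 0.0),), max_step=0.02)
    x3 = sol.y[2, sol.t > 700.0]
    peaks = x3[argrelmax(x3)[0]]
    period = int(np.unique(np.round(peaks, 2)).size) if peaks.size else 0
    print(f"k2 = {k2:+.4f}: period proxy = {period}")
```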
Case III: three memristors linking three neurons end to end

Considering the special situation in Case III that the memristive electromagnetic inductions between the three neurons are the same, i.e., k1 = k2 = k3 = k, k is chosen as a single adjustable parameter in Case III. For this special situation, the memristive HNN model can be rewritten as in (18), where k is this single adjustable parameter and A is a constant matrix. Note that the other matrices in (18) are exactly the same as those used in (4). Besides, the detailed expressions of the memristive HNN model for Case III under this special situation can be described accordingly. Three sets of initial conditions are set as (0.01, 0, 0, 0, 0, 0), (-0.01, 0, 0, 0, 0, 0), and (-0.07, 0, 0, 0, 0, 0), respectively. When increasing k from 0 to 0.04, the bifurcation plots with respect to k are drawn in Fig. 13. As can be seen, the phenomena of cascaded chaotic bubbles, chaos crisis scenarios, and coexisting bifurcations emerge in the memristive HNN model. Specifically, when setting the initial conditions as (-0.07, 0, 0, 0, 0, 0), the forward and reverse period-doubling bifurcation routes are also visible in the bifurcation processes. Accordingly, when we endow four sets of the parameter k for the model (4) in Case III, the phase portraits of a chaotic attractor, periodic limit cycles, and a stable point are plotted in Fig. 14.
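As a complementary diagnostic (not one used in the paper, which classifies states by periodicity and basins), chaotic bubbles can be separated from periodic windows by estimating the largest Lyapunov exponent with a two-trajectory Benettin-style scheme, as sketched below. A positive estimate indicates chaos, a negative one a periodic orbit or stable point. Same assumed right-hand side as the earlier sketches.

```python
import numpy as np
from scipy.integrate import solve_ivp

W = np.array([[3.8, -1.9, 0.7], [2.8, 0.0, 1.0], [-6.6, 1.3, 0.0]])  # row 1, w23 assumed

def rhs(t, y, k):
    x, u = y[:3], y[3:]
    vm = np.array([x[0] - x[1], x[1] - x[2], x[2] - x[0]])
    im = k * vm * np.tanh(u)                          # k1 = k2 = k3 = k in this situation
    dx = -x + W @ np.tanh(x) + np.array([-im[0] + im[2], im[0] - im[1], im[1] - im[2]])
    return np.concatenate([dx, vm - u])

def largest_lyapunov(k, y0, d0=1e-8, dt=1.0, n=800, skip=200):
    a = np.array(y0, float)
    v = np.random.default_rng(0).standard_normal(6)
    b = a + d0 * v / np.linalg.norm(v)                # perturbed companion trajectory
    s = 0.0
    for i in range(n):
        a = solve_ivp(rhs, (0.0, dt), a, args=(k,), max_step=0.01).y[:, -1]
        b = solve_ivp(rhs, (0.0, dt), b, args=(k,), max_step=0.01).y[:, -1]
        d = np.linalg.norm(b - a)
        if i >= skip:                                 # skip the transient segment
            s += np.log(d / d0)
        b = a + (b - a) * (d0 / d)                    # renormalize the separation
    return s / ((n - skip) * dt)

for k in (0.005, 0.02, 0.04):
    print(k, largest_lyapunov(k, [-0.07, 0, 0, 0, 0, 0]))
```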
In this special situation of Case III, to uncover the initial-dependent dynamics, the value of k is kept as 0.005. The local attraction basin is plotted in Fig. 15, and the coexisting tri-stable patterns are displayed, such as the blue period-1 (P01), light cyan period-2 (P02), and red chaos (CH). Note that all the red regions are fully mixed and riddled in the light cyan periodic regions, meaning the occurrence of the rare, recurrent, and irregular dynamics of the extreme events.
It is necessary to point out that, using numerous simulation methods, the rich and complex dynamical behaviors can be numerically revealed in the memristive HNN model when the aforementioned memristor coupling strengths for each case are chosen. In other words, the dynamical effects of bistability, multistability, and extreme events can be clearly observed from the local basins of attraction, and the dynamical effects of antimonotonicity can be viewed from the bifurcation diagrams. Summarily, the dynamical effects on the memristive HNN model given in (4) are listed in Table 2. As can be seen, when the number of memristors increases from zero to two, the number of coexisting attractors goes from two to four and to five. However, when the memristive HNN model involves three memristors with the aforesaid special situation, the coexisting tri-stable patterns with the extreme events appear distinctly. Besides, the antimonotonicity behavior is displayed in Case II, which is related to the two memristor coupling strengths. Notably, when the three memristors utilized in the memristive HNN model are endowed with three different memristor coupling strengths, some more intriguing and intricate dynamical effects need to be further investigated.
PCB-based analog circuit validation
Electronic neuron circuit is nowadays regarded as an excellent artificial block to implement VLSI applications in neuromorphic computing [46]. It can be achieved in three ways, namely analog, digital, and hybrid analog/digital circuits [47][48][49][50]. In this section, the memristive HNN model is implemented in analog circuit, and the PCB-based hardware experiments are carried out to validate the memristive electromagnetic induction effects.
Circuit synthesis for the memristive HNN model
Based on the unified memristive HNN model given in (4), an electronic neuron circuit can be implemented, and its circuit equations can be described accordingly, where v and v_U are n × 1 voltage variable matrices, and the integrating time constant τ = RC = 10 kΩ × 10 nF = 0.1 ms. In addition, for n = 3, on account of the synaptic weight matrix W, memristor coupling strength weight K, and matrix V_M in (5), three 3 × 3 resistance arrays are presented as in (22a-22c), where R_W represents the determined resistance array, R_K represents the adjustable resistance array whose values change with different cases, and v_VM represents the voltage matrix of membrane potential difference. Notice that, because the resistances are positive in the real circuit, the negative signs in (22a-22c) are adjusted by changing the connection way of the circuit. According to (22a-22c), the resistances in R_W are configured as R1 = R/3.8 = 2.6316 kΩ, R2 = R/1.9 = 5.2632 kΩ, R3 = R/0.7 = 14.2857 kΩ, R4 = R/2.8 = 3.5714 kΩ, R5 = R/1 = 10 kΩ, R6 = R/6.6 = 1.5152 kΩ, and R7 = R/1.3 = 7.6923 kΩ, respectively. The circuit schematic is designed synthetically using Multisim 12.0 software, the screenshot of which can be seen in Fig. 16, where three cases of circuit modules can be controlled by a six-pin DIP switch with S1-S3 keys. Therefore, the states of these three keys for the electronic neuron circuit and the theoretical resistances for R_K are summarized in Table 3. Besides, three different functional circuits denoted by fourteen hierarchical blocks (HBs) are shown in Fig. 16, where six HBs from '-T1' to '-T6' represent the hyperbolic tangent function circuit modules with negative output, five HBs from '-H1' to '-H5' represent the inverting operation circuits, and three HBs from 'I1' to 'I3' represent the subtraction operation circuits. Notice that global connectors are employed to connect the common port for simplifying the circuital connection.
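The mapping from dimensionless weights to circuit resistances follows R_i = R/|w_i| with base resistance R = 10 kΩ, and the same rule R_k = R/k gives the adjustable array for a coupling strength k. The short script below reproduces the R_W values and the time constant quoted above; the two coupling strengths shown are the Case I examples.

```python
R = 10e3                                    # base resistance R = 10 kOhm
tau = R * 10e-9                             # integrating time constant RC with C = 10 nF
weights = {"R1": 3.8, "R2": 1.9, "R3": 0.7, "R4": 2.8,
           "R5": 1.0, "R6": 6.6, "R7": 1.3}
for name, w in weights.items():
    print(f"{name} = R/{w} = {R / w / 1e3:.4f} kOhm")
print(f"tau = RC = {tau * 1e3:.1f} ms")
for k in (0.12, 0.18):                      # example coupling strengths from Case I
    print(f"k = {k}: R_k = R/k = {R / k / 1e3:.2f} kOhm")
```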
PCB-based hardware circuit validation
The designed PCB is made using the Altium Designer 10.0 software. The off-the-shelf commercial components include the bipolar junction transistor MPS2222, operational amplifier TL082CP, analog multiplier AD633JNZ, chip resistors, precision potentiometers, and monolithic ceramic capacitors. The phase portraits are captured by a Tektronix digital oscilloscope in the X-Y mode.
The photograph of the electronic neuron circuit is displayed in Fig. 17, where the HNN circuit module is on the left, and six same and independent -tanh(·) modules are in the right dotted box. 'S' is a six-pin DIP switch that is used to control the circuit connection states. For instance, when S1 and S2 are on and S3 is off in this figure, the circuit is in the state of Case II. As listed in Table 3, when the three keys are all off, the example of the HNN model given in (1) can be denoted by a third-order analog circuit. Besides, when Case I and Case II are classified, the adjustable resistance arrays R_K are configured accordingly, where R_k1 and R_k2 are the two adjustable resistances for each case. When configuring the two resistances in turn, the phase portraits for the first two cases are experimentally captured, as shown in Fig. 18. Note that due to the difficulty in achieving specific capacitor initial voltages in analog circuit, the power supply should be on-and-off switched to endow the initial conditions in the real circuit [31,51]. In Fig. 18, three types of coexisting attractors for Case I and three types of coexisting attractors for Case II can be captured. Thus, the initial-related coexisting behaviors can be effectively realized by circuit simulations [27,30]. Furthermore, when the three keys are all on, the adjustable resistance array R_K is configured as in (22b). For the special situation in Case III, the adjustable resistances need to be the same, i.e., R_k = R_k1 = R_k2 = R_k3 = R/k1 = R/k2 = R/k3. Thus, the experimental phase portraits can be captured and shown in Fig. 19.
As can be seen, dynamical evolutions from chaos to period-2, to period-1, and to a stable point can be readily observed, and the experimental results validate the simulation results given in Fig. 14. Besides, the photograph of the PCB-based hardware circuit for Case III is shown in Fig. 19b, in which the chaotic attractor is captured by the Tektronix digital oscilloscope.
Due to the parasitic resistances and inner interferences in the real circuit, the PCB-based experimental results are basically consistent with the numerical results, though possessing some errors. In general, we can say that the electronic neuron circuit for implementing the memristive HNN model well validates the memristive electromagnetic induction effects on HNN. In the next step, we may try to use simple activation functions in HNN or find some alternative schemes for the complex activation functions so as to investigate the unified HNN model with n neurons.
Conclusions
Using the threshold hyperbolic-type memristors to link the interconnected neurons, this paper presented a unified memristive HNN model to simulate electromagnetic induction effects. The uniform boundedness of the memristive HNN model was deduced in theory, proving that all the motions are trapped into a bounded region. With the consideration of three cases, the dynamical effects were revealed in succession using multifold dynamical analysis methods. In Case I, stability analysis proved that the equilibrium points are all unstable saddle-foci, and numerical simulations disclosed the coexisting bifurcation behaviors, the coexisting multistable patterns, and the extreme events herein. In Case II, the antimonotonicity phenomena appeared with the creation and annihilation of periodic and chaotic bubbles, which was controlled by two memristor coupling strengths. Besides, the extreme events with coexisting five-stable patterns were induced by the initial conditions. In the special situation of Case III, when the three memristor coupling strengths were the same, the extreme event with complex riddled basins took place distinctly, meaning that the memristive HNN model is increasingly sensitive to the initial conditions.
The electronic neuron circuit of the memristive HNN model was constructed by a PCB hardware platform, and the results beautifully validated the numerical simulations for the three cases. Accordingly, the study of the memristive electromagnetic induction effects on HNN not only reveals the interactions of neurons, but also provides a potential application in neuromorphic computation. Furthermore, we emphasize that based on the other examples of memristive HNN models, the electromagnetic induction effects may not be the same and need to be explored in the future.
"year": 2021,
"sha1": "622ad7e10b3ba38d0a4649ecdcf6fe87ba750d9b",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-722277/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "10fa3e057ba23f1b4ed71e457f022a2a943d92c6",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
234000035 | pes2o/s2orc | v3-fos-license | Personal protective equipment preservation strategies in the COVID-19 era: A narrative review
SUMMARY

Background: The COVID-19 pandemic has led to personal protective equipment (PPE) supply concerns on a global scale. While efforts to increase production are underway in many jurisdictions, demand may yet outstrip supply leading to PPE shortages, particularly in low resource settings. PPE is critically important for the safety of healthcare workers (HCW) and patients and to reduce viral transmission within healthcare facilities. A structured narrative review was completed to identify methods for extending the use of available PPE as well as decontamination and reuse.

Methods: Database searches were conducted in MEDLINE and EMBASE for any available original research or review articles detailing guidelines for the safe extended use of PPE, and/or PPE decontamination and reuse protocols prior to September 28, 2020. Grey literature in addition to key websites from the Centers for Disease Control and Prevention (CDC), World Health Organization (WHO), Infection Prevention Association of Canada (IPAC), and the National Health Service (NHS) was also reviewed.

Results: Extended use guidelines support co-locating patients with confirmed COVID-19 within specific areas of healthcare facilities to enable the use of PPE between multiple patients, and reduce PPE requirements outside these areas. Decontamination strategies for N95 respirators and face shields range from individual HCWs using conventional ovens and microwave steam bags at home, to large-scale centralized decontamination using autoclave machines, ultraviolet germicidal irradiation, hydrogen peroxide vapors, or peracetic acid dry fogging systems. Specific protocols for such strategies have been recommended by the US CDC and WHO and are being implemented by multiple institutions across North America. Further studies are underway testing decontamination strategies that have been reported to be effective at inactivating coronavirus and influenza, and on SARS-CoV-2 specifically.

Conclusions: This narrative review summarizes current extended use guidelines and decontamination protocols specific to COVID-19. Preserving PPE through the implementation of such strategies could help to mitigate shortages in PPE supply, and enable healthcare facilities in low resource settings to continue to operate safely for the remainder of the COVID-19 pandemic.
Introduction
As communities continue to work to address the Coronavirus Disease 2019 (COVID-19) global pandemic, health systems are at risk of exhausting supplies of critical personal protective equipment (PPE) such as surgical masks, N95 respirators, face shields, goggles, gowns and gloves, which are instrumental in controlling the transmission of the virus [1]. PPE is critically important for healthcare workers (HCW) to reduce both their risk of contracting the infection and serving as a potential vector for transmission of the Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2) [2]. Preventing infection amongst HCWs is also critically important because, unlike the majority of the population that can practice social distancing, HCWs have many close physical interactions with colleagues and patients on a daily basis, posing an increased infection risk to both other HCWs and vulnerable patients [3]. Evidence from Italy suggests that while HCWs are generally younger and healthier members of the population, many have still been infected. Over 300 HCWs have died from COVID-19 in the United States and the global death toll for HCWs is estimated to be over 1,000 [4,5]. Although infections may have been community-acquired, nosocomial acquisition was responsible for some of these infections, which may have been preventable with adequate supplies of PPE [1,2]. Current modeling suggests that many developed and developing countries around the world may face PPE shortages in the coming months [1,6,7]. PPE shortages will likely be more severe in countries with less developed healthcare systems and fewer resources [8]. Consequently, this structured narrative review provides an overview of key PPE preservation strategies, including conservation, extended use, decontamination and reuse.
Methods
Our search strategy was designed to identify any type of extended use, reuse or preservation of PPE with a focus on medical masks, N95 (or equivalent) respirators, and gowns. A search of two medical bibliographic databases (MEDLINE, EMBASE) was conducted by two authors (KG and DL). A structured narrative review methodology was used because it was best suited to the objective of providing a broad overview of the rapidly evolving literature to aid clinicians and healthcare facilities in making decisions regarding PPE preservation. A systematic review methodology was not used due to the rapidly evolving nature of the literature and to facilitate providing a broad overview of strategies used across PPE types, manufacturers, pathogens, and decontamination methods. Searches in MEDLINE (Ovid 1946 to September 23, 2020) and EMBASE (Ovid 1946 to September 23, 2020) were conducted for any available original research or review articles pertaining to the decontamination, disinfection, recycling or reuse of personal protective equipment in any healthcare setting. Articles pertaining to non-medical grade PPE, the decontamination of non-PPE items such as surfaces within hospitals or medical equipment, simulation studies, and non-English language articles were not included. Literature including guidelines and/or recommendations from the Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), Infection Prevention Association of Canada (IPAC), Health Canada, the National Health Service (NHS) of the United Kingdom, and the European Union (EU), as well as any academic literature referenced by these organizations, was also reviewed. Studies that met inclusion criteria had the relevant data extracted and summarized in tabular form and in the body of the text. The quality assessment of included studies was completed using the QUADAS-2 tool, which is specifically designed to assess risk of bias in studies [11]. Review articles, guidelines from health authorities, and news articles were included in this review, but were not eligible for quality assessment.
Results
A total of 33 studies met inclusion criteria and were included in this narrative review. All 33 focused on decontamination of N95 masks; 11 tested the decontamination strategy on SARS-CoV-2 specifically, with the remaining studies testing decontamination strategies on other viruses such as SARS-CoV-1 and influenza. All but seven of the studies were published since the start of 2020. Thirteen studies described heat-based decontamination strategies, nine described ultraviolet-light-based decontamination strategies, nine described hydrogen-peroxide-based decontamination strategies, and two described peracetic acid dry fogging decontamination strategies.
Extended use recommendations are based on guidelines from international health authorities. There are currently no recommendations regarding the reuse of disposable gowns or gloves. The aforementioned decontamination strategies have not been tested on surgical masks; however, surgical masks can likely be reused safely once they have been placed in an open container for 72 hours or more [9,10]. One study specifically tested ultraviolet light for the decontamination of face shields, but otherwise recommendations for decontamination of face shields and other forms of eye protection are based on guidelines from international health authorities. Table 1 summarizes the results of the quality assessment for each study. Twenty of 27 studies had low risk of bias across all four domains of the QUADAS-2, with 4 studies having one of the four domains rated as a high risk of bias, and 3 studies having high risk of bias for both the index test and reference standard domains (Table 1).
Extended PPE use
Most PPE currently available in health care facilities is designed for single use (i.e., providing a single episode of care to one patient). Under optimal conditions, a gown, surgical mask, face shield or goggles, and gloves would be donned prior to entering the room of a patient on contact and droplet precautions, care would be provided, and all PPE doffed and then either discarded or placed into laundry hampers (e.g., reusable gowns) as appropriate. Restricting PPE usage to one patient assessment in this fashion reduces the risk of any pathogen from that patient being transmitted to others via the contact/droplet route. This approach is particularly important when individual hospital wards contain patients admitted for different medical conditions, because a patient infected with a transmissible pathogen could easily transmit it to other vulnerable patients around them, either directly or indirectly [12]. However, when PPE supplies are strained, as seen during the COVID-19 pandemic when several health care systems completely exhausted PPE supplies and left both HCWs and patients at risk, the risk-benefit of PPE extended use and reuse requires re-assessment. Moreover, when many patients with the same infection are cohorted in hospital wards, pathogen transmission between patients becomes less of a concern, especially if only patients with the same infection are cohorted together. In this scenario, extending the use of PPE past the "one patient at one time" standard is justifiable [9,13].
There are extended use strategies for surgical masks, N95 respirators, face shields, gowns, and gloves (Table 2). The included studies separated emergency departments and inpatient units into zones or designated areas for patients with confirmed/suspected infection and zones for patients unlikely to be infected, to mitigate the risk of disease transmission to uninfected patients under extended PPE use [14]. Further, the studies reported that the success of these extended use strategies was contingent on sufficient training and logistical support for HCWs. For example, they reported emphasizing strict hand hygiene, facilitating mechanisms to minimize the number of times HCWs don/doff PPE (such as for drink/meal breaks), and minimizing transit between high-risk and low-risk areas to reduce the likelihood of infection amongst HCWs.
Surgical masks have been shown to be safe to use between multiple patients who have been confirmed to have COVID-19 [14]. Guidelines suggest that surgical masks should be discarded if they become wet, soiled and/or damaged in any manner. To safely store a surgical mask, HCWs should be instructed to fold it in half end to end, so the outward-facing side of the mask folds into itself, thus reducing potential contamination of the container into which it is placed. Moreover, it would seem prudent for HCWs to refrain from reusing the surgical mask for at least 72 hours from initial use, given that viable virus has been detected on surfaces up to 3 days later based on the available evidence [10].
Similar guidelines exist for the extended use of N95 respirators. However, if the N95 is worn during an aerosol generating medical procedure (AGMP), it needs to be decontaminated prior to use with another patient [11]. WHO and Public Health Agency of Canada guidance suggests that N95s be reserved for such AGMPs and are not required for routine patient contact, potentially making any extended use guidelines less applicable. Gowns and gloves can also be used multiple times between cohorted patients with confirmed COVID-19, though they should not be stored for use on another day or shift [9,14]. For all of the above articles of PPE, these extended use guidelines do not apply if the article becomes wet or visibly soiled with blood and/or bodily fluids, or sustains any damage which impairs function [10]. While no folding comparable to that described for surgical masks is possible with an N95 or face shield, care should be taken not to contact the outside surface of the mask while removing it and placing it in an open container, such as a brown paper bag [9]. HCWs must also wash their hands prior to donning/doffing the PPE and/or placing it in a container.
PPE decontamination and reuse
PPE decontamination and reuse is another important strategy that can be used to preserve supply. Certain articles of used PPE can be decontaminated, whereby any pathogens possibly contaminating the PPE are inactivated prior to reuse (Table 2). Because most PPE currently available in health care settings is designed for single use and there is limited evidence to date demonstrating optimal decontamination and reuse protocols, these protocols should be considered a second-line strategy, although new studies suggest this strategy may be quite acceptable. A rate-limiting step is that PPE can only be decontaminated a fixed number of times before its integrity degrades to the extent that it compromises fit and function (particularly for N95 respirators), and accordingly strict care and quality assurance measures must be taken to safely implement such strategies. Each of the decontamination methods also requires implementation of protocols within healthcare facilities to ensure staff are trained to safely decontaminate their own PPE, or label it and drop it off at a centralized decontamination site [21].

Table 1 Studies are labeled by reference number. Each of the four domains of the QUADAS-2 tool is listed below, and risk of bias is reported as low, high, or unclear for each of the four categories.

Reference   Patient selection   Index tests   Reference standard   Flow and timing
[10]        Low                 Low           Low                  Low
[15]        Low                 High          Low                  Low
[23]        Low                 Low           Low                  Low
[24]        Low                 Low           Low                  Low
[25]        Low                 High          Low                  Low
[26]        Low                 Low           Low                  Low
[27]        Low                 Low           Low                  Low
[28]        Low                 Low           Low                  Low
[31]        Low                 Low           Low                  Low
[32]        Low                 Low           Low                  Low
[33]        Low                 Low           Low                  Low
[34]        Low                 High          High                 Low
[35]        Low                 High          Low                  Low
[36]        Low                 Low           Low                  Low
[38]        High                Low           Low                  Low
[39]        Low                 Low           Low                  Low
[40]        Low                 Low           Low                  Low
[42]        Low                 Low           Low                  Low
[44]        Low                 Low           Low                  Low
[45]        Low                 Low           Low                  Low
[48]        Low                 Low           Low                  Low
[50]        Low                 Low           Low                  Low
[51]        Low                 Low           Low                  Low
[52]        Low                 High          High                 Low
[53]        Low                 High          High                 Low
[56]        Low                 Low           Low                  Low
Respirators
The best studied PPE decontamination strategies have been for N95 respirators, which can be effectively decontaminated using techniques involving heat [22-25], steam [26-29], UVGI [21,29-33], hydrogen peroxide [29,34-36], or peracetic acid [23,35]. All of these techniques attempt to inactivate as many potential pathogens on the mask as possible while minimizing damage to the mask itself (Table 3).

Table 2 Extended use and decontamination strategies for PPE recommended by health authorities.

CDC [9,10]
- Soap and water, bleach immersion, and alcohol-based cleaning solutions should NOT be used to decontaminate PPE [9,15].

WHO [14]
- Enable extended use of all PPE by co-locating all confirmed COVID-19 patients within areas of EDs and inpatient units.
- Use N95 masks, surgical masks, face shields, gowns, and gloves between patients confirmed to have COVID-19, provided they are not contaminated with bodily fluids or worn during an AGMP.
- HPV, UVGI, ethylene oxide, and moist heat (autoclave systems) can be used to decontaminate N95s.
- Face shields and goggles may be decontaminated with soap and water followed by detergent (sodium hypochlorite 0.1%) or alcohol wipes.

Health Canada [16]
- Non-medical N95s may be used by HCWs at the discretion of the healthcare facility they work in.
- N95s and surgical masks may be used beyond their shelf life.
- No specific decontamination recommendations.

NHS [17]
- Surgical masks may be used for source control, if feasible and if the mask can be tolerated by the patient.
- PPE used in AGMPs should be single use only.
- Use N95 masks, surgical masks, face shields, gowns, and gloves between patients confirmed to have COVID-19, provided they are not contaminated with bodily fluids or worn during an AGMP.
- N95 respirators to be worn by HCWs in high-risk areas of the hospital such as the ICU and ED resuscitation rooms.
- N95 respirators and surgical masks can be reused provided they have not been soiled and still fit; no specific decontamination strategies recommended.
- Face shields and goggles decontaminated with a detergent product, either combined or sequentially with a decontamination product, as agreed by the local infection prevention and control specialists.

EU [18]
- N95 or N99 respirators to be worn by all HCWs in contact with patients suspected or confirmed to have COVID-19.
- Differing numbers of sets of PPE allocated to healthcare teams based on the severity of the patient's illness [19].
- If N95/N99 respirators are not available, surgical masks can be used instead.
- Use N95 masks, surgical masks, face shields, gowns, and gloves between patients confirmed to have COVID-19, provided they are not contaminated with bodily fluids.
- HPV, UVGI, and microwave steam bags can be used to decontaminate N95 respirators [20].
- No recommendations for decontaminating face shields or goggles.
Heat and humidity
Pilot studies have experimented with decontaminating N95s using dry heat. The advantage of such a technique is that it could be done in standard blanket warming ovens in hospitals, at home in a conventional oven, or using handheld hair dryers. While there is no consensus on the specific temperatures and duration needed, studies have demonstrated dry heat of 70-100 °C for 30 minutes to provide a similar level of decontamination as UV light, without compromising mask fit or function [22,39]. With regards to the number of times such a process could be repeated, a pre-publication report from Stanford describes a protocol that successfully inactivated E. coli using 75 °C heat for 30 minutes for up to 20 total cycles [15]. Another study found that both dry and moist heat at 70 °C were effective at disinfecting SARS-CoV-2 and maintained fibre diameter, fit, filtration efficiency, and breathing resistance after 10 cycles [24]. However, a study using hair dryers found that filtration efficiency was reduced after two cycles [25]. Further testing of several heat-based decontamination techniques on SARS-CoV-2 is currently underway, including a protocol using dry heat at 75 °C for 30 minutes that is part of an international 13-site study in partnership with the WHO [41].
Microwave-generated steam bags have also been shown to be effective in inactivating influenza virus and a viral pathogen surrogate, MS2, without compromising mask fit or function, for between 3 and 6 decontamination cycles [26,42]. This approach could be completed at home, as microwave steam bags, which are typically used for decontaminating infant bottles and breast pumps, are commercially available, and most HCWs have access to a microwave. Additionally, a study from Massachusetts, USA, found that utilizing universally available materials such as generic glass containers and steam can effectively decontaminate N95 respirators and maintain integrity over 20 cycles [28]. Hospital systems in Alberta and Toronto have also been testing the use of autoclave machines, which use a combination of heat, pressure, and steam, to sterilize N95 respirators in large batches [37,43]. One study suggests up to 400 respirators can be sterilized over a 90-minute cycle and that they remain safe to use after up to 10 decontamination cycles [23]. Another study found N95s to be safe after autoclaving up to five times in most cases [38]. This approach is particularly promising because many hospitals already have autoclave machines available and thus could more easily implement this decontamination process.
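For a rough sense of scale, the throughput implied by these figures can be computed directly. A minimal back-of-the-envelope sketch in Python follows; the per-cycle figures come from the study cited above [23], but the 12-hour daily operating window is an assumption for illustration, not a value from the cited studies.

# Back-of-the-envelope autoclave throughput, using the figures above:
# up to 400 respirators per 90-minute cycle [23]. The 12-hour daily
# operating window is an assumed value for illustration only.
CYCLE_MINUTES = 90
MASKS_PER_CYCLE = 400
OPERATING_HOURS = 12  # assumption, not from the cited studies

cycles_per_day = (OPERATING_HOURS * 60) // CYCLE_MINUTES  # 8 cycles
masks_per_day = cycles_per_day * MASKS_PER_CYCLE          # 3200 respirators
print(cycles_per_day, masks_per_day)

Under these assumptions a single machine could process on the order of a few thousand respirators per day, which is consistent with the batch-scale framing above.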
UVGI
Heimbuch et al. and Lore et al. both evaluated UVGI at a wavelength of 254 nm for 6 and 2 different N95 respirator types, respectively [26,44]. Further work by Heimbuch in 2019 showed that 1 J/cm² of UVGI inactivated at least 99.9% of all H1N1, H5N1, H7N9 A/Anhui/1/2013, H7N9 A/Shanghai/1/2013, MERS-CoV, and SARS-CoV tested [27]. Likewise, Ozog et al. found that 1.5 J/cm² applied to both sides was effective at decontaminating SARS-CoV-2 [45]. A review by O'Hearn found minimal changes in filter efficiency following application of several different UVGI protocols [46]. Notably, work by Lindsley et al. seems to suggest a ceiling on the wavelength of UV light used, as they reported that wavelengths above 470 nm produced a statistically significant reduction in the strength of the N95 filter [40].
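As a point of reference for the doses quoted above, the delivered UVGI dose is simply irradiance integrated over exposure time, so the required exposure follows directly. In the worked example below, the 2 mW/cm² lamp irradiance is an illustrative assumption, not a value from the cited studies:

\[ D = E \, t \quad\Rightarrow\quad t = \frac{D}{E} = \frac{1\ \mathrm{J/cm^2}}{2 \times 10^{-3}\ \mathrm{W/cm^2}} = 500\ \mathrm{s} \approx 8.3\ \mathrm{min}, \]

where D is the dose (J/cm²), E the irradiance at the mask surface (W/cm²), and t the exposure time (s). This is why protocols must verify the irradiance actually reaching each mask surface: shadowed areas receive a lower E and therefore a lower dose for the same exposure time.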
UVGI appears to be an effective way of repeatedly disinfecting N95 respirators. However, rather than a procedure that can be completed by individual HCWs at home, UVGI decontamination systems would require dedicated funding, space, and technicians, as well as a system for HCWs to drop off and pick up their specific N95 mask. Schnell et al. describe the design of a UVGI system using previously existing components implemented at a hospital in Portland, Oregon [47]. Hamzavi et al. have proposed repurposing narrow-band UVB devices often found in dermatology offices for UVGI [33]. Additionally, a "double hit" process consisting of UVGI followed by heat treatment has been proposed as a conservative method of ensuring maximal decontamination [48].
Hydrogen peroxide
Hydrogen peroxide can be used as a vapor or a gas plasma to decontaminate N95 masks. Hydrogen peroxide vapor (HPV) has been shown to inactivate viruses and highly resistant bacterial spores in the mask and on the mask straps, and is safe to use between 10 and 50 times per mask depending on the decontamination system used [34,49,50]. Studies by Ibàñez-Cervantes et al. and Jatta et al. reported that hydrogen peroxide gas plasma (HPGP) disinfection of N95 respirators reduced SARS-CoV-2 to undetectable levels after one cycle [36,51]. However, a recent study by Lieu et al. tested extended use and HPV decontamination amongst healthcare providers during regularly scheduled work hours and found the median number of cycles before respirator failure to be 2, with variation across models, suggesting that failure may occur faster under real-life work conditions [52]. HPGP has been shown to be similarly effective at inactivating pathogens, though less data exist on the maximum number of cycles common N95 mask types can tolerate.
Hydrogen peroxide-based systems appear to be quite effective at decontaminating N95s and can be used over many repeated cycles. While these systems would also require significant investment, there is existing infrastructure that can be utilized. Of note, the FDA approved an HPGP system from the Antimicrobial Stewardship Programs to start decontaminating N95s in the US. Additionally, a California-based firm has developed hand-held HPGP devices that have been shown to effectively disinfect N95 respirators with less infrastructure required [53]. A hydrogen peroxide decontamination process, coupled with strict pick-up and drop-off policies, has been implemented in a large academic hospital in Washington, USA [54] and described for use at the University of New Mexico [55].
Peracetic acid dry fogging systems
While there is limited literature on the efficacy of peracetic acid dry fogging (PAF) systems, they have been shown to effectively inactivate a variety of pathogens, including SARS-CoV-2 specifically, without compromising N95 filter or fit after 10 decontamination cycles [23,35]. However, PAF systems require specialized equipment and handling of the highly corrosive and flammable liquid peracetic acid.

Table 3 Comparison of decontamination strategies for N95 masks. UVGI = ultraviolet germicidal irradiation, HPV = hydrogen peroxide vapor, HPGP = hydrogen peroxide gas plasma, PAF = peracetic acid dry fogging system. *Implementing these decontamination systems will require a system for collecting and labeling the PPE such that it can be returned to the HCWs (chain of custody), a mechanism for HCWs to pick up their PPE, and finally a schedule that ensures that an HCW's article of PPE is decontaminated prior to their next shift.
Dry heat
  Protocol: Hot air at 70 °C for 30 min; hair dryer (1400 W, 50 Hz) from a distance of 10-20 cm [25]. Can be done in a conventional oven or blanket warmer if it can reach the target temperature [22,23].
  Advantages: Can be done at home (oven) or using blanket warmers that are present in most hospitals. Shown to be effective at decontaminating SARS-CoV-2 [24].
  Disadvantages: Limited number of studies (4 total). Damages masks after fewer decontamination cycles.
  Decontamination cycles: 1-20

Wet heat/microwave-generated steam
  Protocol: 90 seconds on high power in a home microwave (for microwaves with 1100 W power), followed by <30 minutes for drying; steam bags designed for disinfecting infant bottles can be used [26,27]. Alternatively, 3 minutes in an 1100 W microwave over open glass containers filled with water and covered with mesh [28].
  Advantages: Can be done at home (microwave) using commercially available products (steam bags) or universally available generic glass containers [28]. Unlikely to cause changes in fit, odor, discomfort, or difficulty donning [29].
  Disadvantages: Damages masks after fewer decontamination cycles.
  Decontamination cycles: 3-20

Autoclave*
  Protocol: 121 °C for 15 min, with a total cycle time of 40 min (10 min conditioning/air removal, 15 min exposure, 15 min drying/exhaust), though the exact protocol depends on the machine used [23]. An alternative protocol tested 121 °C for 30 min with a ~90 minute total cycle time [37]; another used 110 °C for 30 min (gravity cycle).
  Advantages: Utilizes existing autoclave infrastructure present in many hospitals. Can decontaminate hundreds of masks concurrently.
  Disadvantages: Damages masks after fewer decontamination cycles.
  Decontamination cycles: 5-10 [38]

UVGI*
  Protocol: Two 254 nm UV light sources from two different angles for 5 minutes in a dedicated room [30]. Alternative protocols used one UV light source (550 nm) for 60 minutes [39], or 60-70 seconds at 254 nm [33]. PPE has to be positioned such that there is no shadowing that would prevent full UV light exposure.
  Advantages: Highly effective; shown to be effective in decontaminating SARS-CoV-2 specifically. Can decontaminate hundreds to thousands of articles of PPE concurrently.

HPV/HPGP*
  Advantages: Highly effective; shown to be effective in decontaminating SARS-CoV-2 specifically. Can decontaminate thousands of articles of PPE concurrently. Modular, mobile decontamination unit recently FDA approved.
  Disadvantages: Requires specialized equipment and dedicated staff. Limited number of facilities currently available.
  Decontamination cycles: 30-50

PAF*
  Protocol: Needs 80-90% humidity (requires approximately 30 ml of dilute liquid peracetic acid for a 400 ft³ container); then expose the N95 for 1 hr [23].
  Advantages: Highly effective; shown to be effective in decontaminating SARS-CoV-2 specifically.
  Disadvantages: Limited number of studies. Requires specialized equipment that needs to be frequently cleaned.
Non-recommended decontamination techniques
Notably, there are several means of decontamination that the CDC has recommended against [9]. Soap and water, bleach immersion, and alcohol-based cleaning solutions have been shown to compromise N95 filtration efficiency, making any reuse, regardless of the inactivation of any pathogens initially present, unsafe [29].
Face shields, visors, and goggles
Face shields, visors and goggles are all means of eye protection for HCWs. Generally, face shields are preferred as they can provide broader coverage and, if they cover the full face, can help reduce the risk of surgical masks or N95s becoming soiled or damaged. Provided the face shields are made of a clear plastic material, individual HCWs can clean their own face shield using a wipe and an EPA-registered disinfectant [2,9]. If available, face shields could be decontaminated using UV light. A study by Ziegenfuss et al. showed that UV light at 253.7 nm was able to achieve a 2.4-log reduction in the amount of S. aureus on a face shield [56].
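For context, a log reduction compares viable organism counts before (N₀) and after (N) treatment; converting the 2.4-log figure to a percentage is a standard microbiology calculation, not specific to [56]:

\[ \mathrm{LR} = \log_{10}\frac{N_0}{N}, \qquad \text{fraction inactivated} = 1 - 10^{-\mathrm{LR}}, \]

so LR = 2.4 corresponds to 1 - 10^{-2.4} ≈ 0.996, i.e. roughly 99.6% of the S. aureus was inactivated.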
PPE collection, storage and redistribution in decontamination protocols
Each of the above decontamination strategies will require clear protocols and training for appropriate PPE collection, decontamination, storage and redistribution. At-home decontamination strategies are the least logistically challenging for health systems, but still require HCWs to be trained to safely remove their PPE, store it in a sealed container, transport it home, decontaminate it using their oven or microwave, and then place it in a clean container for transport back to the hospital (see the extended use guidelines section for more details). While offering the advantages of possibly greater HCW acceptability and requiring fewer health system resources and less coordination, home-based strategies may be less acceptable to many health systems given the likely higher degree of variability in adherence to recommended protocols and the risk of either persistent contamination or damage to PPE, potentially leading to greater infection risk.
In contrast, facility-based decontamination strategies require greater coordination and resources, but can decontaminate hundreds to thousands of articles of PPE concurrently and remove the burden of protocol adherence from individual HCWs [30]. These protocols generally involve collecting, decontaminating and redistributing individual pieces of PPE to the same HCW that initially used them (a system ensuring chain of custody), encouraging greater end-user acceptability. This is often accomplished by HCWs labeling PPE prior to first use with their name and identification number, date of first use, and a tally mark for the number of times reused. HCWs then place used PPE in a labeled container and drop it off at the decontamination center. HCWs later retrieve their personal article of PPE from a centralized pick-up location. Hospitals would need to coordinate the schedules of the technicians for the decontamination equipment and the porters transporting PPE through the system, and maintain a reliable means of tracking which articles of PPE belong to which HCW.
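The following Python sketch illustrates the labeling-and-tally logic described above. The field names, the helper functions, and the per-method cycle limits are hypothetical values for illustration only; actual limits should come from Table 3 and local validation data, not from this sketch.

from dataclasses import dataclass
from datetime import date

# Illustrative per-method cycle limits (hypothetical; see Table 3 and
# local validation data for the limits actually in force).
MAX_CYCLES = {"dry_heat": 20, "steam": 20, "autoclave": 10, "uvgi": 40, "hpv": 50}

@dataclass
class PPEItem:
    owner_id: str    # HCW name/ID written on the PPE before first use
    first_use: date  # date of first use
    method: str      # decontamination method used by the facility
    reuse_tally: int = 0  # tally marks for times decontaminated/reused

    def can_reuse(self) -> bool:
        """True if another decontamination cycle stays within the limit."""
        return self.reuse_tally < MAX_CYCLES[self.method]

    def record_cycle(self) -> None:
        """Add one tally mark, refusing once the cycle limit is reached."""
        if not self.can_reuse():
            raise ValueError("cycle limit reached; discard this item")
        self.reuse_tally += 1

# Example: an N95 labeled by a hypothetical HCW 'RN-1042', sent for HPV.
mask = PPEItem(owner_id="RN-1042", first_use=date(2020, 9, 23), method="hpv")
mask.record_cycle()
print(mask.can_reuse())  # True until the HPV limit is reached

In practice such a record would be keyed to the label written on the PPE itself, so that the decontamination center can return each article to the HCW who dropped it off and retire items that have reached their cycle limit.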
Discussion
While initiatives to redirect all available PPE to healthcare facilities and rapidly increase PPE manufacturing are underway, maximizing the use of each article of PPE is paramount in the current setting in many jurisdictions around the world. Healthcare facilities should calculate their PPE burn rate to forecast potential shortages [57], and then implement PPE preservation strategies as needed. Extended use guidelines suggest that HCWs can safely use surgical masks, gowns, and gloves between multiple patients confirmed to have COVID-19. N95 respirators can be decontaminated effectively using dry heat and steam techniques at home, or at larger scale using autoclave machines. Eye protection, whether face shields or goggles, can be cleaned using disinfectant wipes in a manner similar to any hard smooth surface. Having individual HCWs disinfect their own PPE places an additional burden on the individual and requires that they are trained to do it properly; however, this approach requires fewer healthcare resources and less coordination than a centralized disinfecting process. For these reasons, it may be better suited to smaller healthcare facilities with fewer resources.
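As an illustration of the burn-rate arithmetic referred to above, a minimal sketch follows. The function is only an illustrative stand-in for the kind of calculator cited above [57], with made-up example numbers; it is not that tool.

def days_of_supply(on_hand: int, counts: list[int]) -> float:
    """Estimate days of PPE supply left from recent daily usage counts.

    on_hand: current inventory of one PPE type (e.g., N95s)
    counts:  units used on each of the last few days
    """
    burn_rate = sum(counts) / len(counts)  # average units used per day
    return on_hand / burn_rate

# Example (hypothetical numbers): 900 N95s in stock, ~120 used per day
# -> about 7.5 days of supply remaining.
print(days_of_supply(900, [110, 125, 130, 115]))

Running this estimate per PPE type makes it clear which articles will run out first, and therefore which preservation strategies to prioritize.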
Autoclave, UVGI, HPV, HPGP, and PAF decontamination all require specialized equipment and the creation of centralized PPE collection, storage and redistribution protocols; however, these processes can be repeated for more decontamination cycles and can decontaminate larger quantities of PPE at one time. Therefore, these strategies are likely better suited to larger healthcare facilities with the equipment, staff, and funding available to decontaminate all HCWs' PPE centrally. Depending on local decontamination requirements and available resources, a combination of the centralized and individualized decontamination protocols could be utilized. For all extended use and decontamination strategies, the utmost care should be given to ensuring that all the PPE still fits properly prior to reuse. Effectively mitigating PPE shortages will be critical to preserving health care system integrity by minimizing the number of HCWs and patients infected, particularly in low-resource settings.
This narrative review has several limitations. First, while multiple databases were searched, and documents from national and international health organizations were reviewed in detail, no systematic literature search was completed, meaning some relevant studies may have been missed. Second, there were limited studies available describing each of the individual extended use and decontamination strategies outlined, and some recommendations were based on extrapolations of work done on other viruses such as SARS-CoV-1. While these viruses could respond to the decontamination process similarly, more studies on SARS-CoV-2 specifically are needed. Third, while the types of PPE used are quite consistent worldwide, there are many different PPE models and manufacturers, and each product may not respond the same to a given extended use or decontamination strategy. That said, certain manufacturers have started to recommend specific decontamination techniques for their own products, and subsequent studies may help establish whether specific extended use or decontamination strategies are not suitable for a given PPE model/manufacturer [58]. Finally, due to the rapidly evolving literature on COVID-19, it is possible that the optimal PPE preservation strategies will change as further testing is completed and SARS-CoV-2 transmission is better understood. | 2021-05-09T13:16:24.956Z | 2021-05-08T00:00:00.000 | {
"year": 2021,
"sha1": "bd9b3df83db817b50e9116a235ef9dc9b2dbf329",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.infpip.2021.100146",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ca373766c8a8475c90ef8cbff51d2162326c2fb3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246375694 | pes2o/s2orc | v3-fos-license | Issues of awards given as a part of social arbitration in a collective dispute. De lege lata and de lege ferenda remarks
Social arbitration, as the third method of resolving collective disputes, can be used to settle a dispute in an amicable manner. Thanks to this method, parties to a collective dispute can end their conflict through an arbitration award without the need to go on strike. The author analyses the legal nature of arbitration awards and presents the consequences of the related labour law legislation. The conclusion is as follows: the current legal regulations are in need of change, especially when it comes to the execution, amendment and supplementation of an award issued as part of social arbitration with the involvement of trade unions, employers or their organisations.
Introduction
Collective labour disputes constitute a material element of the economic system of a state. It remains a valid question whether amicable methods of dispute resolution can be used effectively to avoid using strikes as a last resort. Social arbitration is one such amicable form. To evaluate the importance of this institution and its possible application objectively, one cannot ignore the nature of an arbitration award. However, it first has to be explained what collective disputes are and whom they apply to.
While analysing collective disputes, one should note that a collective dispute is a legal concept separate from the concept of an individual employee dispute. The principal difference between an industrial dispute and …

The institution of social arbitration must inspire confidence in both parties to the dispute. Both the employee and the employer side should be certain that the award or agreement will be enforced without any problems whatsoever. Only a clear and comprehensive regulation of arbitration will make it possible to resort to arbitration proceedings to end a collective dispute rather than go on strike. For this reason, the analysis covers the legal nature of an arbitration award in the context of practical problems with applying social arbitration in the Polish legal reality.
The aim of this study is to determine whether the legal regulations concerning the methods for ending social arbitration are comprehensive and clear or whether they need to be changed, and if so, to what extent. The analysis uses the dogmatic-legal and legal-comparative methods. The countries under analysis share one common feature: they have all implemented legislation to resolve collective disputes. Research into the legal mechanisms in the states concerned leads to the conclusion that these mechanisms are close to one another in terms of their objective: in each state, the legislator strives to ensure that arbitration proceedings are effective in discouraging non-amicable forms of collective dispute resolution. The legal regulations of the countries in question are influenced by labour law standards adopted by international and European organisations.
Ways to end social arbitration
2.1. Collective agreements as the second method of collective dispute resolution in arbitration proceedings in addition to arbitration awards

An award issued by a social arbitration college is not the only way to resolve a dispute at this stage. Just as with bargaining or mediation, parties to arbitration proceedings can enter into agreements resolving the dispute. According to § 9 of the Regulation of the Council of Ministers of 16 August 1991 on the procedure before social arbitration colleges 5, after the opening of a session the college encourages the parties to reach an agreement. Such an agreement, as a source of the appropriate content of the labour law determining rights and obligations 6, has the legal nature characteristic of agreements based on RCD 7 resulting from bargaining and mediation 8. It should determine: the parties to the dispute, the precise claims of the trade unions, and normative collective provisions if they apply to the general situation of the employees. It can also contain individual provisions if they influence the contents of the employment relationship 9. According to the Supreme Court, collective agreements are considered contractual provisions applying exclusively to entities covered by their contents. However, they are not treated as generally applicable legal regulations 10. According to the representative theory, parties to a collective agreement execute it not only on their own behalf but also on behalf of members they represent or individuals other than their members. This means that it is possible to make direct demands regarding the fulfilment of duties by individual employees and employers towards each other 11. Agreements are based on the principle of the freedom of agreements, by way of the submission of declarations of will in the course of social arbitration 12. The doctrine notices elements of a civil law agreement in such agreements; these, however, do not rule out the possibility that the above-mentioned acts compiled at the arbitration stage of a collective dispute can be seen as sources of the labour law based on the act 13. Agreements signed by the parties to the dispute in the course of social arbitration entail an automatic transformation of the collective interests of employees, the satisfaction of which the trade union demanded, into individual subjective rights, just like agreements executed at the bargaining and mediation stage 14.

6 See Krzysztof Wojciech Baran, "Porozumienia zawierane w sporach zbiorowych jako źródła prawa pracy," Monitor Prawa Pracy, no. 9 (2008): 455; Janusz Żołyński, "Postępowanie arbitrażowe jako metoda rozwiązywania sporu zbiorowego," Monitor Prawa Pracy, no. 10 (2011): 517.
7 Grzegorz Goździewicz, "Charakter porozumień zbiorowych w polskim prawie pracy," Work and Social Security, no. 3 (1998): 23; Łukasz Pisarczyk, "Pokojowe (ireniczne) metody rozwiązywania sporów zbiorowych," in System prawa pracy. Zbiorowe prawo pracy, vol. V, ed. Krzysztof Wojciech Baran (Warszawa: Wolters Kluwer S.A., 2014), 643-644; see Baran, "Porozumienia," 453-54. The labour law doctrine includes doubts regarding the distinction of agreements as specific sources of the labour law, see Ludwik …
Arbitration awards -decisions of social arbitration colleges
If an agreement resolving a collective dispute is not reached in the arbitration proceedings, an arbitration award will be issued. In practice, one can distinguish substantive decisions considering the demand made by a trade union and those dismissing collective claims made with regard to payment, working and social benefit conditions 15. When a trade union makes a claim regarding union rights and freedoms, a decision can be made to dismiss that claim, i.e. to consider the trade union's position ungrounded, or to grant the application in whole or in part 16.
A social arbitration college also issues typically formal awards. This type of award includes a decision to refuse to hear the arbitration, de facto rejecting the application due to the inadmissibility of social arbitration as understood in the RCD regulations, even though no legal basis for such a conclusion of arbitration proceedings can be found in PSA, or even less in RCD 17. Additionally, a decision to discontinue proceedings exists in legal transactions 18.

In an issued award, the social arbitration college may also refer the case to a competent social arbitration college for a ruling, e.g. the college at the Supreme Court 19. Decisions of the social arbitration college at the Supreme Court adopted the rule according to which "employer disputes can be transformed during the proceedings before a social arbitration college into a single multi-employer dispute if parties to these disputes are willing and the subject matter of the dispute applies to workers employed in at least two workplaces" 20. In the analysed case, which formed the basis for the issue of the above-mentioned award, the college primarily examined its competence to hear the dispute. This was necessary due to the fact that the dispute was conducted as eight separate disputes in the bargaining and mediation phase. Findings made by the college led to the conclusion that the trade unions representing all workers covered by the disputes in the bargaining and mediation phase had jointly applied for the submission of the dispute for resolution by the social arbitration college at the Supreme Court as a multi-employer dispute 21.

13 See Pisarczyk, "Pokojowe," 650-51; see also the topic of a collective agreement in Florek, Ustawa, 224-27.
14 Andrzej Marian Świątkowski, "Ustawa o rozwiązywaniu sporów zbiorowych," in Zbiorowe prawo pracy, eds. Jerzy Wratny, Krzysztof Walczak (Warszawa: C.H. Beck, 2009), 334.
15 E.g. according to information obtained as part of access to public information, an award rejecting the related request of the applicant was issued by the social arbitration college at the Regional Court in Piotrków Trybunalski in 2012, Decision of the College for Social Arbitration at the Regional Court in Piotrków Trybunalski, Judgment of 2012, Ref. No. KAS-z 1/12, unreported. Data referring to specific awards under social arbitration that are referred to in the article were collected on the basis of enquiries emailed to all Regional Courts in Poland.
17 It has to be stated that both RCD and PSA do not introduce the possibility to reject a request, only to return it due to formal defects not remedied on time, see: Walery Masewicz, Zatarg zbiorowy pracy (Poznań: Polski Dom Wydawniczy "Ławica", 1994), 97-98. Some representatives of the labour law science indicate that the college issues substantive awards considering the demands of the trade union in part or in whole, or dismissing them in part or in whole, see: Bogusław Cudowski, Spory zbiorowe w polskim prawie pracy (Białystok: Temida, 1998), 114. However, if arbitration is not admissible due to the fact that no bargaining or mediation was carried out, or when proceedings before the college demonstrate that the dispute is not a collective dispute, the request will have to be rejected for purposive reasons. The postulate that the trade union's request can be rejected by the issue of an award by the social arbitration college is justified. Checking whether a dispute is a collective dispute lies within the competences of the college, see Rycak, "Praktyka," 142. The labour law doctrine also contains the view according to which a request can be destroyed due to the inadmissibility of arbitration proceedings by its return by the court president, see: Walery Masewicz, Ustawa o związkach zawodowych. Ustawa o rozwiązywaniu sporów zbiorowych (Warszawa: Wydawnictwo Prawnicze PWN, 1998), 180.
18 E.g. in 2005-2014, the social arbitration college at the Regional Court in Łódź dismissed arbitration proceedings twice: in one case, it was due to the trade union's loss of its mandate to represent employees; in the other case, it was due to the withdrawal of the request for social arbitration.
Legal nature of arbitration awards
In light of art. 16 clause 6 RCD, the arbitration award is binding on the parties to the dispute unless they agree otherwise. If neither the trade union in its application for social arbitration nor the employer responding to that application submits, pursuant to art. 16 clause 6 RCD, an appropriate statement providing for the issue of an award non-binding on the parties, the decision of the social arbitration college will, in principle, bind the parties 22. The employer can express their position on the subject in the response to the application of the trade union or in another letter before the date of the commission's session, but not later than upon the opening of that session 23. Contrary to the situation in which both parties express their conclusive consent to the binding nature of the college's decision, a trade union can decide to initiate a protest action in the form of a strike if the award is non-binding. Purposive considerations are in favour of this postulate 24. The parties can autonomously choose the dispute resolution variant after the mediation ends and apply non-amicable resolution methods. An award in arbitration proceedings whose binding power results from the will of the parties ends the proceedings related to the occurrence of a collective dispute with the employer 25; as a consequence, the collective dispute is resolved 26. The statement of the binding power 27 shall be contained in the award itself 28; however, this is not always the case 29. According to § 11 clause 3 PSA, an arbitration award should also contain the name and composition of the college, the award issue date, the definition of the parties, an indication of the subject matter of the dispute, the resolution 30 and its justification, a statement whether the award binds the parties, and the signatures of the members of the college 31.
If the award does not contain any of the above-mentioned elements, it seems that it is not possible to supplement them pursuant to art. 351 § 3 of the Act of 17 November 1964 - Code of Civil Procedure 32, for two reasons. Firstly, the provisions of the Code of Civil Procedure do not apply to arbitration proceedings before social arbitration colleges - obviously with some exceptions. The civil procedure was referred to in only one place in the provision regulating the arbitration procedure; however, it only had to do with evidentiary proceedings. In light of § 8 clause 2 PSA, a college can take evidence in line with the provisions of the Code of Civil Procedure on evidence. Secondly, it is the social arbitration college rather than a public court that issues an award; this is why, in light of the linguistic interpretation of art. 351 § 3 CPC in connection with art. 351 § 1 CPC, it is not possible to supplement an award issued pursuant to § 11 PSA.

It is worth mentioning that, in the past, the legal construct in force in Poland under art. 9 of the Regulation of the President of the Republic of Poland of 27 October 1933 on extraordinary disputes committees for the resolution of collective disputes between employers and employees in industry and commerce 33 provided that, if the committee's award attained economically prevailing importance in the work branch covered by the award, the Council of Ministers could, if requested by the minister of social care, issue a regulation giving binding legal effect to the award in the entire area for which the award was issued or in the part of the area where it had attained prevailing importance. Such an award, as understood in the said regulation, would apply directly to all employees and employers. The said legal regulations overcame the rule on committee awards contained in the pre-WWII Law of Obligations, typical for collective labour agreements (c.l.a.), according to which they are binding only for those parties who have executed them 34.

27 … Decision of the College for Social Arbitration at the District Court in Olsztyn, Judgment of 2006, Ref. No. Kas-z.1/06, unreported. The above data come from the information obtained by email as an answer to enquiries addressed to Regional Courts. See more Maciej Jarota, "Arbitraż społeczny - fakultatywna czy obligatoryjna metoda rozwiązywania sporów zbiorowych? Przyczynek do dyskusji o wykorzystaniu postępowania arbitrażowego w zbiorowych stosunkach pracy," ADR Arbitraż i Mediacja, no. 2 (2018): 48-49.
28 Goździewicz, "Mediacja," 23.
29 Rycak, "Praktyka," 145.
30 In practice, the award of the social arbitration college sometimes lacks the dispute resolution, which cannot be considered positive at this stage of the dispute. E.g. in the operative part of the award of 7 May 2015 regarding the proceedings requested by the trade union against the employer, the social arbitration college at the Regional Court in Warsaw limited itself to the indication that the parties had failed to reach an agreement. Decision of the College for Social Arbitration at the District Court in Warszawa, Judgment of 2015, Ref. No. XXI Kas-z 2/14, unreported. The operative part of the decision did not refer to the dispute resolution, while item 5 of the award states that the award is not binding for the parties. See more Maciej Jarota, "Arbitraż społeczny - fakultatywna czy obligatoryjna metoda rozwiązywania sporów zbiorowych? Przyczynek do dyskusji o wykorzystaniu postępowania arbitrażowego w zbiorowych stosunkach pracy," ADR Arbitraż i Mediacja, no. 2 (2018): 48-49.
To summarize this part of the discussion, concern may be expressed about the impossibility of supplementing an existing award, if necessary. Similarly, Polish legislation does not provide for the rectification of an award should there be an obvious error in the dispute resolution. In the Polish legal reality, there are no legal regulations that would precisely define the manner in which the panel should act if it is necessary to amend an award issued in the proceedings.
Execution of arbitration awards
While analysing the issues of arbitration awards, it is worthwhile to consider whether parties to the employment relationship are entitled to claim the execution of an award issued by a social arbitration college on the same terms as those applying to the formulation of demands referring to the execution of provisions of a collective agreement executed in the course of social arbitration. The view that it is possible for employees and the employer to make direct civil law claims based on an arbitration award, just as is the case with a collective agreement 35, is debatable 36.
As soon as the demands made by the trade union are transformed into individual rights pursuant to a collective agreement executed at the arbitration stage, individual employees acquire the right to claim the satisfaction of the individual rights guaranteed to them. In turn, the issue of an arbitration award, not being a source of labour law, does not entail the legal transformation of employee interests to be satisfied by the employer on the basis of the decision of the social arbitration college into rights of individuals in an employment relationship. The conclusion of this analysis is that the issue of an award does not result in an automatic transformation of employees' collective interests into their rights 37. Reference publications present the prevailing view that, to enforce the employer's observance of an arbitration award referring to employee interests, it is only possible for the trade union to exert pressure by organizing a strike or another non-amicable method of resolution of collective disputes, even with no renewed procedure for the initiation of a collective dispute 38.
The Polish model of amicable resolution of collective disputes does not provide for a sanction for the failure to comply with an award issued in arbitration proceedings 39, even though the failure to comply can be considered a violation of art. 26 clause 1 item 2 RCD 40. The enforcement of arbitration awards is not subject to enforcement proceedings 41. This fact is demonstrated by the linguistic interpretation of art. 777 CPC, in particular the lack of an indication of awards issued in the social arbitration mode in its contents, and by the absence of a legal standard in RCD that would establish the admissibility of enforcement of such awards by way of enforcement proceedings 42.

35 Żołyński, Ustawa, 95. In the 1960s, doubts existed regarding the enforceability of collective labour agreements as far as the enforcement of the resulting obligations is concerned. Free market economies adopted the inadmissibility of the formulation of individual employee claims against a participant in the collective labour agreement. A particular significance of rights vested in the employee organisation representing employees, rather than individual rights, was stressed, see …
Such deficiencies may raise doubts as to whether a trade union is actually able to effectively enforce the pay and work conditions, social benefits, or trade union rights and freedoms established by an arbitration award. Hence, the trade union organisation, unlike the employer, will essentially opt for the strike method rather than arbitration proceedings. If mediation fails, the trade union party will count on a strike as a viable method to achieve its demands from the collective dispute stage. The mere fact that an arbitration award becomes non-binding if one of the parties submits a relevant declaration hardly encourages the use of the social arbitration method. Since parties to a collective dispute are not bound by the arbitration award, decisions of the social arbitration committee are treated as non-mandatory for the trade union and the employer. On the other hand, the inability to effectively enforce a binding arbitration award means that arbitration proceedings actually lose their sense as a constructive method of resolving a controversy.
Arbitration awards in selected European states vs. Polish legal realities
In light of the unique nature of an arbitration award and the limited possibility of its enforcement in Polish labour law, it is worthwhile to analyse the said institution from the perspective of selected European states. The legal regulations of individual European states define the legal nature of arbitration awards in an inconsistent manner. In some European states, arbitrators issue awards. In the Russian Federation, an arbitration award is binding for the parties. They are obliged to execute it on pain of a fine of 2,000-4,000 roubles. If the employer fails to comply with the award, the trade union can initiate a protest action in the form of a strike 43. In France, the arbitrator's decision can be appealed against to the Supreme Court of Arbitration, consisting of an equal number of judges from the Council of State and judges from the Court of Cassation 44.

41 Żołyński, Ustawa, 93.
42 Baran, Zbiorowe, 442.
In light of art. 20 clause 4 of the Latvian Act of 26 September 2002 on employment relationships, compliance with an arbitration award is voluntary in Latvia. If the parties conclude a written agreement to determine the binding power of the arbitration award, that arbitration award shall have the legal effects typical for a collective agreement 45. In Great Britain, an award issued in the course of an optional arbitration is binding if the parties decide during the pending procedure that they will comply with the award irrespective of its contents 46.
In Slovakia, awards issued by an arbitrator regarding the execution of collective agreements can be appealed against to a District Court, which repeals the arbitrator's award if it is in conflict with the legal regulations or with the collective agreement. In the same country, an arbitrator's decision regarding the conclusion of a collective labour agreement is final, with no appeal possible, unlike decisions referring to disputes regarding the execution of duties under collective agreements. If a court annuls the arbitrator's award, the dispute shall be referred to the same arbitrator for reassessment. The lack of consent to the participation of the same person as the arbitrator results in the nomination of an arbitrator, on the request of either of the parties, by the Minister of Labour, Social Affairs and Family of the Slovak Republic 47.
In Spain, art. 21 clause 3 V ASEC provides that a binding arbitration award is enforceable immediately and is deposited with the SIMA office. After that, the award is forwarded to an appropriate agency for publication if required under the law. The award has the same legal effects as a collective labour agreement 48.
In Germany, parties to a dispute can appeal against the decision of an arbitration committee to a labour court within 2 weeks of the award announcement. The appeal can be upheld in the event of a violation of the law. The German labour law doctrine indicates that parties rarely decide to undermine a resolution made at the arbitration stage of collective disputes in this manner 49. In Denmark, an award issued at the arbitration stage is final, even though, if material rules of the procedure influencing the resolution of the case are violated, it is possible to have the award declared invalid before the labour court 50.
Polish legal regulations do not provide for a two-tiered procedure in arbitration proceedings 51. As already mentioned, parties in Slovakia and Germany have the possibility to appeal against an arbitration award to a labour court. The analysed legal solutions applied in the above-mentioned countries make the appellate review of an arbitration award possible, which is particularly desirable for a discretionary ruling by an arbitration agency in a specific case, made in a manner not limited by statutory criteria. The determination of an entity competent to consider the means of challenging arbitration awards is also extremely important. The labour court seems to be competent to assess an arbitration award because judges who are labour law practitioners are able to guarantee reliability and independence while analysing arbitration proceedings in a collective dispute.
It should be remembered that, according to the rule set out in art. 262 § 2 item 1 of the Act of 26 June 1974 - Labour Code 52, public courts cannot interfere with disputes regarding the establishment of new payment and working terms, or with the application of labour law standards to the nomination of college members at the social arbitration stage 53. The lack of a two-tier system in social arbitration in Poland provokes doubts 54. In particular, the literature rightly indicates that the delegation of three, by definition, independent members of the college from the employer side and three members from the trade union to form the college is only apparent; in fact, a dispute is resolved by a professional judge. Individuals indicated by the trade union or by the employer do not vote against their principals 55.
The statement of reasons for the draft collective labour code of 2008, prepared by the Labour Law Codification Commission 56, provides that social arbitration as a dispute resolution method, because of the voluntary nature of awards issued by an arbitration commission, does not fulfil the role expected by the legislature. In light of this fact, the authors of the project observed that it was necessary to reinstate the importance of arbitration by providing that the arbitrator's decision would be binding for the parties and end the collective dispute. Additionally, the authors of the draft believe that the above-mentioned problems justify the introduction of a rule according to which an arbitration award would be subjected to judicial control as regards its legal compliance and the interests of the parties, which would strengthen the rule of law and social peace in collective labour relations. This position of the Labour Law Codification Commission was expressed in the suggestion contained in art. 156 § 2 CLC, which assumed that each of the parties to a collective dispute would be able to appeal to a court against an arbitration award within 7 days of its receipt if that award blatantly violates the party's interest or the law 57. The appeal would be made to a district court in the case of a one-employer dispute and to a regional court in the case of a multi-employer dispute - both courts having jurisdiction over the dispute initiation location. Even though an appeal against an arbitration award is not provided for in the subsequent draft of the collective labour code of 14 March 2018 by the new Labour Law Codification Commission 58, there is no doubt that the concept worked out in 2006 is worth considering. One has to welcome the rule, corresponding, among other things, with the German or French legislation, that is expressed in art. 156 § 2 CLC and guarantees the parties to a dispute the right to appeal against the arbitration award upon satisfying the conditions provided for in the laws. Considering that the appellate review of arbitration awards is necessary in complicated collective labour relationships, the suggestion presented in the CLC is very desirable. However, it would be worthwhile to consider a prolongation of the 7-day deadline for the appeal to 14 days. Such a solution would make professional preparation of the appeal possible, especially in cases with complicated factual and legal circumstances.

53 Żołyński, Ustawa, 93.
54 In the previous legal regime applicable in the 1980s, the Public Prosecutor General argued that purposive considerations (the need to amend awards) warrant the application of remedies provided for in the civil procedure regulations, but the Supreme Court did not share this position in its decision of 10 December 1986, III PZP 72/86, see Polish Supreme Court, Resolution of 23 May 1986, Ref. No. III PZP 72/86, unreported. The labour law doctrine also indicated that, even though the nature of arbitration proceedings is different than the nature of litigation, this circumstance does not entail the right to conclude that it is not possible to appeal against an arbitration award on the basis of autonomous findings of the parties, see Andrzej Marian Świątkowski, "Spory zbiorowe (I)," Praca i Zabezpieczenie Społeczne, no. 8 (1987): 13-17.
55 See Cudowski, Spory, 112; Żołyński, Ustawa, 93; Żołyński, Ustawa, 438.
56 Http://www.mpips.gov.pl/gfx/mpips/userfiles/File/Departament%20Prawa%20Pracy/kodeksy%20pracy/ZKP_04.08..pdf, accessed May 9, 2016, hereinafter: CLC. The CLC draft was submitted to the President of the Council of Ministers on 5.12.2006, even though its contents refer to "April 2007", while the draft description found at the website provides the information that the draft originated in April 2008. The Labour Law Codification Commission had been preparing the draft for a few years on the basis of the Ordinance of the Council of Ministers of 20th August 2002 on the establishment of a labour law codification commission, Journal of Laws 2002, No. 139, item 1167, as amended. The Commission initially worked under the leadership of Tadeusz Zieliński. As of 5.12.2003, the Commission consisted of: Michał Seweryński, a professor at the Łódź University (the chairman); Ludwik Florek, a professor at the Warsaw University (deputy chairman); Grzegorz Goździewicz, a professor at the M. Kopernik University in Toruń; Zbigniew Hajn, a professor at the Łódź University, a judge at the Supreme Court; Andrzej Kijowski, a professor at the A. Mickiewicz University in Poznań, a judge of the Supreme Court; Walerian Sanetra, a professor at the Białystok University, President of the Supreme Court; Barbara Wagner, a professor at the Jagiellonian University, judge of the Supreme Court; Jan Wojtyła, a professor at the K. Adamiecki University of Economics in Katowice; Jerzy Wratny, a professor at the Rzeszów University; dr Eugenia Gienieczko, Director of the Labour Law Department in the then Ministry of Labour and Social Policy. In 2005, Teresa Liszcz, a professor of the Maria Curie-Skłodowska University in Lublin, later a judge in the Constitutional Tribunal, joined the Commission.
57 Art. 158 § 2 of the draft Collective Labour Code from the mid-1990s suggested that each of the parties and the labour inspector could be granted the right to appeal against an arbitrator's award violating the laws, see Cudowski, Spory, 115.
58 The text of the draft Collective Labour Code, see https://www.gov.pl/web/rodzina/bip-teksty-projektu-kodeksu-pracy-i-projektu-kodeksu-zbiorowego-prawa-pracy-opracowane-przez-komisje-kodyfikacyjna-prawa-pracy, accessed January 23, 2021. … Prof. UŁ dr hab. Mirosław Włodarczyk, Dr Jakub Szmit, legal counsel Marta Matyjek, attorney at law dr Liwiusz Laska.
One has to note that satisfying the appeal condition, i.e. blatant violation of an interest of the party, can turn out to be ambiguous. It can be difficult to define the blatant nature of the arbitrator's undesirable resolution if the arbitrator has the freedom of decision with no statutory model imposed in advance. The grounds for an appeal will be evaluated by an independent court. It seems that, even though the court analysing the legal compliance of an award will be limited by the proper application of the principles of interpretation of the law, it will have the margin of decision when it comes to the analysis of the party's interest. Therefore, the court hearing the appeal would have the particular responsibility for the correct determination of facts and an appropriate dispute resolution.
While analysing the issue of an appeal from the arbitrator's award, the participation of a labour inspector at this stage of the dispute is worth considering. In light of the 2006 proposal, the labour inspector would not have the right to appeal 59, even though, in certain situations, such as the announcement of a strike, suspension of operations of a plant or its part by the employer for more than 3 months, or creation of a major threat to public interest, the labour inspector would be able to initiate a collective dispute pursuant to art. 154 § 2 CLC. According to art. 156 § 1 CLC, the arbitrator would resolve a collective dispute by issuing an award to be delivered to the parties, to an appropriate labour inspector and to the National Labour Dialogue Consultant. Therefore, the labour inspector would receive the arbitrator's award and, in certain cases, would in fact be able to initiate arbitration proceedings according to the CLC, but would not be able to appeal against the decision issued by way of social arbitration. Such a situation can cause doubts whether the suggested participation of the labour inspector in social arbitration would be sufficient. It seems that the lack of interference from the labour inspector at the stage of the appeal against the arbitrator's decision is justified in the context of the desirable autonomy of the will of the parties to the dispute regarding the application of arbitration proceedings. The freedom of trade unions and employers should also cover the making of decisions regarding the undermining of arbitration awards.
58 (continued) Prof. UŁ dr hab. Mirosław Włodarczyk, Dr Jakub Szmit, legal counsel Marta Matyjek, attorney at law dr Liwiusz Laska.
Conclusions
The above considerations allow us to formulate three principal conclusions. Firstly, the Polish legislature should offer an in-depth analysis of institutions lacking in PSA. A reference should be introduced to labour law regulations to state that in cases not regulated in the appropriate regulation on the amendment and supplementation of an arbitration award and the rejection of an arbitration application, CPC provisions shall be applied accordingly. As a consequence, the amendment and supplementation of an arbitration award would be permitted in line with the cases specified in art. 350 § 1 CPC and art. 351 § 1 CPC. Such an action would also remove doubts regarding the interpretation of the possibility that the social arbitration college may reject the employee application for procedural reasons. Secondly, the failure of parties to the dispute to respect binding arbitration awards is also a problematic issue. The legal consequences of the parties' failure to comply with the award issued by an arbitrator within specific time limits have to be set out precisely in the Polish act. The example of Russia is interesting here, where a fine is imposed for non-compliance with the award. Indeed, it would be worth adopting such a sanction in the Polish legal reality: a pecuniary penalty could be imposed by the National Labour Inspectorate (PIP). Additionally, the arbitration award should be included in the catalogue of enforcement titles referred to in art. 777 § 1 of the Code of Civil Procedure so that it could be efficiently enforced in practice.
It also seems appropriate to adopt the principle that an arbitration award is binding on the parties to a collective dispute and is made immediately enforceable, as is the case, for example, in Spain. Such a legal construct could increase confidence in this method of collective dispute resolution. Legal certainty that the award will be enforced is essential from the perspective of the effectiveness of social arbitration itself.
Thirdly, it is worth considering whether it might be necessary to introduce the possibility of an appeal against an arbitration award in such socially important cases as those relating to the resolution of collective disputes. It seems that this solution would contribute to an increased importance of social arbitration. Judicial control over the award issued in arbitration proceedings would promote trust among the parties to the dispute, which could mean that this method of resolution of collective disputes would be used more frequently. Verification of the correctness of awards would also make it possible to eliminate errors that can appear in practice during the case assessment. It would not be unusual if such a possibility were introduced into Polish legislation: in France, Slovakia or Germany, an option to appeal against an arbitration award is guaranteed by law. It seems that, just like in Germany, the Labour Court should be competent to hear appeals. Obviously, it remains an open point whether the case should be examined by the Court of Appeal or the Supreme Court. Given that social arbitration committees competent for in-company disputes operate at District Courts, it would not necessarily be desirable to make the District Court an appellate body. However, if awards are reviewed by a court of professional judges, there is reason to believe that the process will be carried out with due diligence.
In view of the foregoing, how should we assess the above-described regulations on the settlement of a collective dispute through social arbitration? First of all, do they provide the balance between social partners? It ought to be emphasized that seemingly these regulations affect the legal situation of both employers and trade unions to an equal extent. Nevertheless, the provisions are unclear and the enforcement of an arbitration award uncertain, which makes the trade unions reluctant when it comes to this form of dispute resolution. However, this is a disadvantage for the employer itself as well. Failing successful mediation, the employer must be aware that a strike is forthcoming. If we interpreted the principle of equal treatment of parties to a collective dispute in its broad sense, we could not unambiguously claim that the rule is complied with, given that only a trade union party is vested with the right to institute arbitration proceedings. Still, the rule is not absolute, and we should share the view that it is permissible that in certain situations the legislator may intentionally differentiate the rights of the subjects of collective labour relations in order to achieve a legal balance in practice 60. It seems legitimate whenever the legislator differentiates between the legal situation of the trade unions and the employers in this respect. It cannot be presumed that the employer could also submit a motion to initiate social arbitration in a binding manner. This would be an excessive interference in the collective dispute resolution procedure, giving the employer a real opportunity to block the trade union's right to organise a strike for some indefinite period of time. Given the current problems in the application of arbitration proceedings and an illusory, non-binding nature of arbitration awards, this would be a highly dysfunctional step. Pre-arbitration measures, i.e. negotiations and mediations, give the parties to a collective dispute the opportunity to reach an agreement before a strike is initiated in the wake of a failed mediation. Postponing the possibility of resorting to a non-amicable action due to pending arbitration initiated by the employer could adversely affect the success of previous dispute resolution methods. The employer would then be basically deprived of any pressure in the event of a dispute, which could mean its lower involvement in the amicable settlement of the dispute during negotiations or mediations.
It is particularly worth noting that the common objective of bargaining, mediation and arbitration is to prevent non-amicable actions, especially strikes. However, arbitration is the last amicable stage of collective disputes in the light of RCD regulations. It does not necessarily mean that it is the most important method of dispute resolution, even though, in the Polish legal reality, arbitration is the final tool making reconciliation of the parties possible. In turn, a strike can negatively impact various aspects of daily life; in particular, it can worsen the employer's economic situation. Therefore, so as not to permit an automatic cessation of work by employees after unsuccessful mediation, one has to guarantee complete and clear mechanisms of arbitration proceedings. From this perspective, changes in the legal nature of an arbitration award and in the possibilities to appeal against it, enforce, supplement or amend it are unavoidable. A comprehensive analysis of the arbitration award institution by the legislature is necessary to strengthen social arbitration as an amicable dispute resolution method.
Each award is submitted to
43 Elena Gerasimowa, "The Resolution of Collective Labour Disputes," in Labour Law in Russia: Recent Developments and New Challenges, eds. Vladimir Lebedev and Elena Radevich (Newcastle: Cambridge Scholars Publishing, 2014), 274. | 2022-01-29T16:10:22.231Z | 2022-01-27T00:00:00.000 | {
"year": 2022,
"sha1": "eebd29ab1606ff46814a239f1b181f3987075ff1",
"oa_license": "CCBY",
"oa_url": "https://czasopisma.kul.pl/index.php/recl/article/download/12240/12252",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0d07d7d42caaf5fad3a02727e4c129b82d181fab",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
} |
54079457 | pes2o/s2orc | v3-fos-license | A technique to predict the aerodynamic effects of battle damage on an aircraft’s wing
Abstract A technique is developed that can be used to predict the effects of battle damage on the aerodynamic performance of an aircraft’s wing. The technique is based on results obtained from wind tunnel tests on a NASA LS(1)-0417MOD aerofoil with simulated gunfire damage. The wind tunnel model incorporated an internal cavity to represent typical aircraft construction and this was located between 24% and 75% of chord. The damage was simulated by circular holes with diameters between 20% and 40% of chord. To represent different attack directions, the inclination of the hole axis relative to the aerofoil chord was varied between ±60° pitch and 45° of roll. The aerofoil spanned the wind tunnel to create approximate two-dimensional conditions and balance measurements were carried out at a Reynolds number of 500,000 for incidences, increased in 2° increments, from –4° to 16°. Surface flow visualisation and pressure measurements were also carried out. For a given hole size, the increments in lift, drag and pitching moment coefficients produced trends when plotted against the difference between the upper and lower surface pressure coefficients on the undamaged aerofoil taken at the location of the damage. These trends are used as the basis of the predictive technique. The technique is used to predict the effects of a previously untested damage case, and these are compared with wind tunnel tests carried out on a half model finite aspect ratio wing. For all coefficients the trends in the predicted data are similar to experiment, although there are some discrepancies in absolute values. For the drag coefficient these discrepancies are partly accounted for by limitations in the technique, whilst discrepancies in the lift and pitching moment coefficients are attributed to limitations in the aerofoil test arrangements.
NOMENCLATURE
diameter of 20%c hole
dC d drag coefficient increment due to damage for aerofoil
dC D drag coefficient increment due to damage for finite aspect ratio wing
dC l lift coefficient increment due to damage for aerofoil
dC L lift coefficient increment due to damage for finite aspect ratio wing
dC m pitching moment coefficient increment due to damage for aerofoil
dC M pitching moment coefficient increment due to damage for finite aspect ratio wing
F hs scaling factor for damage hole size
INTRODUCTION
Aircraft survivability in a combat environment is an important aspect of the design process. Most survivability assessments concentrate on structural and systems integrity, but it is known that aircraft can survive a significant level of damage and continue flying. To assess whether an aircraft can survive an attack and still fly to a friendly base requires the ability to estimate the influence of damage on the aerodynamic characteristics of an aircraft. The exact nature of damage sustained by an aircraft is a function of many variables (e.g. weapon type, attack angle, etc.) so any method used to determine the aerodynamic effects of damage must be able to consider a wide range of damage scenarios. Ideally the method should also be straight forward to use so that critical damage scenarios can be quickly identified, and then analysed in more detail.
The main aim of the present study is to develop a simple technique to predict the effects of battle damage on the aerodynamic characteristics of a finite aspect ratio wing. Such a method will allow the survivability analyst to identify critical damage cases which may require more detailed investigation by either numerical or experimental methods. Whilst computational methods (1,2) can be used to predict the effects of battle damage with a reasonable degree of accuracy, the time required to generate model grids makes them unsuitable for rapidly assessing the wide range of possible damage scenarios on a wing in a short period of time. Possible damage cases on other lifting surfaces (e.g. fin and tailplane) add to the complexity of the survivability analyst's task and increases the desirability of having a simple predictive technique.
Whilst developing the predictive technique, the paper also extends the existing experimental knowledge base on battle damage by investigating the aerodynamic effects of damage arising from a range of attack directions.
PREVIOUS STUDIES
The first systematic investigation into the aerodynamic effects of battle damage was carried out by Irwin and Render (3). In their paper they noted that 'previous work had considered the effects of very simple forms of damage on aircraft models at high speeds, while examinations into low speed characteristics have failed to explain the aerodynamic effects'. Irwin and Render (3) investigated the effect of simulated gun fire damage on the two-dimensional characteristics of a NACA 64₁-412 aerofoil. The damage was modelled by circular holes located at either quarter or half chord with diameters ranging from 10% to 40% of aerofoil chord. The flow through the damage hole was shown to have many of the features identified by Andreopoulos and Rodi (4) for a jet in cross flow when it emerges from a flat plate. This allowed the flow through the damage to be characterised as being either a weak or strong jet. A sketch of the key features of a weak jet is reproduced from Irwin and Render (3) and shown as Fig. 1. Forward of the damage hole, the flow over the wing separated at the forward separation line and a horseshoe vortex was formed. Upon exit, the jet through the hole was immediately bent over and attached itself to the surface of the wing. At the downstream edge of the hole a contra rotating vortex pair was seen. This vortex pair is a significant feature of jets in cross flow and, as described by Mahesh (5), different mechanisms have been proposed to explain their production. Downstream of the hole, the flow in the attached wake was seen to have a varying velocity profile with the highest values at the edges. Irwin and Render (3) determined that weak jets resulted in small changes in lift, drag and pitching moment, and were associated with small holes or larger holes at low incidence. Irwin and Render's (3) sketch of a strong jet is shown in Fig. 2. A forward separation line indicating the formation of a horseshoe vortex was again seen upstream of the damage. Behind the damage the spanwise distance between the two arms of the horseshoe vortex was far greater than seen for the weak jet. Towards the trailing edge, the horseshoe vortex ended in a pair of contra rotating vortices on the surface of the wing, with reverse flow between them which was entrained around the trailing edge of the wing. As this reverse flow approached the damage it was entrained into the strong jet which had exited through the damage hole and penetrated into the freestream flow. It was observed that strong jets resulted in large changes in aerodynamic forces and moments, and were associated with large diameter holes and small holes at high incidence.
Irwin and Render (6) investigated the influence of a wing's internal structure and showed that the presence of a cavity reduced the size of lift and drag changes due to damage. Although these changes were relatively small, they became more significant as incidence or hole size increased, which indicates that modelling of the internal structure is desirable for battle damage studies.
Two criticisms that can be levelled against Irwin and Render (3) are that circular holes are unrepresentative of battle damage and that they ignored petalling which often takes place around a hole when a projectile passes through a metal structure. Render et al (7) investigated the effect of hole shape by considering star shaped damage. Despite the complexity of the flow through the damage, flow visualisation revealed that the flow could still be categorised as a weak or strong jet with features broadly similar to those identified by Irwin and Render (3) . By considering both flow visualisation and the size of lift, drag and pitching moment changes due to damage it was concluded that circular holes are a reasonable representation of battle damage, provided the diameter is close to the maximum width of the damage being simulated. These findings are broadly in line with those of Robinson and Leishman (8) who investigated ballistic effects on helicopter rotor blades and concluded that the shape of the damage played only a minor role in the aerodynamic degradation. Robinson and Leishman (8) also used serrations to simulate petalling, but these did not change the basic characteristics of the flow through the damage, although the resulting wake appeared to be more energetic. In terms of the aerodynamic losses it was concluded that the hole rather than the serrations was the dominant source of changes in aerodynamic forces and moments.
Computational studies using a circular hole with a diameter of 30% of aerofoil chord and a star shaped case from Render (7) have been carried out by Saeedi et al (1) . In both cases the wing was solid and there was no attempt to model an internal wing geometry. The numerical simulations confirmed the flow features seen in the wind tunnel for both weak and strong jets, and the predicted effects of the damage on lift, drag and pitching moment were close to experiments, although the agreement diverged as incidence was increased towards stall. Saeedi et al (1) also investigated the flow inside the hole and observed a complicated vortex arrangement of interacting vortices in the upper and lower parts of the hole, which were also symmetrically placed on the left and right sides of the hole.
The aerodynamic effects of battle damage on finite aspect ratio wings was investigated by Render et al (9) for constant chord unswept wings between aspect ratios of 6 and 10. The flow through the damage was asymmetric, i.e. differences existed between the outboard and inboard sides of the damage. This was attributed to the variation in static pressure along the span of the wing which weakened the jet slightly on the outboard side. However, the jet still retained the flow characteristics observed by Irwin and Render (3) and also seen for jets in cross flow. Render et al (9) also showed that the effects of damage on lift and drag can be related to the difference between the pressure coefficients on the upper and lower surfaces of the undamaged wing at the location of the damage. This gives rise to the possibility of using the pressure distribution around an undamaged wing to predict the likely effects of damage.
Most recently, Render and Pickhaver (10) investigated the effects of battle damage on a NASA LS aerofoil, including the effects of attack direction. Changes in attack direction result in the upper and lower surface holes being displaced relative to each other. This initial investigation into attack direction is extended in the current paper. Pickhaver and Render (11) then demonstrated that the techniques developed by Render et al (9) could be successfully used to predict the effects of damage on a wing with an aspect ratio of 6. A significant finding identified by Pickhaver and Render (11) was that for all attack directions, the lift, drag and pitching moment increments were related to the difference between the pressure coefficients on the upper and lower surfaces of the undamaged wing at the location of the damage. When plotting increment data against difference in pressure coefficient, it was seen that there was no distinction between weak and strong jets. Pickhaver and Render (11) also took the opportunity to investigate the effects of Reynolds number and concluded that between Reynolds numbers of 1,000,000 and 500,000 there was no noticeable difference in the aerodynamic effects of battle damage.
AIMS AND OBJECTIVES
As previously stated, the main aim of the present study is to develop a simple technique to predict the effects of battle damage on the aerodynamic characteristics of a finite aspect ratio wing. Render et al (9) provide the basis of a predictive technique, but there remains a significant drawback with the proposed method because it requires input data on the aerodynamic effects for each damage geometry of interest. This data could be obtained by either wind-tunnel testing or computational methods, but for a large number of damage cases this would involve significant time and effort. Such an expenditure of time and effort is undesirable at the early stages of a survivability analysis, where the emphasis is on quickly identifying critical damage conditions for detailed analysis. This paper demonstrates that a limited input data set (e.g. two damage cases) can be used to predict a range of damage conditions. Nearly all of the previously described battle damage studies have considered simulated gunfire damage that was normal to the aerofoil's chord line. In other words, the wing was hit by a shell or bullet which was fired from either directly below or directly above. In reality, gun fire can come from a range of attack angles. For example, an attack direction from ahead and below an aircraft is typical of anti-aircraft gun emplacements, whilst attacks from above and behind are typical of cannon fire from enemy aircraft. This paper investigates the effects of attack direction and incorporates the findings into the developed predictive technique.
The stages used to develop the predictive technique were: 1. Wind-tunnel tests on a two-dimensional aerofoil model to provide input data for the predictive technique. This included determining the aerodynamic effects of damage for a range of hole diameters and attack angles.
2. Wind-tunnel tests on a finite aspect ratio wing. This served two purposes: (a) To obtain pressure coefficient data for an undamaged wing. These results were used as an input to the predictive technique.
(b) To determine the aerodynamic effects of some of the damage cases previously tested on the two-dimensional aerofoil. This provided data for both developing the predictive technique and assessing the accuracy of predicted results.
3. Use of the predictive technique to predict the effects of previously untested damage cases on the aerodynamic characteristics of a finite aspect ratio wing.
4. Wind-tunnel testing of the finite aspect ratio wing with the previously untested damage cases.
5. Comparison of predicted results from stage 3 with wind-tunnel tests from stage 4 to assess the accuracy of the predictive technique.
The NASA LS(1)-0417MOD aerofoil (12) was selected for the study since it is a more modern design than the NACA aerofoil used by Irwin and Render (3) . The aerofoil is also likely to be similar to those found on modern low speed aircraft such as Unmanned Air Vehicles (UAV) designed for the reconnaissance or surveillance roles.
TWO-DIMENSIONAL TEST ARRANGEMENT
The aerofoil model had a chord of 200mm and incorporated an internal cavity between 0·24c and 0·75c to replicate the internal structure of an aircraft wing (Fig. 3). The leading and trailing edges of the model were solid and manufactured from ProLab 65, which is a synthetic modelling board. The top and bottom of the cavity were formed by removable panels. These panels were moulded from fibreglass and attached to the model by countersunk screws. The aerofoil model was installed in Loughborough University's low turbulence wind tunnel, which has a working section 0·45m × 0·45m and a turbulence intensity of typically 0·1%. The model was mounted on to a balance beneath the working section by means of fore and aft struts, and spanned the working section to give approximately two-dimensional conditions for the undamaged aerofoil (Fig. 4). Incidence was increased from -4° to 15°, which covered the zero-lift and stall of the aerofoil. Increments in incidence were 2°, reducing to 1° as stall was approached. The balance measured lift, drag and pitching moment with calibrated accuracies of better than 0·05% full scale deflection. Balance readings were recorded by a PC using LabView software and a National Instruments CompactRIO data acquisition system. A mixture of titanium dioxide, paraffin and linseed oil was used to obtain flow visualisation at the same incidences used for balance measurements. All tests were run at an air velocity of around 37 m/s, which was close to the wind tunnel's maximum, and resulted in a Reynolds number of 500,000. A transition strip was installed on the upper surface at 0·075c to minimise potential Reynolds number effects. Simulated battle damage consisting of circular holes was added to the removable panels, with a new set of panels used for each damage case tested. As reported by Irwin and Render (3), gunfire damage can result in hole diameters ranging from 5% of local wing chord (denoted 5%c) for a small shell at the wing root to 100%c for a large anti-aircraft shell close to a wing tip. To retain structural integrity of the removable wing panels, three hole diameters of 20%c, 30%c and 40%c are considered in this paper. Diameters of 5%c and 10%c were also tested during the study, but changes in forces and moments were small and difficult to distinguish from repeatability errors. The angular orientations of the damage (the tested combinations of obliquity and skew) are listed in Table 1. The length of the model's chord was chosen to allow more precise damage modelling. However, this gave a chord to tunnel height ratio of 0·444, which is significantly larger than the normally accepted maximum of 0·35 (13). Wind-tunnel corrections were applied to balance measurements using the method of Garner (14). The applicability of this method was established at the start of the study when wind-tunnel tests were carried out on undamaged models of 200 and 141mm chord at the same Reynolds number. The latter model gave a chord to tunnel height ratio of 0·3. Comparing the results from the two models indicated that the adopted wind-tunnel corrections were valid for the 200mm chord model.
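As a quick sanity check of the quoted test condition, the Reynolds number follows directly from the stated velocity and chord. The sketch below assumes a standard sea-level kinematic viscosity, which is not stated in the paper; it is purely illustrative.

```python
# Reynolds number check for the two-dimensional tests.
V = 37.0       # freestream velocity, m/s (quoted above)
c = 0.200      # model chord, m
nu = 1.48e-5   # kinematic viscosity of air, m^2/s (assumed sea-level value)

Re = V * c / nu
print(f"Re = {Re:.3g}")  # ~5.0e5, matching the quoted 500,000
```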
The acceptability of the data from the undamaged model was assessed by comparing with NASA wind-tunnel data (12) collected at a Reynolds number of 2,000,000. This was the lowest Reynolds number at which data was available, and a comparison was only possible because of the presence of a transition strip on the undamaged model. Unlike the present study, the NASA data for lift and pitching moment coefficients was obtained from surface pressure measurements and the drag data from a wake survey, and this will account for some of the discrepancies between the two sets of data shown in Fig. 6. However, the main cause of discrepancies can be attributed to the difference in model construction. The NASA data is for an aerofoil model with a smooth, continuous and accurate contour, which was not the case for the undamaged hollow model with its removable panels. Despite fitting well, the panels produced discontinuities in the model surface. Flow visualisation showed that, compared to the rest of the model, the upper surface panel produced early separation over the rear of the model and resulted in a premature stall (Fig. 6(a)). This flow separation, combined with the surface discontinuities, resulted in an increased drag coefficient across the incidence range (Fig. 6(b)). The pitching moment coefficients for the two models (Fig. 6(c)) were similar, although the present study produced smaller values. This is believed to be due to friction in the pin joints used for mounting the present model in the wind tunnel. Taking into account the differences in model profiles, comparisons with the NASA data indicated that acceptable undamaged data was produced by the present study. One set of undamaged panels was pressure tapped to provide surface pressure measurements. Chordwise tappings were placed on the centreline of the panel at fixed intervals of 10mm. The tappings were connected to a Pressure Systems 16TC/DTC pressure scanner linked to a PC via a Chell CanDAQ data acquisition unit. The data acquisition software sampled each pressure tapping 8,192 times over a period of approximately 30 seconds. The nominal accuracy of the pressure measurements was ±0·0696 mmH2O.
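These tapping measurements feed the correlating parameter used later in the predictive technique: the difference between the upper and lower surface pressure coefficients integrated over the hole footprint (dC p). The sketch below is a minimal illustration of one plausible way to perform that integration from chordwise tapping data; the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def delta_cp_over_hole(x_taps, cp_upper, cp_lower, hole_centre, hole_dia):
    """Approximate the upper/lower surface pressure coefficient difference
    (dCp) averaged over the footprint of a circular damage hole.

    x_taps      : chordwise tapping positions (fractions of chord, increasing)
    cp_upper/.. : measured Cp at those positions on each surface
    hole_centre : chordwise position of the hole axis (fraction of chord)
    hole_dia    : hole diameter as a fraction of chord
    """
    r = hole_dia / 2.0
    # Chordwise stations spanning the hole footprint
    x = np.linspace(hole_centre - r, hole_centre + r, 101)
    # Local spanwise width of the circular footprint at each station
    w = 2.0 * np.sqrt(np.maximum(r**2 - (x - hole_centre)**2, 0.0))
    # Interpolate the surface pressures onto the hole stations
    dcp = np.interp(x, x_taps, cp_upper) - np.interp(x, x_taps, cp_lower)
    # Area-weighted average over the footprint
    return np.trapz(dcp * w, x) / np.trapz(w, x)
```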
The results for the damage cases are presented as increments in lift, drag and pitching moment coefficients relative to the undamaged wing and are defined as:

dC l = C l,damaged - C l,undamaged . . . (1)

dC d = C d,damaged - C d,undamaged . . . (2)

dC m = C m,damaged - C m,undamaged . . . (3)

where lower case l, d and m denote two-dimensional wing quantities. Tests on different damage cases indicated that the repeatability levels for the increments were ±0·013 for dC l, ±0·0013 for dC d and ±0·0020 for dC m.
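A minimal sketch of Equations (1) to (3), assuming the corrected balance coefficients are available for both configurations at matching incidences (the data layout is an assumption for illustration):

```python
def coefficient_increments(damaged, undamaged):
    """Increments due to damage, Equations (1) to (3): damaged minus
    undamaged coefficients at the same incidence.

    Both inputs map incidence in degrees to a (Cl, Cd, Cm) tuple of
    wind-tunnel-corrected balance coefficients.
    """
    increments = {}
    for alpha, (cl_d, cd_d, cm_d) in damaged.items():
        cl_u, cd_u, cm_u = undamaged[alpha]
        increments[alpha] = (cl_d - cl_u, cd_d - cd_u, cm_d - cm_u)
    return increments
```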
THE INFLUENCE OF ATTACK DIRECTION
Flow visualisation for 20%c straight through damage is shown in Fig. 7. The leading edge of the model is at the bottom of the pictures. To avoid contamination of the transition strip, the flow visualisation mixture was applied downstream of the strip. Using the key flow features identified by Irwin and Render (3), a qualitative assessment of jet strength can be made. Figure 7(a) is at zero incidence and shows the sides of a horseshoe vortex (A) which formed around the front of the hole and then travelled to the trailing edge of the wing. Behind the hole, and bounded by the sides of the horseshoe vortex, was an attached wake (B). The horseshoe vortex and the attached wake are the key features of a weak jet identified by Irwin and Render (3). The flow through the damage strengthened with increasing incidence and, as shown in Fig. 7(b), at 4° all of the features described above were present. Increasing incidence to 8° resulted in further strengthening of the damage jet and an increased size of spanwise disturbance to the flow over the upper surface of the model (Fig. 7(c)).

Introducing obliquity had a marked effect on the flow through the damage, as is shown in Fig. 8. The two pictures are for 20%c damage at +60° and -60° obliquity and can be compared with the straight through case shown in Fig. 7(c). The three pictures illustrate how the jet strength and the extent of the flow disturbance increased as the obliquity became more negative. Although not shown, the results for +30° and -30° obliquity fitted in with these trends. The increasing jet strength was evident at all incidences and can be attributed to the increased pressure difference across the damage hole as obliquity was increased. The obliquity cases are still effectively jets in cross flow, but due to the presence of the internal cavity they are not similar to the inclined jets in cross flow studied by workers such as Compton and Johnston (15) and Milanovic and Zaman (16). In these studies the axis of the jet at exit was specifically inclined to the freestream flow.

To investigate the effects of the cavity, flow visualisation studies were carried out on the inside of the removable panels. When viewing the internal flow visualisation pictures it should be noted that the panels had to be removed from the model to allow the photographs to be taken. As a result, flow visualisation mixture sometimes flowed whilst the panels were removed. The pairs of photographs in Figs 9 to 11 are of the upper and lower panels for one damage case at 8° incidence. The leading edge of each panel is at the bottom of the photograph, and the top and bottom of each photograph coincide with the start of the wing spars. For the -60° obliquity case there is little evidence of flow over the lower panel (Fig. 9(a)) apart from two inclined lines (A) either side of the hole and a small collection of liquid on either side and towards the rear of the hole (B). By contrast, the upper surface (Fig. 9(b)) shows significant flow, as is evident from the relative absence of mixture.

The internal flow visualisation for the straight through case is shown in Fig. 10 and it is clear from the upper surface panel (Fig. 10(b)) that virtually all of the internal flow took place in the rear half of the cavity. Good quality photographs for the lower surface proved difficult to get because a pool of mixture collected towards the rear of the hole and tended to run whilst the panel was removed. From the lower surface panel (Fig. 10(a)) it appears that upon entering the cavity at A, the air flowed rearwards, spread along the rear spar (B) and then towards the upper panel, where it exited through the rear of the upper surface hole (i.e. behind C in Fig. 10(b)). This flow behaviour within the cavity is similar to that seen for -60° obliquity. Interestingly, the internal flow visualisation suggests that the jet exited from the rear of the hole. This is in contrast to Fig. 7(c), which by the position of the horseshoe vortex indicates that the jet occupied the entire hole. This apparent contradiction suggests that some flow passed through the front of the lower and upper holes without entering the cavity. At +60° obliquity (Fig. 11) there was less evidence of flow along the lower panel towards the rear spar. The collection of liquid behind the hole (A in Fig. 11(a)) and the bulge in the flow pattern forward of the hole on the upper surface (B in Fig. 11(b)) suggest that upon entering the cavity the flow through the lower hole had sufficient momentum to impact on the upper surface and then spread out to the sides before exiting through the upper surface hole. It should be noted that due to the presence of the rear spar there was little flow through the most rearward part of the upper surface hole.
Based on the external flow visualisation it was anticipated that reducing the obliquity from +60° would result in a reduction in lift coefficient and an increase in drag coefficient at any given incidence. This is confirmed by the coefficient increments shown in Fig. 12. For all increments there were broadly three regions. At the lowest incidences the damage flow for all cases was a weak jet and the increments were similar for all obliquity angles. The onset of strong jet flow marked the start of the second region. For the -60° obliquity case the transition to strong jet flow occurred at around 0° of incidence, whilst this was delayed to 4° for +60° obliquity. With the strong jet established, the increments became larger and a distinct trend developed with obliquity. For all increments the effects of damage reduced after 10° of incidence and the trends with obliquity were less well defined. This represents the third region and coincides with the upper limit of the linear portion of the undamaged lift curve slope and the onset of significant separation over the rear of the undamaged model behind the panel.
The behaviour of the +60° obliquity case is noteworthy as it was not always consistent with the other cases. Both the lift coefficient (Fig. 12(a)) and the drag coefficient (Fig. 12(b)) increments peaked at 6° incidence as opposed to 10° for all of the other obliquity cases. For the pitching moment coefficient increments (Fig. 12(c)) the effects were more subtle, but it can be observed that the +60° obliquity case crossed the +30° curve at 8° incidence. The behaviour of the +60° obliquity case can be explained by flow visualisation on the upper surface. At 4° (Fig. 13(a)) the damage flow was a strong jet. At 6° (Fig. 13(b)) the jet appeared to weaken since the extent of the wake was visibly smaller. The jet remained in this state for further increases in incidence (Fig. 8(a)). From Fig. 11 it is known that at these higher incidences the flow through the lower hole impacted on the upper surface of the cavity.
Skewing the damage hole brought about little change in the increments previously shown for the straight through and obliquity cases. This is not surprising since the pressure difference across the damage hole is unlikely to change significantly with skew, as there is no spanwise variation in the surface pressures of an undamaged two dimensional wing. However, skew did introduce asymmetry into the flow at all incidences. This is illustrated by Fig. 15 for a skew angle of 60° at an incidence of 8°. Comparison with the straight through case in Fig. 7(c) shows jets of comparable strengths, but the skew case is asymmetric. This is most clearly shown by the contra rotating vortex pair on the edge of the hole, which appear to be slightly asymmetric. This asymmetry is introduced by the flow within the cavity, which entered through the lower hole (Fig. 16(a)) and then spread out before exiting the hole between the positions B in Fig. 16(b).
DEVELOPMENT OF THE PREDICTIVE TECHNIQUE
Render et al (9) showed that for straight through damage the value of a coefficient increment was related to the difference between the pressure coefficients on the upper and lower surfaces of the undamaged wing at the damage location (dC p). Figure 17 shows the pre-stall coefficient increments for the 20%c hole cases with and without obliquity, including both weak and strong jets, plotted against dC p for the undamaged aerofoil. The values of dC p were obtained by integrating surface pressures obtained from wind-tunnel measurements over the areas of the holes at the upper and lower locations. In Fig. 17 a trend for each increment with dC p is apparent, admittedly with some scatter. However, it is important to note that the scatter is not due to trends in the data resulting from changes in obliquity or skew. Best fit curves could be placed through each data set and these curves could be used to estimate the likely effects of a 20%c hole at any combination of skew and obliquity between the tested extremes of ±60° obliquity. The data in Fig. 17 has been compiled using results from six different obliquity cases. This represents a significant amount of wind-tunnel testing and is contrary to the idea of producing a predictive technique that requires minimal input data. A realistic minimal input data set could comprise the extreme damage cases of interest, which in this study would be +60° and -60° obliquity. This would reduce the number of wind-tunnel test cases to two. These two cases are highlighted in Fig. 17 along with the best fit curves through this data. The curves were derived from a second order least squares fit, which, given the scatter in the measured data, was deemed to give an acceptable level of accuracy. For all coefficient increments the best fit curves are close to the curves through all of the data and indicate that the use of the two extreme damage cases will provide an acceptable alternative for estimating the effects of 20%c damage holes. For reference, the equations of the best fit curves through the two extreme damage cases are reproduced below since they are used in the following analysis.
dC lp = (second-order best fit for the lift increment; the coefficients of this equation are not legible in the source) . . . (4)

dC dp = 0·0376766 (dC p)² + 3·87867 × 10⁻³ (dC p) . . . (5)

dC mp = -0·0450634 (dC p)² - 0·0180391 (dC p) . . . (6)

where dC lp, dC dp and dC mp are the predicted coefficient increments for 20%c damage on the LS(1)-0417MOD aerofoil. The damage cases of interest are unlikely to be just one diameter, so it is necessary to consider the effects of damage size. This is illustrated by Fig. 18, where the increments for straight through cases of three different diameters (20%c, 30%c and 40%c) have been plotted against dC p. Increasing the hole diameter resulted in increased jet strength and larger increments, which is consistent with the findings of Irwin and Render (3). As with Irwin and Render, the present study found that the pitching moment increments (Fig. 18(c)) were largely independent of damage size. This suggests that the pitching moment results for the 20%c case could be applied to any hole diameter up to 40%c. For the lift and drag increments it would be possible to produce best fit curves for a range of hole sizes and then interpolate to intermediate diameters of interest. However, the aim of the predictive technique is to produce a method that requires only limited inputs from wind-tunnel testing or numerical analysis, and the testing or computational analysis needed to produce best fit curves for a range of hole sizes is likely to be significant. Shown in Figs 18(a) and (b) are least squares straight lines for the data from each diameter. Normalising the gradients of the lines for the two largest holes by the gradient of the 20%c line produces the values in Table 2. Also in Table 2 are the hole diameters normalised by the diameter of the 20%c hole. The value of the normalised hole diameter is within 10% of the normalised line gradients and it is suggested that over a limited range of hole sizes the use of diameter ratio is a convenient method to account for the effects of hole diameter. This can be expressed as:

dC l = F hs × dC lp . . . (7)

dC d = F hs × dC dp . . . (8)

where dC lp and dC dp are given by Equations (4) and (5). F hs is the normalised diameter defined by:

F hs = D Dia / D Ref . . . (9)

where D Dia is the diameter of the damage of interest and D Ref is the diameter of the reference hole used to formulate the best fit curves (Equations (4) to (6)). For the present study D Ref is 20%c. Analysis of the data collected by Irwin (17) produced similar results to those shown in Table 2, and it is believed that the use of F hs is likely to be applicable to all aerofoils provided that 1 < F hs < 2. Whilst this method is approximate, its advantage lies in that no additional wind-tunnel testing or computational runs are required to account for hole size. For smaller holes the use of F hs is likely to remain appropriate. However, it is important to note that small holes produce weak jet flows over large parts of the incidence range, and consequently small coefficient increments. Use of the 20%c hole, which had a strong jet by 4° incidence, means that the outlined method is essentially a strong jet method and is likely to over predict the magnitude of increments for small holes.
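As a compact illustration of Equations (5), (6) and (9) and the hole-size scaling of Equations (7) and (8), a Python sketch follows. Equation (4), the lift fit, is omitted because its coefficients did not survive in the source; its treatment would mirror the drag fit. Names and the data layout are illustrative.

```python
def predict_increments(dcp, hole_dia, ref_dia=0.20):
    """Predicted drag and pitching moment increments (aerofoil geometry)
    for a circular hole of diameter hole_dia (as a fraction of chord),
    given dCp, the undamaged upper/lower surface pressure coefficient
    difference at the damage location.
    """
    f_hs = hole_dia / ref_dia                       # Equation (9)
    if not 1.0 <= f_hs < 2.0:
        raise ValueError("F_hs outside the suggested range 1 < F_hs < 2")
    dC_dp = 0.0376766 * dcp**2 + 3.87867e-3 * dcp   # Equation (5)
    dC_mp = -0.0450634 * dcp**2 - 0.0180391 * dcp   # Equation (6)
    # Equation (8) scales the drag fit by F_hs; the pitching moment is
    # taken as independent of hole size, as noted above.
    return f_hs * dC_dp, dC_mp
```

For example, a 28%c hole gives F hs = 0·28/0·20 = 1·4, the value used later for the previously untested damage cases.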
The method outlined so far produces predictions of the coefficient increments for an aerofoil with battle damage with skew and obliquity. To analyse an aircraft, these increments will need to be converted to the geometry of the finite aspect ratio wing. The basic method has been outlined by Render (9), but the method was developed for a special case where the chords of the aerofoil and the finite aspect ratio wing were identical. The method has been extended by Pickhaver (18) to allow predictions for a finite aspect ratio wing with sweep and taper, but for the present paper an unswept and untapered wing is considered. The equations for this type of wing are Equations (10) to (12), which convert the aerofoil increments using the spans and chords of the two wings (the explicit forms of these equations are not legible in the source). The use of upper case subscripts indicates that dC L, dC D and dC M are the lift, drag and pitching moment coefficient increments for the finite aspect ratio wing. In addition, b is wing span and c is wing chord, with the subscripts 2D and 3D being used to identify the two dimensional and three dimensional wings respectively. Using the predicted coefficient increment values from Equations (6), (7) and (8) in Equations (10) to (12) yields the final predicted coefficient increments for the finite aspect ratio wing.
ASSESSMENT OF RESULTS FROM THE PREDICTIVE TECHNIQUE
During its development, the predictive technique was assessed against battle damage wind-tunnel data collected during the present study. However, a more realistic assessment is to mimic the future use of the technique by predicting previously untested damage cases. Two damage cases were defined and tested on an existing finite aspect ratio half wing model. The model had been previously used for battle damage studies and had the same aerofoil and internal construction as the model used in the previously described two dimensional tests. The cavity extended from the wing root to 85% of span, with the solid tip required to maintain the strength and stiffness of the model. The removable panels were located at three different spanwise locations, so that the centres of the panels were located at 25%, 50% and 75% of span. The model was untwisted and had a constant chord of 325mm and a span of 975mm. The half model configuration is shown in Fig. 19 and gave an effective aspect ratio of 6. Tests were carried out in the Loughborough University 1·9m × 1·3m closed working section wind tunnel, which had a turbulence intensity of less than 0·15%, at the model Reynolds number of 1,000,000. This was a higher Reynolds number than could be achieved for two-dimensional testing, but testing at the higher Reynolds number was desirable to minimise inaccuracies in test data. Both the two and three dimensional models had transition strips at the same locations to minimise the effects of Reynolds number, but finite aspect ratio tests were also conducted at 500,000 to ensure that there were no significant Reynolds number effects on the coefficient increments presented in Figs 20 and 21. The experimental coefficient increments for the finite aspect ratio wing are defined by Equations (1) to (3), but use upper case letters as subscripts to identify three dimensional coefficients, with the resulting increments being written as dC L, dC D and dC M. Force and moment coefficients were measured using an underfloor balance which had a nominal accuracy of better than 0·05% full scale deflection for all components. The overall repeatability of balance results was assessed as dC L = ±0·0072, dC D = ±0·0018 and dC M = ±0·0012. Balance measurements were corrected for blockage (14) and lift interference (19). Surface pressure measurements were carried out using the same system described for the two dimensional testing.
The two previously untested damage cases both used a 28%c diameter hole. The first had an obliquity of -50° with zero skew. This obliquity was chosen because it was the largest that could be achieved for this hole size without having to cut into the front face of the cavity. The second damage case had -35° obliquity and +55° skew, with the axis of the hole centres moved so that the upper hole was as close as possible to the leading edge of the panel. This resulted in the axis of the holes being moved forward to 0·43c. Results for the second case will be presented since the comparisons between experiment and prediction were similar for both cases. In addition, results are only presented for damage at the centre span location. All three span locations were assessed, but differences between wind tunnel and predictions were seen to remain consistent. Pre-requisites for the predictive technique are: • Surface static pressure distributions for the undamaged aerofoil.
• Coefficient increment data for the damaged aerofoil over a range of incidences. This should be obtained for at least two damage cases which cover the range of obliquity angles of interest. The damage size should be reasonably close so that the diameter ratio (F hs) is less than 2. Data for a 20%c hole at -60° and +60° obliquity will be used. The resulting value of F hs is 1·4.
• Surface static pressure distributions for the undamaged finite aspect ratio wing at the spanwise locations of the damage.
For this paper, all of the above prerequisite data came from wind-tunnel measurements, but could equally well have come from computational methods. The following steps are used to provide the predictions: • The surface static pressure data for the aerofoil is used to determine the pressure coefficient difference (dC p ) at each incidence for each of the damage locations considered for the aerofoil. This is then combined with the coefficient increment data for the aerofoil to produce equivalent plots to Fig. 17.
• Best fit curves are then placed through the data. In the present paper the best fit curves through the ±60° obliquity cases are given by Equations (4) to (6).
• Values of dC p for the undamaged finite aspect ratio wing are calculated at each damage location and for each incidence.
• The coefficient increments for the aerofoil are calculated for each value of dC p using the equations for the best fit curves developed in step 2.
• These increments then need to be converted to the hole size of interest. This is done by using Equations (7) and (8) with the appropriate value of F hs from Equation (9).
• The increments are currently expressed in terms of the aerofoil geometry. To convert to the geometry of the present finite aspect ratio wing, which is unswept and untapered, requires the use of Equations (10) to (12). The spans and chords are those of the two wind-tunnel models which have already been defined.
• Knowing the relationship between dC p and incidence for the finite aspect ratio wing allows the final predictions to be plotted (Fig. 20).
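Putting the steps above together, the following sketch shows one way the pipeline could be driven. It reuses the predict_increments function from the earlier sketch; the geometric conversion of Equations (10) to (12), whose explicit form is not recoverable here, is deliberately left as an injected function rather than guessed at.

```python
def predict_wing_increments(dcp_3d_by_alpha, hole_dia, convert_2d_to_3d):
    """Predicted increments for the finite aspect ratio wing at each
    incidence.

    dcp_3d_by_alpha : maps incidence (deg) to the undamaged-wing dCp at
                      the damage location
    convert_2d_to_3d: function implementing Equations (10) to (12),
                      taking aerofoil-geometry increments and returning
                      wing-geometry values
    """
    predictions = {}
    for alpha, dcp in sorted(dcp_3d_by_alpha.items()):
        # Evaluate the best-fit curves and scale by F_hs (Eqs (5)-(9))
        dC_d, dC_m = predict_increments(dcp, hole_dia)
        # Convert to the finite aspect ratio wing geometry (Eqs (10)-(12))
        predictions[alpha] = convert_2d_to_3d(dC_d, dC_m)
    return predictions
```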
In Fig. 20, the predicted trends are similar to wind-tunnel results and this confirms the usefulness of the technique for use in the initial stages of a survivability analysis. Sensitivity studies were carried out by varying the damage cases used to compute the best fit curves in Fig. 17; however, these produced little change in the predicted results and the comparisons with wind-tunnel data remained essentially the same. For the lift coefficient increment there is a divergence between prediction and experiment at the highest incidences. A similar divergence between finite aspect ratio experiment and prediction was noted during development of the prediction technique for the 20%c hole at -60° obliquity. The two dimensional experimental data for this damage case (converted to finite aspect ratio geometry by using Equations (10) to (12)) is compared with finite aspect ratio experimental results for dC L in Fig. 21. The two data points closest to the divergence point at dC p = -0·8 are for the aerofoil at dC p = -0·791, which is 4° of incidence, and dC p = -0·768 for the finite aspect ratio wing at 8°. Flow visualisation of the aerofoil case has been previously shown in Fig. 8(b). Flow visualisation for the finite aspect ratio wing is shown in Fig. 22. To retain consistency with previous photographs, the image has been rotated so that the leading edge of the model is at the bottom and the right of the photograph is nearest the wing tip. Surface flow visualisation was difficult for the vertically mounted half model because gravity caused liquid that had collected at separation points to flow downwards towards the wing root. This has occurred to the liquid at the centres of the large vortices on the surface of the wing, and for the vortex closest to the tip the liquid has flowed along the separation line between the reverse flow and the expanding jet. The jet on the finite aspect ratio wing is twisted due to the spanwise pressure variation but, based on the findings from Render (9), this is not expected to produce significant changes in dC L. Expansion of the jet downstream of the damage hole appears to be more significant for the finite aspect ratio wing, and detailed measurements of the width of the horseshoe vortex indicated that the jet on the finite aspect ratio wing was slightly stronger despite being at a slightly lower dC p. This difference in jet strength may be due to the relatively small wind tunnel used for the two dimensional testing. It is possible that the closeness of the tunnel walls constrained the development of the jet for this extreme obliquity at incidences of 4° and above. However, it is important to note that none of the other damage cases tested on both the aerofoil and finite aspect ratio wing showed a mismatch. For the previously untested damage case, dC D was under predicted over most of the incidence range, although the results converge with increasing incidence (Fig. 20(b)). Part of this under prediction is likely to be due to the value of F hs: it is noted from Table 2 that the normalised gradient for the drag coefficient of the 30%c hole was 1·36, which is about 10% less than the corresponding value of F hs (30/20 = 1·5). Given the closeness in size of the 28%c hole, a similar over estimation of F hs may have happened, causing the final dC D values to be over predicted by around 10%.
The predicted trend with incidence for dC M is in line with experiments, although absolute values are under predicted (Fig. 20(c)). Similar discrepancies were seen for all damage cases tested on both the aerofoil and the finite aspect ratio wing, and it is believed that this may well be due to the different mounting methods for the two models. As discussed earlier, compared with NASA data the undamaged aerofoil produced pitching moment coefficients of smaller magnitude, and this was attributed to friction in the pin joints. The half model of the finite aspect ratio wing was mounted directly onto the underfloor balance and this problem did not arise.
CONCLUSIONS
The trends in aerodynamic coefficients due to battle damage previously identified by Irwin and Render (3) are applicable to other aerofoil geometries. These trends are: battle damage increases drag, reduces lift and makes the pitching moment more negative (i.e. nose down). Up to the onset of stall, these effects become more pronounced as incidence is increased.
The addition of negative obliquity (i.e. the upper surface hole is forward of the lower surface hole) resulted in more significant changes in the aerodynamic coefficients. Positive obliquity reduced the magnitude of changes in the aerodynamic coefficients.
For all damage cases with a given hole size, there is a broad trend when coefficient increments are plotted against the difference in pressure coefficients between the upper and lower surfaces on an undamaged aerofoil at the damage locations. Best fit curves through the data can be used as the basis of a method to predict the effects of battle damage on a finite aspect ratio wing. This technique has been demonstrated by comparing predictions with wind-tunnel results for two previously untested cases. | 2018-11-20T08:47:22.688Z | 2015-08-01T00:00:00.000 | {
"year": 2015,
"sha1": "c27f76c0859c7b465449d1b45f1ec843aec16b2d",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/A_technique_to_predict_the_aerodynamic_effects_of_battle_damage_on_an_aircraft_s_wing/9227384/1/files/16806986.pdf",
"oa_status": "GREEN",
"pdf_src": "Cambridge",
"pdf_hash": "8f4e4531344a275a8ae7a1b1bc80b312d3ff7cf8",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
Derivation of Top-Level Aircraft Requirements for Small Aircraft Transport by Modelling Demand in Europe
Abstract. Advancements in aircraft technologies and in the process of aircraft electrification allow for the design of new small aircraft transport (SAT) configurations with a significant impact on sustainability, travel time and operating cost. Additionally, the European Flightpath 2050 creates a European-wide political landscape that enables developments in this field to thrive. Together, this provides an environment that promises to open new business opportunities in the form of new and revived mobility services. However, the described ecosystem raises the question of what demand exists for SAT and what top-level aircraft requirements (TLAR) need to be achieved to realize customer-centric SAT. Data on the existing traffic patterns in Europe is analyzed to create a demand model, derive the TLAR and ultimately lay the foundation for a successful European SAT transport system. Initially, traffic pattern data is collected with a resolution at county and city level, thereby ensuring high accuracy for both larger and smaller travel distances. Subsequent to the data collection, the income distribution in European countries is analyzed and, in combination with a Willingness To Pay (WTP) function, the actually existing SAT demand is determined. The demand-optimized TLAR are then derived by varying the demand model's input parameters to maximize the demand. The described approach makes it possible to extract the potential annual demand in Europe for a certain set of requirements; it also details how a single parameter affects the demand and hence provides sensitivities to illuminate design focal points. In consideration of all the described factors, the paper defines the TLAR, thereby enabling the design of new SAT configurations.
Introduction
Aviation, like every aspect of life, will need to undergo major revolutions to ensure it meets the highly ambitious environmental goals of the future. Currently, aviation contributes 3.9 % annually to the effect of global warming, while only a fraction of the world population uses air transport regularly [1]. This climate impact is likely to rise, considering that projections by the International Transport Forum show that annual passenger kilometers will increase from 44 trillion to 122 trillion by the year 2050 [2].
At the same time, people desire more individual travel solutions that also reduce travel time and cost. On the road, this tendency has been demonstrated in recent years by the success stories of individual on-demand car services. For air transport, this progress has not yet reached the customer, but is rather reflected in the high number of new start-ups aiming to provide short-distance transportation services. Most start-ups select one of two approaches. The first is flying short distances either within cities or within a few miles of the city, relying on electric propulsion and vertical take-off and landing; this category is considered urban air mobility (UAM). Regional air mobility (RAM), the second category, is one size bigger in capacity and range. On the technical side, it also takes advantage of an electrified propulsion system. However, contrary to UAM, RAM uses solely the existing airport infrastructure, taking off and landing conventionally.
This tendency towards more individual transport strains Europe's transport infrastructure. Already, 1 % of the GDP of Europe is annually lost in traffic jams. With road traffic projected to increase by 30 % from 2010 to 2050, this strain on the infrastructure will increase even further [3]. A possible solution is to move the traffic from the ground to the air, since air transport only requires infrastructure at both ends of the journey.
Based on the environmental situation, the tendency towards more individual transport solutions, and the shift towards an electrified propulsion system, the paper aims to show that a significant number of people would take advantage of an additional mode of transport. To ensure that the selected TLAR will produce an aircraft meeting the demand within Europe, a demand model for Europe is created. It models the traffic patterns of trains, cars, conventional aircraft and SAT. The demand model is then used to derive the TLAR for a SAT vehicle.
The paper elaborates on the process of determining suitable TLAR for a SAT vehicle by creating and subsequently applying a European demand model. Since the demand model is the basis for the TLAR derivation, the method of developing such a model is first presented, followed by a description of the TLAR derivation process. After the methodical approach, the results are illustrated in two parts: first, the TLAR derived from the demand model, and second, the TLAR derived from other sources. Finally, the discussion sets the context for the results, followed by the final section, the conclusion.
Methods
Based on the goal stated above, the approach can be split into two parts. First, the development of a demand model by modelling the traffic patterns within Europe. Second, applying the demand model in addition to already existing information to derive the TLAR.
Demand Model
The development of the demand model can be split into data acquisition and filtering, creating a quantity structure to model the traffic patterns and adding additional detailed information to the demand model. With the European demand model established, the next step is to determine the number of people that would switch from the existing modes of transport to SAT.
The approach to data acquisition and filtering is executed in the following steps. As a first step, all SAT-relevant airports and airfields in Europe are identified. For this purpose, three different airport categories are defined which, from a population and geographical perspective, have a significant potential for SAT vehicle services. If an airport has at least 50,000 inhabitants within a radius of 10 km, it is classified as "urban". The second category is considered a "remote" airport, which has at least 10,000 inhabitants within a radius of 20 km and is more than 50 km away from the nearest urban center. An urban center is defined by the European Commission as "high-density clusters of contiguous grid cells of 1 km² with a density of at least 1,500 inhabitants per km² and a minimum population of 50,000" [4]. The third category, "island", represents an airport which is located on an island with at least 10,000 people living within a radius of 10 km of the airport. The analysis and filtering of the airports according to the process mentioned above is carried out using geographic information software. In addition to the geographic coordinates of all European airports, a global grid (1×1 km²) with detailed population data, a vector map with all islands in Europe and the geographic coordinates of all European urban centers are used. By applying the buffer analysis method to the collected data, all airports in Europe that fall into one of the three airport categories are identified. The result is an overview of SAT nodes that represent a significant potential for SAT traffic. Moreover, the individual SAT nodes are evaluated as to whether they meet the infrastructural requirements for SAT vehicles.
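As a rough illustration of the three classification rules, the sketch below applies them to hypothetical airport, population-grid and urban-centre records; the data structures and the plain haversine distance are stand-ins for the GIS layers and buffer analysis used in the paper.

```python
# Minimal sketch of the urban / remote / island classification rules.
# Airport, grid-cell and urban-centre records are hypothetical stand-ins
# for the GIS layers described in the text.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def population_within(airport, grid_cells, radius_km):
    """Sum the population of 1x1 km grid cells within radius_km of the airport."""
    return sum(cell["pop"] for cell in grid_cells
               if haversine_km(airport["lat"], airport["lon"],
                               cell["lat"], cell["lon"]) <= radius_km)

def classify(airport, grid_cells, urban_centres):
    """Return 'urban', 'remote', 'island' or None per the rules in the text."""
    if population_within(airport, grid_cells, 10) >= 50_000:
        return "urban"
    nearest_centre_km = min(haversine_km(airport["lat"], airport["lon"],
                                         c["lat"], c["lon"]) for c in urban_centres)
    if population_within(airport, grid_cells, 20) >= 10_000 and nearest_centre_km > 50:
        return "remote"
    if airport.get("on_island") and population_within(airport, grid_cells, 10) >= 10_000:
        return "island"
    return None
```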
For each of the transport modes air, rail and road, a quantity structure is created. Airline data from Sabre [5], containing detailed information about worldwide flights, is used to create the quantity structure of large aircraft traffic. For SAT, indirect flight connections in particular, which from an origin-destination perspective could also be flown directly with a SAT vehicle, represent great potential. The assumption is that people would save a lot of time if an affordable direct connection were offered. Therefore, the direct air distances (great circle distances) between the origin airport and the destination airport of all European flight connections (with at least one stopover) are analyzed. In the next step, all connections with a direct air distance exceeding 2,222 km are filtered out to reduce complexity. This results in the identification of 25,396 indirect flight connections from 2019. Additionally, detailed information on passengers and ticket prices for the individual connections is exported from the Sabre data portal.
A different approach is used to create the quantity structure for road and rail traffic. Using the traffic origin/destination matrix from the EU-funded ETISplus project [6], containing detailed trip data at a resolution on the NUTS 3 level, a quantitative framework for the ground-based traffic between all European NUTS 3 regions is created. These regions represent the highest possible resolution of the European NUTS classification and are either counties or cities [7]. The data is structured as trips per year between every pair of NUTS 3 regions in Europe. A minimum number of 5,000 trips per year is set as a threshold, so that only connections with significant potential are considered. In total, 51,326 European NUTS 3 road traffic relations and 4,126 European NUTS 3 rail traffic relations are incorporated into the model's quantity structure.
The subsequent step is to define plausible cost and time assumptions for the model's modes of transport. For conventional air traffic, the average ticket prices of the respective flight connections are used. To account for access and egress times to and from the airport, 3 hours are added to the flight time. For road traffic, a cost of €0.30 per kilometer is assumed for Germany, which is adjusted accordingly for all other countries using the global Fuel Price Index [8]. The assumed kilometer costs for train traffic in the individual countries are taken from a European rail study [9]. In addition, country-specific assumptions about the average speed are used to determine the respective travel times for ground-based traffic.
Finally, an analysis of the income distribution in the individual European countries is carried out to determine the buying power of each European country. This is done by adding detailed income statistics for each country, using the World Inequality Database [10]. The usage of the database makes it possible to analyze the income distribution of the population in 1 % percentile steps for all European countries.
Applying the WTP function (1) [11] to the demand model makes it possible to calculate the number of people who would switch from existing transport modes to a SAT vehicle. The following baseline values are assumed for the demand model input parameters. Cost: for the base value, the above-described average cost of €0.30 for driving a kilometer with a car in Germany is used. The lower bound of the range is set to 150 km because, for any distance below that, the assumption is made that other modes of transport are more desirable. For the upper bound, the limit is reduced to 1,500 km from the initial 2,222 km because early runs of the demand model showed a negligibly small demand above 1,500 km; a resulting advantage is the decreased complexity of the model. The number of passengers is set to nine because a nine-passenger configuration allows for single-pilot operations according to the FAA [12]. Although the evaluated market is the European one, setting the seat number to nine retains the possibility of also operating the aircraft with lower crew cost in North America. Setting the cruise speed to 350 km/h ensures that the aircraft can be propeller-driven. Moreover, considering the ICAO Annex 14 aerodrome categorizations [13], the decision is made to meet Category 3 runway requirements. Category 3 airports must have a runway length of 1,200 m to 1,800 m. Hence, the baseline value is set to 1,200 m, ensuring all Category 3 airports can be serviced.
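The WTP function itself (Equation (1)) is not legible in this copy, so the sketch below uses a generic logistic form purely as a placeholder to show how per-connection switching demand could be evaluated; the coefficient values are illustrative assumptions, not values from the paper or from [11].

```python
# Placeholder switching model: NOT the paper's Equation (1).
import math

def switch_probability(cost_saving_eur, time_saving_h,
                       beta_cost=0.01, beta_time=0.5):
    """Illustrative logistic WTP: switching becomes more likely as the SAT
    option saves more money and/or time. Betas are assumed, not from [11]."""
    utility = beta_cost * cost_saving_eur + beta_time * time_saving_h
    return 1.0 / (1.0 + math.exp(-utility))

# Example: a connection with 5,000 annual trips where SAT saves 2 h
# but costs 40 EUR extra (a negative cost saving).
p_switch = switch_probability(cost_saving_eur=-40.0, time_saving_h=2.0)
annual_sat_demand = p_switch * 5_000
```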
Setting a baseline value makes it possible to illustrate the improvement of each iteration, while simultaneously providing a starting point for the TLAR derivation process. The target of the process is to determine the optimum input parameter values, making the optimization part of the TLAR derivation. The following steps are executed for each iteration: (1) running the demand model with the determined input parameters (for the first iteration, the baseline values are assumed); (2) identifying the most promising demand model input parameter for improving the annual demand, using a sensitivity analysis with normalized input and output (an exemplary sensitivity analysis can be found in Figure 1). Steps one through four are repeated until each parameter has been adjusted once. For requirements that are not derived from the demand model, literature and studies are used. The requirements and their reasoning can be found in the section Data and Results.
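The loop below sketches this iteration scheme. Since steps (3) and (4) are not fully legible in this copy, the adjustment rule (nudging the most sensitive parameter in the demand-increasing direction) is a placeholder, and `run_demand_model` stands in for the authors' model.

```python
# Sketch of the iterative, sensitivity-guided TLAR derivation.
def normalized_sensitivity(run_demand_model, params, key, step=0.05):
    """Relative demand change per relative parameter change (central difference)."""
    base = run_demand_model(params)
    up = dict(params, **{key: params[key] * (1 + step)})
    down = dict(params, **{key: params[key] * (1 - step)})
    return (run_demand_model(up) - run_demand_model(down)) / (2 * step * base)

def derive_tlar(run_demand_model, params):
    """Adjust each input parameter once, most demand-sensitive first."""
    adjusted = set()
    while len(adjusted) < len(params):
        sens = {k: normalized_sensitivity(run_demand_model, params, k)
                for k in params if k not in adjusted}  # steps (1)-(2)
        key = max(sens, key=lambda k: abs(sens[k]))
        # Placeholder for steps (3)-(4): move the chosen parameter in the
        # direction that increases demand, then repeat with the new baseline.
        params[key] *= 1.1 if sens[key] > 0 else 0.9
        adjusted.add(key)
    return params
```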
Data and Results
Following the methodical approach described above, the resulting TLAR are presented below. A separation is made between the requirements derived from the demand model and the requirements that are based on literature and studies. Furthermore, all TLAR derived in this paper can be found in Table 1. For a better understanding of the process of determining the demand model's input parameters, the first iteration is examined as an example. Figure 1 illustrates the normalized sensitivities for the baseline analysis, thereby showing which variable is best suited for adjustment in the first step. In the case of the exemplary first iteration, the seat-km-cost variable is selected because it offers an increase in demand of 18.6 % when lowering the seat-km-cost from €0.30 to €0.25. The increase of 18.6 % equals 99.07 million more annual passengers, resulting in 633.19 million total annual passengers.
Demand Model Based TLAR
The gains per iteration are shown in Figure 2, with Iteration 0 representing the baseline assumptions and Iteration 1 the cost adjustment. For Iteration 2, the field length is adjusted to 800 m, resulting in an increase of 67.78 million annual passengers to 700.87 million. Adjusting the take-off distance to 800 m takes the ICAO airport categorization into account, thereby enabling the aircraft to land at all ICAO Category 2 airports; under Category 2, the runway length of the airport varies between 800 m and 1,200 m. For Iterations 3 and 4, no adjustments are made. This is due to the sensitivity analysis for Iteration 3, which shows that an improvement could only be made by increasing the cruise speed. Due to the drag penalty an increase in cruise speed would cause, the choice is made to remain at a speed of 350 km/h. For the remaining input parameter, the number of passengers, the demand model shows no effect on the annual demand. Hence, the number of passengers is kept at the baseline value of nine.
With the input parameters defined, the demand model output is evaluated in detail to derive further requirements. The demand distribution can be seen in Figure 3. The highest SAT demand exists on routes between 200 km and 300 km, with an annual demand of 247 million passengers. On distances between 300 km and 400 km, the SAT demand is also significant, with 124 million passengers per year. However, on greater distances the number of passengers continuously declines, from 63 million passengers on distances between 400 km and 500 km to 5 million passengers between 800 km and 900 km. Thus, on such long distances, the number of people who travel with a personal vehicle or by train, and therefore the SAT shift potential, is minimal. Additionally, the time savings provided by a SAT vehicle compared to indirect flights with conventional aircraft decrease. Based on the illustrated trip distribution, the design mission distance as well as the required maximum range can be deduced. For the maximum range, the target is set to cover 95 % of the demand, which is achieved at 525 km. Contrary to the maximum range, the design mission range aims at optimizing the aircraft for a design point. As Figure 3 indicates, the maximum demand occurs between 200 km and 300 km. Thus, the design mission distance is set to 225 km.
Additional TLAR
With the demand model defining most mission-related requirements, the missing ones are the maximum cruise altitude and the maximum payload. Although the aircraft is to be certified under the CS-23 regulations, the assumption is made that if the cruise altitude exceeded 25,000 ft, the EASA would require a second life support system as defined in the CS-25 category [14]. Hence, the maximum cruise altitude is set to 25,000 ft. Using the seat number defined above, the payload is determined. This is done by multiplying the sum of the average passenger weight and the average luggage weight, which equals 106 kg/PAX [15], by the sum of the passenger number and two pilots. Regardless of the targeted one-pilot operation of the aircraft, the cockpit needs to seat two people for training purposes.
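As a worked example of this payload computation, using only the figures quoted above (106 kg/PAX, nine passengers and two pilot seats): 106 kg/PAX × (9 + 2) = 1,166 kg of maximum payload.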
Since the requirements aim at meeting a market that provides scheduled operations, further requirements can be derived. The turnaround time should not exceed 45 min. Furthermore, the aircraft is required to be designed for one-pilot operation to decrease crew cost and thereby lower the operating cost.
As explained in the introduction, the environmental impact of aviation is substantial. Hence, the next generation of SAT aircraft needs to have as little impact on the environment as possible. This is achieved by taking the environmental goals of Europe's Flightpath 2050 and applying them as requirements for this aircraft. Since the aircraft is targeting an entry into service in the year 2030, the goals are 20 years ahead of Europe's timeline. Thus, the requirements are a 75 % reduction in CO₂, a 90 % reduction in NOₓ and a 60 % reduction in perceived noise [16]. Additionally, the aircraft should taxi on the ground without producing emissions of any kind.
Discussion
The previously derived TLAR clearly show that an aircraft meeting those requirements would generate a substantial amount of demand. Evaluating the results of the demand model of Paparoth [17] further underlines this statement. Although the demand estimation of Paparoth focuses solely on Germany, a comparison of the two models can partially validate the results. Both models estimate that with a range above 400 km, over 85 % of the demand for SAT vehicles is covered. Furthermore, the TLAR presented in the previous section are of the same magnitude as those resulting from Paparoth's model. An additional comparison to the mobility analysis of Moore and Goodrich [18] presents the same picture for the trip distribution on the North American continent, underlining the decision to also consider FAA regulations.
When putting the demand-model-based TLAR into an engineering context, the significance of the results becomes evident. The reason for this is that the TLAR are right on the edge of what is assumed by Bill et al. [19] to be feasible for battery-electric flight. Hence, it could be argued that there is a business case to develop a SAT vehicle for the European market.
However, the European demand model is based on ideal-world assumptions, which distort the real demand. Those assumptions can be split into two categories. In the first category are assumptions that overestimate the demand. The model, for example, does not consider that airports could be restricted due to other air traffic. Moreover, for the calculation of the demand, the influence of the weather on operability is neglected. This leads to an overestimation of the demand because most small airports do not have the technical capability to allow low-visibility operations. Also, 100 % reliability is assumed, further overestimating the demand. In the second category, on the contrary, are assumptions that underestimate the demand. The following assumptions fit into this category. The initial filtering of the airports will eliminate airports that could offer profitable connections. Additionally, only people who would travel in any case are considered. This underestimates the demand because it neglects people who could not make the trip with the existing transport modes.
Considering the ideal-world assumptions, it can be argued that instead of focusing on the low-budget high-volume segment, the chances of success would increase when focusing on the higher price segment first. This makes it possible to learn more about the aircraft and the future technologies that are required to enable climate-friendly aviation. Furthermore, the high price segment will then lay the basis for creating an aircraft meeting the described TLAR.
Conclusion
The developed demand model uses data from existing traffic patterns of road, rail and air between NUTS 3 regions in Europe as a basis. By applying a mode choice based on a WTP function that considers the time gained when using the SAT vehicle, the model estimates the number of people who would use a SAT vehicle instead of other transport modes. Using the described European demand model, the TLAR are derived by an iterative adjustment of the demand model's input parameters. In addition to the iterative adjustment, regulatory aspects such as ICAO airport categories are considered. The resulting demand in Europe for a SAT vehicle with a capacity of nine passengers and a range that exceeds 400 km is above 700 million annual passengers. The derived TLAR show the potential of a SAT vehicle in Europe and at the same time provide specific requirements for future aircraft development.
"year": 2022,
"sha1": "1124ce295b1cbf33ace21062e8c9361490a2e8c1",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1226/1/012091",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1124ce295b1cbf33ace21062e8c9361490a2e8c1",
"s2fieldsofstudy": [
"Engineering",
"Business",
"Economics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Retrospective dosimetry using Egyptian halite (NaCl)
ABSTRACT Thermoluminescence (TL) sensitivity was studied for some Egyptian halite (NaCl) samples. Three natural rock salts collected from Fayoum Governorate, Qattara Depression, and Siwa Oasis and two commercial salts (Table salt and Analytical NaCl) were studied for potential application to retrospective dosimetry. The chemical compositions of the samples were analyzed using the EDX technique. The kinetic parameters were estimated using the peak shape method for general-order kinetics. The deconvolution of the glow curves was carried out using a peak-fit program. The chemical analyses reveal differences in the trace impurities according to the collection area of the samples. All samples have three peaks (P1, P2, and P3), but the peaks P2 and P3 overlap in the case of Table salt and the Fayoum Governorate sample. The dose–response curves are linear from 10 up to 100 Gy for Table salt, Analytical salt and Siwa halite; in contrast, the dose–response curves for Qattara Depression and Fayoum halite are linear from 0.8 up to 100 Gy. The TL signal fading of the samples ranged from 21% to 34% of the initial signal. The experimental results and the estimated kinetic parameters indicate the suitability of the five investigated salt samples for retrospective dosimetry.
Introduction
Unfortunately, even when all measures have been taken to secure nuclear or radiological facilities, accidents sometimes occur because of operational faults (e.g. Chernobyl), natural disasters (e.g. Fukushima), or even terrorism. In the case of a nuclear accident or radiological occurrence, it is possible that a large number of members of the public are significantly exposed to radiation. In this situation, attention is focused on the measurement and assessment of the radiation-absorbed dose, which relates to the deterministic effects of radiation on cells and tissues. Depending on the exposure and absorbed dose, the authorities can determine the appropriate action and activate the proper emergency plan.
In radioactive accident conditions, finding a method for measuring the radiation dose received by individuals is not an easy task, since members of the public exposed to radiation in an accident do not carry dose-measuring equipment such as sensitive films or radiation measuring devices. Therefore, attention has turned to finding a way to estimate the radiation dose by using materials gathered from the scene or from the belongings of the affected population, to visualize the level of radiation exposure at the scene.
Many physical dosimetry techniques, such as electron paramagnetic resonance (EPR), thermoluminescence (TL), and optically stimulated luminescence (OSL), have been applied to determine radiation dose following radiological events (Bailiff, Sholom, & McKeever, 2016). The TL technique is the focus of this work.
Recently, many authors have turned toward studying and testing natural and industrial materials as retrospective dosimetry materials using the TL technique. TL signals measured from natural samples can deliver information about the radiation-absorbed dose, and such materials have been used as dosimeters for decades (Gomaa & Eid, 1982; McKeever, Chen, & Halliburton, 1985). The TL technique thus allows the absorbed radiation dose to be evaluated by measuring TL signals from materials that may be found all around the accident site. As a retrospective or accident dosimeter, a candidate TL material must possess several characteristics, such as high-temperature glow curves, stable peaks, and high enough sensitivity.
This study aims to make reference TL measurements of natural NaCl minerals (halite) extracted from the Egyptian western desert, where Egypt's nuclear power production project will start in the future. TL sensitivity was studied for five NaCl samples: three natural rock salts and two commercial salts (Table salt and Analytical NaCl). This study tries to demonstrate a linear dose-response function for the investigated halite samples, which relates the TL signals from radiological events to the corresponding dose. Further investigations will be carried out in the future, once the power plant project is under way.
Material and methods
Five salt samples, as shown in Figure 1, were collected: three natural rock salts from Fayoum Governorate, Qattara Depression, and Siwa Oasis (three regions known for their salt mining in Egypt), one common Table salt from the market (sold in Egypt under the commercial name Bono) and one Analytical NaCl. They were then ground in a mill and sieved to a 106-210 μm grain size. Samples were kept under laboratory conditions without any special treatment before the investigation. All samples were chemically characterized by Energy Dispersive Spectroscopy (EDX). The samples were exposed to ⁶⁰Co gamma rays using a Gamma cell (Medical sterilizer Cm20, June 2001) irradiator.
TL measurements were performed using a Harshaw model 3500 TL reader with a linear heating rate of 5°C/s from 25°C up to 400°C. Five mg of the salt sample was uniformly distributed on the sample tray to ensure good thermal contact with the heater during TL measurements. All measurements were taken 24 h after irradiation to standardize the decay period and decrease the reading error associated with the low-temperature peak.
To study the changes in TL intensity with time (fading), all investigated salts were irradiated with 100 Gy of gamma rays from a ⁶⁰Co cell. They were stored in the dark under laboratory conditions and the TL intensity was measured at different intervals over 30 days.
Sample characterization
The chemical compositions of Table salt, Fayoum Governorate, Qattara Depression, Analytical NaCl, and Siwa halite were determined by energy dispersive X-ray analysis (EDX). The results show the presence of minor traces of elements such as O, Cu, Ca, Zn, Re, and Si at differing concentrations besides the main elements Na and Cl in the investigated samples. In Siwa halite, there is only a minor trace of Ca at 0.13% besides the main elements Na and Cl. The difference in the concentrations of the chemical constituents of each type of salt may be due to the geological region from which the sample was collected. A comparison of the chemical compositions of all investigated samples determined by EDX is listed in Table 1.
Characteristic glow curve
The characteristic glow curves for Table salt, Fayoum Governorate, Qattara Depression, Analytical NaCl, and Siwa halite, respectively, after exposure to different gamma doses from ⁶⁰Co (25, 50, 100, 500 Gy), are shown in Figure 2(a) through 2(e). From these figures, we can observe that the number and positions of the glow peaks separate the samples into two groups.
The first group, comprising Qattara Depression, Analytical NaCl and Siwa halite, has three peaks (P1, P2 and P3) at average values of 95°C-181°C-260°C, 98°C-189°C-270°C, and 97°C-182°C-270°C, respectively; for these three samples, P3 is the main peak and may be considered the dosimetric peak. The presence of a dosimetric peak at ~266°C was reported for an Alpine salt with iodine (Ekendahl & Judas, 2010), two types of rock salt in Romania (Timar-Gabor & Trandafir, 2013), and some domestic salts from Australasia, Europe, Asia, and America (Hunter). The second group comprises Table salt and Fayoum Governorate halite, in which peaks P2 and P3 overlap. Also, from the figure it can be observed that the maximum peak positions in all salt types do not change after exposure to different gamma doses; however, the TL intensity increases with increasing radiation dose. The direct relation between TL intensity and radiation dose may be explained by the fact that as the radiation dose increases, the number of filled traps increases, so that the rate of recombination during thermal stimulation increases (Manam & Sharma, 2003). The natural glow curves for all investigated samples before irradiation are shown in Figure 3.
Comparison of salt samples
From the present work and also from the literature (references provided in the sections above), we can see that different salt types have different glow curves. Figure 4 shows the variation of TL intensity with the different types of salt. From this figure, we can observe that there are significant variations in TL intensity between the various types.
The TL intensity of Table salt is the highest in comparison with those of Fayoum Governorate, Qattara Depression halite, Analytical salt, and Siwa halite.
The significant similarities in the characteristics and TL intensity of the Table salt and Fayoum Governorate samples may be due to the common origin of both types. The differences in TL intensity and peak position among the halite types may be due to the presence of various impurities, which depend on the site from which the sample was collected. Moreover, sample preparation of halite usually involves mechanical operations such as grinding for powdering; these operations may introduce defects and changes in the morphology of the halite, consequently inducing TL signals that may affect the evaluation of the radiation-induced dosimetric signal. The possibility of free-radical production by mechanical operations in bone and tooth enamel was reported long ago (Marino & Becker, 1968; Polyakov, Haskell, Kenner, Huett, & Hayes, 1995). In contrast, other researchers such as Yüce and Engin (2017) have suggested that grinding and milling treatments of some natural minerals cause significant changes in the TL spectra of the minerals by introducing intrinsic defects.
Dose-response curve
Dose response is one of the most critical TL characteristics to study, describing the relationship between the irradiated dose and the resulting TL intensity. Fayoum and Qattara Depression halite have a linear dose response between 0.8 and 100 Gy, after which the response becomes sub-linear. However, the dose-response curves for Analytical NaCl and Siwa halite are sub-linear from 0.8 to 10 Gy, linear from 10 to 100 Gy, and sub-linear again from 100 to 500 Gy.
The first sub-linearity in the range 0.8-10 Gy may be due to non-radiative traps being filled preferentially over radiative traps, whereas the sub-linearity appearing above a 100 Gy dose may be due to the filling of most of the available radiative traps in the sample lattice (McKeever, 1988).
The results of this work are in agreement with the dose response measured by Yüce and Engin (2017) for common household salt in Turkey, who reported a linear dose response between 0.4 and 55 Gy. Also, Polymeris et al. (2011) reported a linear response from 0.25 Gy to 100 Gy for the dosimetric glow peak of Kalas salt and the two dosimetric peaks of Turkish salt, and Spooner et al. (2011) reported TL growth curves for the integrated TL at 180-280°C over the dose range 0.14-35 Gy.
Fading
It is essential to know the stabilities of the traps connected with the dosimetric TL peaks because these reflect the storage capacities of the traps. The five investigated samples were irradiated with a test dose of 100 Gy and stored in the dark at room temperature.
The samples were read out at intervals from exposure up to 30 days. Figure 6 shows the relative TL responses of Table salt, Fayoum, Qattara Depression, Analytical NaCl, and Siwa halite, respectively, as a function of storage time. Figure 6 indicates that the fading for all five investigated samples followed an exponential decay of the form y = y₀ + A·exp(−R₀t); the equation constants y₀, A, and R₀ are shown in Table 2 for the different samples.
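Because the fitted equation is garbled in this copy, the decay form above is inferred from the reported constants y₀, A, and R₀ and should be read as a plausible assumption rather than the authors' exact expression; the readout times and responses in the sketch below are likewise illustrative.

```python
# Fitting an assumed exponential fading model y(t) = y0 + A*exp(-R0*t).
import numpy as np
from scipy.optimize import curve_fit

def fading(t, y0, A, R0):
    """Assumed fading model: relative TL signal versus storage time t (days)."""
    return y0 + A * np.exp(-R0 * t)

t = np.array([0, 1, 3, 7, 15, 30], dtype=float)     # illustrative readout days
y = np.array([1.00, 0.95, 0.90, 0.84, 0.78, 0.69])  # illustrative responses

popt, pcov = curve_fit(fading, t, y, p0=(0.6, 0.4, 0.1))
y0_fit, A_fit, R0_fit = popt  # compare with the constants reported in Table 2
```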
The figures show a 20% loss of TL signal after the first 15 days in the case of Table salt, Fayoum and Siwa halite, whereas Qattara Depression lost about 20% after 1 week; on the other hand, Analytical NaCl shows an almost constant response after 2 weeks, with the loss occurring gradually over the 30 days. We can also observe that the various halite types have comparable fading rates, since the initial peak intensity decreased by 31%, 29%, 34%, 21%, and 28% for Table salt, Fayoum, Qattara Depression, Analytical NaCl, and Siwa halite, respectively, 30 days after irradiation.
Figure 5. Dose-response curves for Table salt, Fayoum, Qattara Depression, Analytical NaCl, and Siwa halite, respectively, after exposure to different gamma-ray doses.
The fading behavior of our samples was broadly compatible with that of previous studies, such as Druzhyna et al. (2016). Several reviews of the effect of storage time on the TL response of household salt concluded that the loss of TL response may be ~40% after 2 weeks, with the signal remaining stable thereafter (Elashmawy, 2018). Also, Timar-Gabor and Trandafir (2013) reported a rapid signal loss during the first 7 days for some Romanian commercial salts, after which the signal remained constant at ~65% of its initial value over the period investigated (30 weeks).
Kinetic parameters
Kinetic parameters (trap depth, frequency factor, order of kinetics, etc.) play an essential role in a deep understanding of TL phenomena. Using the peak-fit program, the glow curves of the five samples under study were deconvoluted as in Figure 7. From the figure, we can see that each glow curve could be deconvoluted into three glow peaks. The kinetic parameters were calculated using Chen's empirical formulae for the general-order case (Chen, 1998; McKeever, 1988; Singh, Kaur, & Singh, 2012). The frequency factor (s), the mean lifetime (τc) and the escape probability (P) of the corresponding peak were calculated using the Urbach (1930) equation. The estimated values of the activation energies E (eV), frequency factors, mean lifetimes τc (y), and escape probabilities P (y⁻¹) for the five investigated salt samples are tabulated in Table 3. From Table 3, we note that the Table salt and Fayoum Governorate salt samples have low values of activation energy (E) for P3 (the dosimetric peak), which explains the strong fading of their TL signal. In the case of the Analytical NaCl and Siwa Oasis samples, the kinetic parameters are approximately similar. The values of the activation energy (E) of P3 for the Analytical NaCl and Siwa samples are 1.715 ± 0.016 and 1.756 ± 0.042 eV, respectively, which supports the stability of their TL signal; the same holds for the Qattara Depression sample, which has an activation energy (E) for P3 of 1.372 ± 0.014 eV. P1 in the Qattara Depression sample has the minimum activation energy of 0.570 ± 0.026 eV, i.e. it has the highest TL signal fading of the samples. This is observed in Figure 2, where the first peak of the Qattara Depression sample is the shortest among the five studied salt samples.
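For reference, the sketch below implements the standard peak shape (Chen) formulae for general-order kinetics referred to above; the input temperatures, heating rate and kinetic order are illustrative placeholders, not the paper's measured values, and the Urbach-style lifetime step is included for completeness.

```python
# Chen's peak shape method for general-order kinetics (illustrative inputs).
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def chen_peak_shape(T1, Tm, T2, beta=5.0, b=1.5):
    """T1, T2: half-maximum temperatures (K); Tm: peak temperature (K);
    beta: heating rate (K/s); b: kinetic order. Returns E per peak width,
    mean E, symmetry factor mu_g, and frequency factor s."""
    tau, delta, omega = Tm - T1, T2 - Tm, T2 - T1
    mu_g = delta / omega  # geometric (symmetry) factor, ~0.42 for first order
    widths = {"tau": tau, "delta": delta, "omega": omega}
    coeffs = {  # (c_alpha, b_alpha) from Chen's empirical formulae
        "tau":   (1.51 + 3.0 * (mu_g - 0.42), 1.58 + 4.2 * (mu_g - 0.42)),
        "delta": (0.976 + 7.3 * (mu_g - 0.42), 0.0),
        "omega": (2.52 + 10.2 * (mu_g - 0.42), 1.0),
    }
    E = {a: c * K_B * Tm ** 2 / widths[a] - bb * 2 * K_B * Tm
         for a, (c, bb) in coeffs.items()}
    E_mean = sum(E.values()) / 3
    # Frequency factor from the general-order condition at the peak maximum.
    s = (beta * E_mean / (K_B * Tm ** 2)) * math.exp(E_mean / (K_B * Tm)) \
        / (1 + (b - 1) * 2 * K_B * Tm / E_mean)
    return E, E_mean, mu_g, s

def trap_lifetime(E, s, T=293.0):
    """Mean lifetime (s) and escape probability (1/s) at storage temperature T."""
    tau_c = math.exp(E / (K_B * T)) / s
    return tau_c, 1.0 / tau_c
```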
Conclusion
Three natural rock salts from Fayoum Governorate, Qattara Depression, and Siwa Oasis, and two powder salts, commercial Table salt and Analytical NaCl, were investigated using the thermoluminescence technique for potential use in retrospective dosimetry. The Qattara Depression, Analytical NaCl, and Siwa Oasis glow curves have three peaks, at 95°C-181°C-260°C, 98°C-189°C-270°C and 97°C-182°C-270°C, respectively. Table salt and Fayoum halite have similarly shaped glow curves. These two samples have three peaks: P1 is separate, but P2 and P3 overlap. The peak positions of Table salt and Fayoum halite for P1 are at 93°C and 96°C, and for the combined overlapped P2 and P3 at 217°C and 227°C, respectively. This similarity in their TL characteristics may be due to the Table salt being produced in the same area as the Fayoum halite. All the investigated samples have a linear region in the dose-response curve, making them suitable for use in retrospective dosimetry. Even though the TL signal fading was high, the residual TL signal is sufficient for high-dose dosimetry. The estimated kinetic parameters supported several of the experimental observations. In the future, we will study other types of halite in the Egyptian western desert, where Egypt's nuclear power production project will start soon, to build a reference library to be used for retrospective dosimetry in case of radiological accidents.
Disclosure statement
No potential conflict of interest was reported by the authors.
"year": 2019,
"sha1": "c0db42a337e50c7dc8590a54a734e68ca2d3edd9",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/16878507.2019.1662173?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "340ea5bf99c2c9ee55d2641ca5c36a3634ab2cbd",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
The effect of question order on outcomes in the ORBITAL core outcome set for alcohol brief interventions among online help-seekers (QOBCOS): Findings from a randomised factorial trial
Objective: A core outcome set (COS) has been developed in alcohol brief intervention (ABI) research through international consensus. This study aimed to estimate order effects among questions in the COS.
Methods: Individuals aged 18 or older who searched online for alcohol-related help were invited to complete the COS. The order of questions was randomised following a factorial design. Primary outcomes were order effects among the COS items and patterns of attrition.
Results: Between 21/10/2020 and 26/11/2020, we randomised 7334 participants, of whom 5256 responded to at least one question and were available for analyses. Current non-drinkers were excluded. Higher self-reported average consumption and higher odds of harmful and hazardous drinking were found among those who first answered questions on recent consumption and on the impact of alcohol use. Lower self-reported recent consumption was found among those first asked about average consumption. Quality of life (QoL) was reported lower when questions on the impact of alcohol use were asked first, which in turn was lower than when questions on average consumption or QoL were asked first. Attrition was lowest when average consumption was asked first, and highest when QoL or impact of alcohol use was asked first. Median completion time for the COS was 4.3 min.
Conclusions: Question order affects outcomes and attrition. If the aim is to minimize attrition, consumption measures should be asked before QoL and impact of alcohol use; however, this order impacts self-reported alcohol consumption, and so researchers should be guided by study priorities. At a minimum, all participants should be asked the same questions in the same order.
Trial registration: The trial was prospectively registered (ISRCTN17954645).
Introduction
The World Health Organisation (WHO) has defined alcohol brief interventions (ABIs) as "practices that aim to identify a real or potential alcohol problem and motivate an individual to do something about it". 1 ABIs aim to help individuals change their behaviour, assess and provide feedback on alcohol use, and motivate and facilitate behaviour change. 2,3 Over the past 60 years, both face-to-face 1,4,5 and digital [6][7][8][9] ABIs have been researched and implemented in a wide range of populations, including primary care patients, 5 emergency health care populations, 10,11 college students, 12,13 and veterans. 14 Comparisons across trials of ABIs, and evidence synthesis of outcomes, are limited because of the high variety of outcome measures used, despite interventions being similar. To overcome this issue, the ORBITAL (Outcome Reporting in Brief Intervention Trials: Alcohol) project was established with the overarching goal of determining an international, consensus-derived, core outcome set (COS). 15 The aim was to prioritise the key outcomes to be measured in all online, digital, and otherwise delivered ABIs designed for adult drinkers who are at risk or currently experiencing harm but who are not currently in treatment.
The COS used the COMET (Core Outcome Measures in Effectiveness Trials) methodology, 16 including a systematic review that quantified the diversity in outcomes. In 405 trials of ABIs, 2641 different outcomes were found, measured in 1560 different ways. 17 Following two e-Delphi rounds, a consensus meeting, and psychometric evaluation, 18 10 outcomes formed the consensus-derived COS. 19 The outcomes are:
1. Frequency of drinking
2. Typical number of drinks consumed on a drinking day
3. Frequency of heavy episodic drinking
4. Combined consumption measure
5. Hazardous and harmful drinking
6. Standard drinks consumed in the past week
7. Quality of life
8. Alcohol-related consequences
9. Alcohol-related injury
10. Use of emergency health care services
A consensus was also formed within the ORBITAL project on measures for the COS outcomes. These are listed in Appendix A 19 and described in brief here. The WHO's Alcohol Use Disorders Identification Test - Consumption (AUDIT-C) tool 20 is used to measure the first five outcomes of the COS. There are three questions, each scored from 0 to 4, with the total score ranging from 0 to 12 to form the combined consumption measure (a minimal scoring sketch is given at the end of this section). A cut-off point of 5+ was used to indicate hazardous or harmful drinking. The sixth outcome is measured by asking how many standard drinks were consumed on each day of the past week, presented as seven questions, one for each day of the week (reported in grams to allow for inter-country comparison). The seventh outcome is measured using the PROMIS (Patient-Reported Outcomes Measurement Information System) global health 1.2 items, a 10-item questionnaire with higher scores indicating higher quality of life. 21 The eighth outcome is measured using the 15-item Short Index of Problems (SIP) questionnaire. 22,23 Each item is scored from 0 to 3 with a 3-month reference period, with total scores ranging from 0 to 45. An additional question based on the SIP regarding injuries inflicted while drinking or being intoxicated (the ninth outcome) is also scored from 0 to 3. The tenth outcome is measured using a single question about the number of visits to emergency health care services, adapted from EconForm90. 24 This paper reports the Question Order Bias Core Outcome Set (QOBCOS) study, which aimed to assess whether there are question order effects among the outcomes of the COS. This type of effect, which can be viewed as a source of bias in measurement, is a well-known phenomenon in marketing and political science. 25,26 More recently, it has been suggested that question order effects may exist when measuring alcohol consumption. In exploratory analyses of data collected in a trial, individuals asked to first report weekly alcohol consumption were less likely to be screened as risky drinkers using an average consumption measure, in comparison to those who were first screened using an average consumption measure and then asked about weekly alcohol consumption. 27 However, in a trial aiming to estimate social desirability bias by randomising the order of questions about alcohol dependence and problems and reports of alcohol consumption, no evidence was found that earlier questions biased subsequent reports of alcohol consumption. 28 The QOBCOS study was conducted to investigate this phenomenon and how it may be a potential source of bias. In addition, we aimed to study patterns of abandonment of the questionnaire to inform how to reduce attrition.
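The AUDIT-C scoring convention described above (three items scored 0-4, a 0-12 total, and a 5+ cut-off for hazardous or harmful drinking) can be made concrete with a short sketch; the function name is ours, not from the instrument.

```python
# Minimal AUDIT-C scoring sketch following the convention described above.
def audit_c(item1: int, item2: int, item3: int) -> tuple[int, bool]:
    """Return (total score, hazardous-or-harmful flag) for one respondent."""
    for item in (item1, item2, item3):
        if not 0 <= item <= 4:
            raise ValueError("each AUDIT-C item is scored 0-4")
    total = item1 + item2 + item3
    return total, total >= 5

score, risky = audit_c(3, 1, 2)  # -> (6, True)
```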
Methods
A double-blind randomised factorial design trial was used to investigate question order bias among the outcomes of the COS for ABIs. The trial was prospectively registered (ISRCTN Registry ISRCTN17954645) and received ethical approval on 2020-07-01 from the Swedish Ethical Review Authority (Dnr 2020-01799). A trial protocol was also pre-registered. 29 This report follows the guidelines set out in the CONSORT statement. 30
Settings and participants
Individuals searching online for information in the English language on how to drink less or quit drinking were recruited using Google Ads. Examples of targeted search queries were "How do I drink less," "I drink too much," and "Support for drinkers". The adverts were framed as an invitation to take part in a study to improve alcohol intervention research. Individuals who clicked on the advert were asked to read the study information, confirm that they were at least 18 years old, and consent to take part. No demographic data on participants were collected, to minimise participant burden. This design meant that there was a very low threshold for participation, which, coupled with no incentive for taking part in the study, increased the risk of high attrition rates.
There were no explicit exclusion criteria; however, as decided a priori, 29 analyses excluded those who reported having not consumed any alcohol during the past three months (i.e., answering Never to the first AUDIT-C question and having consumed zero drinks in the past week).
Interventions
The 10 COS outcomes were divided into four clusters: (1) average consumption: frequency of drinking, typical number of drinks consumed on a drinking day, frequency of heavy episodic drinking, combined summary consumption measure, hazardous and harmful drinking; (2) recent consumption: standard drinks consumed in the past week; (3) quality of life: health-related quality of life; and (4) impact of alcohol use: alcohol-related consequences, alcohol-related injury, use of emergency health care services. The order of the four clusters was permuted to create 24 (=4!) order combinations, i.e., 24 conditions to which participants were randomised. Table 1 shows the conditions and different permutations to which participants were allocated.
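The 24 conditions can be enumerated directly, as sketched below; the cluster labels are shorthand for the four clusters defined above, and the mapping of condition numbers to Table 1 is assumed.

```python
# Enumerating the 24 (= 4!) cluster orderings used as trial conditions.
from itertools import permutations

CLUSTERS = ("average_consumption", "recent_consumption",
            "quality_of_life", "impact_of_alcohol_use")

CONDITIONS = list(permutations(CLUSTERS))
assert len(CONDITIONS) == 24

def question_order(condition_id: int) -> tuple[str, ...]:
    """Cluster order for a condition numbered 1..24 (Table 1 mapping assumed)."""
    return CONDITIONS[condition_id - 1]
```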
Questions were presented to participants in the order corresponding to their allocation. All questions were shown on the same page, with the next question revealed after responding to the current question. To make the trial similar to regular surveys, participants were allowed to go back and change their responses to previous questions. Once all questions had been answered, participants were thanked and recommended to read more about alcohol and health on a selection of websites (listed in Appendix B). Contact information for the primary investigator was provided to participants as part of informed consent materials, however, there was no direct interaction between the study team and participants.
Outcomes
The primary outcomes were: (i) the 10 outcomes of the COS measured using the recommended questionnaires (listed in Appendix A 19 ), and (ii) the proportion of participants abandoning the questionnaire. These outcomes facilitated the primary analysis of order effects. Since the COS is new, the abandonment rate can guide future trials adopting the COS.
There were two secondary outcomes: (i) time spent on the questionnaire among completers and abandoners to estimate the anticipated burden of completing the COS, and (ii) the proportion of participants visiting the links provided at the end of the questionnaire to show if responding to the COS satisfied participants' intentions to seek help online, and if
Sample size
The trial used a Bayesian group sequential design [31][32][33]; thus, a set of target criteria was evaluated continuously to decide when recruitment would end. The primary analyses were repeated periodically as data were collected, and the posterior distributions of coefficients representing cluster order effects were assessed for evidence of effect or futility. Let β_k,i represent the coefficients for each order effect (i = 1, 2, 3) in each model (k = 1…10) and D represent the data available at the interim analyses. Then, the target criteria were:
• Effect: P(β_k,i > 0 | D) > 97.5% or P(β_k,i < 0 | D) > 97.5% (i.e., if the question order effect was greater or less than the null with a probability greater than 97.5%)
• Futility (linear regression): P(−0.1 < β_k,i < 0.1 | D) > 95% (i.e., if the question order effect is close to the null with a probability greater than 95%)
• Futility (negative binomial and logistic regression): P(log(1/1.2) < β_k,i < log(1.2) | D) > 95% (i.e., if the question order effect is close to the null with a probability greater than 95%)
For the effect criterion, a sceptical normal prior was used for regression coefficients (mean = 0, SD = 1.0), and a wider prior was used for the futility criteria (mean = 0, SD = 2.0). As is the case in Bayesian adaptive designs, these criteria were used as guides to aid the decision of when to stop recruitment, rather than as strict a priori defined rules. [31][32][33] By virtue of sceptical priors, estimates are pulled towards the null when data is scarce, protecting against spurious and potentially erroneous findings. Due to the nature of Bayesian inference, there is no need to adjust analyses for multiple looks at the data. 34
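Given posterior draws for one coefficient β_k,i from the fitted model, the effect and futility criteria above can be evaluated as sketched here; the draws array is a placeholder for MCMC output from the actual analysis.

```python
# Evaluating the effect/futility stopping criteria from posterior draws.
import numpy as np

def evaluate_criteria(draws: np.ndarray, linear: bool = True) -> dict:
    """draws: posterior samples of one beta_{k,i}; `linear` selects the
    futility band (plus/minus 0.1 for linear models, log(1.2) otherwise)."""
    p_pos, p_neg = np.mean(draws > 0), np.mean(draws < 0)
    if linear:
        p_futile = np.mean((draws > -0.1) & (draws < 0.1))
    else:  # log-scale coefficients (negative binomial / logistic models)
        p_futile = np.mean((draws > np.log(1 / 1.2)) & (draws < np.log(1.2)))
    return {"effect": max(p_pos, p_neg) > 0.975, "futility": p_futile > 0.95}
```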
Randomisation
Block randomisation was used to achieve equal allocation among arms (random block sizes of 24 and 48 were used to ensure that the sequence could not be predicted). The randomisation sequence and allocation were fully automated and computerised, leaving researchers blinded throughout the study period. Participants were aware they were taking part in a research study; however, the true nature of the study was not revealed to them, since this would interfere with the effects being studied. Therefore, participants were also blinded to their allocation.
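A blocked allocation sequence of this kind can be generated as below; the implementation details (a seeded pseudo-random generator, repeating each of the 24 conditions once or twice per block) are assumptions consistent with, but not quoted from, the paper.

```python
# Sketch of block randomisation over 24 conditions with block sizes 24 or 48.
import random

def allocation_sequence(n_participants: int, seed: int = 1) -> list[int]:
    rng = random.Random(seed)
    sequence: list[int] = []
    while len(sequence) < n_participants:
        repeats = rng.choice([1, 2])          # block of 24 or 48 allocations
        block = list(range(1, 25)) * repeats  # each condition equally often
        rng.shuffle(block)                    # order unpredictable within block
        sequence.extend(block)
    return sequence[:n_participants]
```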
Since no identifiers were collected for individuals, we used web-browser cookies and HTML5 storage to store allocation information on the participants' web-browsers. Participants who had not completed the questionnaire and returned to the trial website were presented with the cluster order according to their assignment. Participants who had completed the questionnaire and returned to the trial website were thanked for their participation, but not offered an opportunity to answer the questions again.
Analysis
All analyses were conducted following intention-to-treat principles, with all participants analysed in the groups to which they were randomised. As pre-specified, current non-drinkers, identified by responding Never to the first AUDIT-C item and having not consumed any alcohol in the past week, were excluded from analyses. Since the causal mechanisms leading to missing data in this study were unknown, and it was anticipated that attrition would be high, complete case analyses were planned as primary, with sensitivity analyses conducted on imputed data (multiple imputation with chained equations). Imputation was done using responses to all questions in the questionnaire. Imputed analyses included participants who responded to at least one question, thus excluding participants for whom no data at all were available. Non-drinkers were included to improve the multiple imputation but were subsequently removed from imputed analyses, as were individuals with imputed values suggesting that they were current non-drinkers.
Estimates of model parameters were interpreted by inspecting marginal posterior distributions using Bayesian inference 35 (see Sample Size for specifications of priors). The Bayesian analysis treats evidence as a continuous measure, where it is the relative compatibility between different parameter values and the data that is studied. Thus, there is no binary decision as to whether there is or is not evidence for a studied phenomenon. We complemented this analysis with null hypothesis testing, which treats evidence as dichotomous and estimates the probability of the data given that the parameter value is fixed at the null. To reject the assumption that the parameter value was null, we used the conventional alpha level of 0.05. These two approaches each tell their own side of the story, one where the data are fixed and the parameters are not, and vice versa. Both approaches were used for our scientific inference; thus evidence is considered both continuous and dichotomous in this analysis.
Primary analyses. The primary analysis of question order effects was conducted using regression models in which each outcome in each cluster was regressed against dummy variables representing whether each of the other clusters was asked before or after the outcome. For instance, standard drinks consumed in the past week (Cluster 2) was regressed against three dummy variables, representing Cluster 1, Cluster 3, and Cluster 4, respectively. The dummy variables took value 0 if the cluster was asked after Cluster 2 and value 1 if the cluster was asked before Cluster 2. For each outcome, one regression model was estimated, yielding a total of 10 models: negative binomial regression for counts (past week's consumption and number of visits to emergency health care services), logistic regression for hazardous or harmful drinking (using AUDIT-C scores of 5+ as the cut-off), and normal regression for scores (all other outcomes). We investigated 2- and 3-way interactions among cluster dummy variables to explore whether the order of a combination of clusters affects outcomes.
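The dummy-variable setup can be sketched as follows; the column names and the single negative binomial example are hypothetical illustrations of the modelling approach described above, with `df_raw` standing in for the trial data (one row per participant, with each cluster's presentation position stored in `order_c1`..`order_c4`).

```python
# Sketch of the order-effect regression for one outcome (Cluster 2 shown).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def order_dummies(df: pd.DataFrame, outcome_cluster: int) -> pd.DataFrame:
    """Add before_cX = 1 if cluster X was asked before the outcome's cluster."""
    out = df.copy()
    for c in range(1, 5):
        if c != outcome_cluster:
            out[f"before_c{c}"] = (
                out[f"order_c{c}"] < out[f"order_c{outcome_cluster}"]
            ).astype(int)
    return out

df = order_dummies(df_raw, outcome_cluster=2)  # past week's consumption
model = smf.glm("grams_past_week ~ before_c1 + before_c3 + before_c4",
                data=df, family=sm.families.NegativeBinomial()).fit()
print(model.summary())
```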
The proportion of participants abandoning the questionnaire was analysed in two ways. First, to identify cluster orders that were more likely to result in abandoning the questionnaire, logistic regression was used to model abandonment versus completion with the allocated arm as a covariate. Second, to identify clusters more likely to result in abandonment, multinomial regression was used to model the abandoned cluster (i.e., the cluster being presented when participants abandoned the survey). To account for the different number of questions within each cluster, the model of the abandoned cluster was adjusted for the number of questions responded to. Both models were fitted using standard normal priors.
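A hedged sketch of the two abandonment models (column names invented; maximum likelihood is used here in place of the standard normal priors of the trial):

```python
# Sketch of the abandonment analyses; all names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.discrete.discrete_model import MNLogit

df = pd.read_csv("trial_responses.csv")  # hypothetical file

# (1) Abandonment vs completion, with the allocated arm (one of the 24
#     cluster orders) as covariate.
logit_fit = smf.logit("abandoned ~ C(arm)", data=df).fit()

# (2) Which cluster was on screen at abandonment (coded 1-4), adjusted for
#     the number of questions already responded to.
ab = df[df["abandoned"] == 1]
mnl_fit = MNLogit.from_formula("abandoned_cluster ~ n_questions", data=ab).fit()
print(logit_fit.summary())
print(mnl_fit.summary())
```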
Secondary analyses. Time spent on the questionnaire was analysed in two ways. First, using normal regression with allocation as a covariate among both completers and abandoners. Second, using normal regression with the COS outcomes as covariates (completers only). Both analyses were conducted under standard normal priors, and the second analysis was also conducted using shrinkage priors.
Results
A total of 7334 participants were randomised between 21 October and 26 November 2020. At this time, the a priori defined target criteria (see Sample Size) were judged sufficiently fulfilled, and a decision was made to stop recruitment. See Appendix C for the final evaluation of the target criteria. Among randomised participants, 2078 did not respond to any questions at all. There were 475 participants who were current non-drinkers, leaving 4781 participants with at least partial responses who could be included in analyses of outcome measures where data were available. Imputation analyses were performed among the 5256 who had responded to at least one question, with non-drinkers removed before analysis.
Descriptive data of the study population are presented in Table 2, using available data for each outcome, excluding current non-drinkers. Approximately 61% of those included were classified as hazardous or harmful drinkers using average consumption measures. Past week's drinking was considerable, with mean consumption at 322 grams of alcohol (SD=298). The impact of alcohol use was reflected by a mean score on the Short Index of Problems (SIP) at 13.7 (SD = 11.5). There was some recent use of emergency healthcare reported, with a low proportion of injury.
Primary analyses
AUDIT-C items, total score, and hazardous and harmful drinking – cluster 1. Estimates of order effects on outcomes within Cluster 1 (average consumption) are shown in Table 3.
There was evidence that the total AUDIT-C score was higher among those first asked about their past week alcohol consumption (beta = 0.26; 95% CI = −0.01; 0.52; probability of effect 97.0%; P-value = 0.059), largely driven by differences in responses to the first and third AUDIT-C items. The odds of being classified as a hazardous or harmful drinker were also higher among those first asked about past week consumption, albeit with weaker evidence (OR = 1.13, 95% CI = 0.97; 1.31, probability of effect 93.5%; P-value = 0.13).
Conversely, there was evidence that first being asked the PROMIS Global 10 questions (Cluster 3) resulted in lower total AUDIT-C scores (beta = −0.31; 95% CI = −0.58; −0.04; probability of effect = 98.8%; P-value = 0.020), with evidence that responses to all three AUDIT-C items were affected in the same direction. There was, however, no marked difference in the odds of hazardous and harmful drinking from responding to PROMIS Global 10 first. Finally, AUDIT-C scores were higher among those who had first responded to the SIP, injury, and emergency questions (Cluster 4) (beta = 0.69; 95% CI = 0.42; 0.96; probability of effect > 99.9%; P-value < 0.001). All three AUDIT-C items were higher among those first responding to the questions in Cluster 4. The odds of hazardous or harmful drinking were also higher amongst those who were first asked the questions in Cluster 4 (OR = 1.33; 95% CI = 1.14; 1.54; probability of effect > 99.9%; P-value < 0.001). Notably, the estimated order effects of Cluster 4 on Cluster 1 were markedly attenuated in the imputed analyses; these may thus be taken as a more conservative estimate of the order effects. There was no evidence of any marked interaction effects of the order of Clusters 2, 3, and 4 on any of the outcomes in the first cluster.

[Table 3 notes: For AUDIT-C 1, 2, 3 and Total, the coefficient represents the difference in outcome scores if the cluster was asked before (vs after) the outcome; for hazardous and harmful drinking, the coefficient represents the odds ratio when asking the cluster before (vs after) the outcome. The 95% compatibility interval is given by the 2.5% and 97.5% quantiles of the marginal posterior distributions. The probability of effect is the proportion of the marginal posterior distribution in the same direction as the median, below/above the null (0 for linear coefficients and 1 for odds ratios). P-values were calculated from the maximum likelihood estimates of the regression coefficients; these are not displayed in the table as they are almost identical to the posterior medians in this large sample.]
Past week consumption – cluster 2. Estimates of order effects on Cluster 2 (recent consumption) are shown in Table 4.
The evidence suggested that those who responded to the AUDIT-C questionnaire (cluster 1) before past week consumption reported 9% fewer grams of alcohol than those responding to AUDIT-C after (IRR = 0.91, 95% CI = 0.84; 0.99, probability of effect 98.7%; P-value = 0.023). There was no evidence of any marked order effect from Cluster 3 or Cluster 4 on past week consumption, nor any interaction effects with the order of Clusters 1, 3, and 4.
PROMIS Global 10 – cluster 3. Estimates of order effects on Cluster 3 (quality of life) are shown in Table 5.
There was no evidence of any marked order effects with Cluster 1 and Cluster 2. There was however evidence that those responding to the SIP, injury, and emergency questions (Cluster 4) before the PROMIS Global 10 questions reported lower quality of life scores (beta = −0.98; 95% CI = −1.56; −0.41; probability of effect > 99.9%; P-value < 0.001). No interaction effects with the order of Cluster 1, 2, and 4 on PROMIS Global 10 were observed.
Short Inventory of Problems, injury, and emergency health care visits – cluster 4. Estimates of order effects on Cluster 4 (impact of alcohol use) are shown in Table 6.
The evidence suggested no order effects from any of the clusters on the questions relating to injury and emergency health care visits. Also, no interaction effects with the order of Clusters 1, 2, and 3 were observed on any of the outcomes in the fourth cluster.
Abandonment. Figure 1 illustrates the marginal posterior distributions over the probability of abandoning the questionnaire in each of the 24 conditions (including all 7334 randomised participants). The mean abandonment rate was 52%, illustrated by the vertical line in Figure 1. Conditions which started with alcohol consumption measures (clusters 1 and 2) were less likely to be abandoned than conditions that started with questions relating to quality of life and the impact of alcohol use (clusters 3 and 4).
Secondary analyses
Participants spent a median of 100 s responding to the questionnaire (IQR: 0; 256), including the 2078 participants who did not respond to any questions at all (for which the time spent was 0 s). Among those who completed at least one question, the median time spent was 200.5 s (IQR: 71; 298). Among completers, the median time spent was 256 s (IQR: 190; 348). There was no consistent evidence suggesting the order of clusters was associated with overall time spent on the questionnaire among both abandoners and completers. Similarly, among completers, there were no marked differences in time spent on the questionnaire regarding responses to the COS.
Discussion
We employed a factorial randomised trial to estimate order effects among the outcomes of the COS for ABIs, and to investigate patterns of abandonment. We found evidence that first responding to questions about average consumption led to lower subsequent self-reports of recent consumption and of the impact of alcohol use. First responding to questions about recent consumption, however, led to higher scores on average consumption. We also found evidence that first responding to questions regarding quality of life led to lower self-reports of average consumption and of the impact of alcohol use. Finally, first responding to questions regarding the impact of alcohol use led to higher average consumption, including a higher likelihood of being classified as a hazardous/harmful drinker, and lower quality of life.
Interpretation of findings
The causal mechanisms leading to the order effects identified in this study are unknown and were not explicitly measured. However, potential reasons include using one's own answers to earlier questions to inform answers to subsequent questions, 36 perhaps altering responses to be consistent across questions. Reflecting on the impact of alcohol use may be another reason why recall of alcohol consumption was affected. For instance, having first responded to the average consumption measures, participants may have altered their report of recent consumption to be closer to that of an average week's consumption. Since participants were recruited while searching for help online to reduce their drinking, they may recently have felt that they drank too much or experienced an adverse event, and as such their average weekly consumption may be less than their recent weekly consumption. Conversely, having first reported recent consumption, participants may have reminded themselves of what an average week's consumption looks like, or attempted to be more consistent in their responses, and thus reported higher average weekly consumption than if they had not had this opportunity to reflect.
Apart from order effects, this trial also identified a pattern of abandonment suggesting that first being asked about alcohol consumption measures resulted in less abandonment than first being asked about quality of life or the impact of alcohol use. Abandonment was more likely while responding to questions regarding recent consumption, quality of life, and impact of alcohol use, in comparison to average consumption (adjusted for the number of questions responded to). Previous research has found that attrition is higher when participants in alcohol studies are asked to respond to questionnaires perceived to be less relevant to them. 37 Participants were expecting to answer questions about alcohol and were therefore likely prepared to answer questions about consumption. Being asked questions about quality of life and the impact of alcohol may then have resulted in a higher than anticipated cognitive effort, reflection on consumption, emotions about consumption, or feeling judged about their alcohol use.

[Table 4 notes: The point estimate is the median of the marginal posterior distribution for the regression coefficient representing whether the cluster was asked before (vs after) the outcome; the coefficient represents the incidence rate ratio when asking the cluster before (vs after) the outcome. The 95% compatibility interval is given by the 2.5% and 97.5% quantiles of the marginal posterior distributions. The probability of effect is the proportion of the marginal posterior distribution in the same direction as the median (relative to 1 for incidence rate ratios). P-values were calculated from the maximum likelihood estimates, which are almost identical to the posterior medians in this large sample. Table 5: Estimates of order effects of Clusters 1, 2 and 4 on PROMIS Global 10 scores.]
Previous research
A previous study suggested that those who are first asked about recent consumption are later less likely to screen as hazardous or harmful drinkers using AUDIT-C (OR = 0.83; 95% CI = 0.70-0.99); 27 however, our findings showed the opposite. That decision to analyse question order effects appeared to be post-hoc (not mentioned in the protocol or trial registration) in an intervention trial, and the trial used AUDIT-C to determine hazardous/harmful drinking although the full AUDIT was collected. In contrast, the current study was purposely designed to study order effects, with a pre-registered protocol and statistical analysis plan. 29 The COS uses the alcohol consumption subscale (AUDIT-C) from the full AUDIT screening tool, which has an additional factor covering alcohol dependence and problems. 20,38 In a study of social desirability bias, 28 order effects of first responding to the dependence and problems subscales before the AUDIT-C were estimated. That study found no evidence of any marked order effects, which conflicts with our findings. There are several potential reasons why these studies show different results, including that the COS recommends the SIP to assess alcohol problems rather than the AUDIT subscale, and the issues of the AUDIT questions 9 and 10 in measuring change over time. 19 The SIP is a dedicated and detailed inventory of alcohol-related problems and may invite more reflection on alcohol consumption. There may be a number of differences in population composition and drinking intention which could explain the equivocal findings. We targeted individuals with search terms around help with drinking and improving research on alcohol interventions, whereas other studies often aim to develop or evaluate interventions. Similarly, we had 475 individuals sign up who were not current drinkers, which diverges somewhat from intervention research, although we excluded these from our analyses. Participants were also asked whether the aim of their participation was to get help to reduce their alcohol use; 29% of responders (1020/3494) said no. Again, this diverges from typical intervention research.

[Table 6 notes: For SIP and Injury, the coefficient represents the difference in outcome scores if the cluster was asked before (vs after) the outcome; for emergency health care visits, the coefficient represents the incidence rate ratio when asking the cluster before (vs after) the outcome. The 95% compatibility interval is given by the 2.5% and 97.5% quantiles of the marginal posterior distributions. The probability of effect is the proportion of the marginal posterior distribution in the same direction as the median, below/above the null (0 for linear coefficients and 1 for incidence rate ratios). P-values were calculated from the maximum likelihood estimates, which are almost identical to the posterior medians in this large sample.]
Implications for research and practice
Some of the effects we observed in this study are small, so whether they are relevant depends on the context in which one expects to observe them. For instance, when comparing brief intervention studies, differences in estimates may be partially due to the way in which questions were asked rather than to the interventions being different. Perhaps more problematically, if questionnaires were designed differently for intervention and control participants within a trial, the order effects may at least partially mask (or inflate) the observed effect, although where effects of interventions are large this may have less relevance and can possibly be disregarded. On the other hand, weekly alcohol consumption was 9% lower when AUDIT-C was asked first, which could be a substantial part of the effect one expects from some brief interventions.
Order effects appear to be present however the questions are arranged, and the intent of the investigators may determine in which order questions should be presented. For example, it is widely considered that self-reported alcohol consumption often underestimates the alcohol consumed. 39 If the intent is to capture a higher self-report, a potential option may be to present the impact of alcohol use and recent consumption first, followed by average consumption and quality of life. However, this may carry a penalty in relation to retention, where average consumption works best as the first cluster. A trial that aims to include hazardous and harmful drinkers will likely include more participants if they are asked to respond to the impact of alcohol use measures or recent consumption before average consumption. If used in a screening and brief intervention setting, where the feedback and advice are dynamic and based on participant responses, the order in which questions are asked during screening could affect the advice given, which should be borne in mind when designing intervention materials. Finally, if the COS outcomes are used in survey research, findings may be influenced by order effects. Care should be taken when comparing findings from different surveys or registries, and when possible, analyses should account for the uncertainty in outcome measures which order effects may produce.
Limitations and generalisability
This was an online study that did not require individuals to verify any personal identifiers; rather, we relied on web-browser storage to ensure that individuals were randomised once. However, this means that there is an unknown risk that participants used a different device or web browser to participate multiple times. Order effects could be reduced for those who had already been exposed to all questions and completed them again; thus, our effect estimates may be biased towards the null. Some 7% (484/7335) of participants visited the study site multiple times from the same web browser (visits within the same hour were not counted, to differentiate between revisits and reloading of the page); thus, interest in the study site was relatively low after initial contact, and there was no financial incentive that could have motivated multiple participation.
A substantial number of participants (n = 2078) did not respond to a single question. This is somewhat unsurprising, considering that individuals may simply have been curious about the website or study, and therefore clicked on the advert and consented without much reflection. Being faced with questions about alcohol and health may have discouraged further exploration. This does limit our ability to infer unbiased intention-to-treat estimates, and no data are available for imputation for these individuals. We ran our sensitivity analyses with imputed data among participants who had responded to at least one question, which attenuated some of the effect estimates but did not change our overall findings.
The COS is designed to be used at follow-up in alcohol intervention studies; thus, participants will have been asked similar questions at baseline and been screened into the study. In an intervention study, participants may therefore be partially protected against question order effects, owing to having previously responded to similar questions. However, it remains an open question whether the order effects identified in this trial persist over longer periods of time, or whether they only affect immediate subsequent responses. As others have noted, it would be interesting to see the intersection with objective measures such as biomarkers or ecological momentary assessment measures. 39,40 Finally, we strongly encourage researchers to replicate this study to develop stronger, international evidence on question order effects; those interested in doing so are invited to contact the corresponding author.
Conclusions
We found evidence of order effects among the four clusters of ORBITAL COS outcomes. Researchers designing studies which include measures of average and recent consumption, quality of life, and the impact of alcohol use should be aware of these effects and design (and pre-register) their studies accordingly. At a minimum, all study participants should be asked the same questions in the same order. Researchers should be guided by the nature of the studied population, recruitment, additional questions, concerns about under-reporting, screening for inclusion, and retention concerns. For instance, if the aim is to reduce attrition, consumption measures should be asked before quality of life and impact of alcohol use; however, this order affects self-reported consumption and should therefore be balanced against study priorities. Replication, for stronger international evidence on question order effects, will better guide researchers in decision making. The COS is practical and can be responded to in a reasonable time, with less attrition if average consumption measures are asked first.
Declaration of conflicting interests: MB owns a private company (Alexit AB) which develops and distributes digital lifestyle interventions to the general public and for use in health care settings. Alexit AB had no part in funding, planning, or execution of this trial. GWS was the lead researcher on the development of the ORBITAL effectiveness and efficacy outcome set. CG is a paid scientific consultant for the behaviour
"year": 2023,
"sha1": "34333b9c88f5ed5da4eff522a3ae9e3d9101a5c4",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4e7f687bd972e98978003b9c0bdcc3201a6fa11d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119269362 | pes2o/s2orc | v3-fos-license | Lepton flavor violation in inverse seesaw model
We analyze the lepton flavor violation processes $\mu-e$ conversion, $l_i\rightarrow l_j\gamma$ and $l_i\rightarrow 3l_j$ in the framework of the Standard Model (SM) extended with the inverse seesaw mechanism, as functions of $\tilde{\eta} = 1 -|Det(\tilde {U}_{PMNS})|$, which parameterizes the departure from unitarity of the light neutrino mixing sub-matrix $\tilde {U}_{PMNS}$. In a wide range of $\tilde{\eta}$, the predictions for the $\mu-e$ conversion rates and the branching ratio of $\mu\rightarrow e\gamma$ are sizeable enough to be compatible with the experimental upper limits or future experimental sensitivities. For large values of $\tilde{\eta}$, the predicted branching ratios of the other lepton flavor violating processes can also reach the experimental upper limits or future experimental sensitivities. The value of $\tilde{\eta}$ depends on the determinant of the Majorana mass term $M_{\mu}$. Finally, searching for lepton flavor violation processes in experiment provides more opportunities for probing the seesaw nature of neutrino masses.
Introduction
Neutrino oscillation experiments 1,2,3,4 have established compelling evidence that neutrinos are massive and that neutral lepton flavor is not conserved. In the SM extended with massive neutrinos, the charged Lepton Flavor Violation (LFV) processes arising at loop level, such as radiative two-body decays ($l_i \to l_j\gamma$) and leptonic three-body decays ($l_i \to 3l_j$), remain highly suppressed (see Table 1 5 for current experimental upper bounds), making them difficult to observe. The limit on the branching ratio of $\mu \to e\gamma$ is the most recent result given by the MEG experiment at the 90% confidence level 6. Nevertheless, various extensions of the SM, such as seesaw models with or without GUT, supersymmetry, $Z'$ models, etc., predict enhanced branching ratios of LFV processes that are accessible in current experiments. Thus, searching for LFV processes is a powerful way to probe physics beyond the SM.

Table 1. Current limits and future expectations for $\mu - e$ conversion, $l_i \to l_j\gamma$ and $l_i \to 3l_j$.

The seesaw mechanisms have been recognized as the most natural scenario for understanding the smallness of neutrino masses. In the canonical Type-I seesaw, three right-handed neutrinos are introduced, and to achieve light neutrino masses in the sub-eV range, right-handed neutrino masses at the Grand Unified (GUT) scale (i.e., $10^{16}$ GeV) are required, which makes LHC studies of the new physics scale difficult. In order to bring the right-handed neutrino masses down to the TeV scale, the small neutrino masses have to be suppressed via mechanisms other than the GUT scale, such as radiative generation, small lepton number breaking, or neutrino masses from an effective operator of dimension higher than five 16. Another option to relate small neutrino masses to TeV-scale physics is the inverse seesaw mechanism 17,18. The smallness of the light neutrino masses can then be ascribed to the smallness of $M_\mu$, which breaks lepton number by two units.
The smallness of $M_\mu$ is a key element of inverse seesaw models. So far, a very appealing picture is the radiative origin of the lepton-number-breaking parameter (violating lepton number by two units), as proposed in Ref. 19: it is induced at the two-loop level, thus explaining its smallness with respect to the electroweak (EW) scale. By introducing new scalar fields, the lepton-number-breaking term can also be induced at the two-loop level and is naturally around the keV scale, while the right-handed neutrinos are at the TeV scale 20. In the supersymmetric inverse seesaw mechanism, the smallness of $M_\mu$ was related to vanishing trilinear SUSY soft-breaking terms at the grand unified theory (GUT) scale 21. In warped extra dimensions, the smallness of $M_\mu$ can be dictated by order-one parameters that govern the location of the 5D profile of the $S$ fields in the bulk 22.
The effective mass matrix for the light neutrinos is given by $m_\nu \simeq M_D (M_R^T)^{-1} M_\mu M_R^{-1} M_D^T$, so that the scale of $M_R$ can be made small and many phenomena due to the non-unitary feature of the neutrino mixing matrix can be manifested, such as LFV, CP violation and non-standard effects in neutrino propagation 23. For $M_R$ around $10^3$ GeV, the scale of $M_\mu$ varies in the range $[10^{-10}, 10^{-8}]$ GeV, and to be compatible with the experimental limit on $\mu \to e\gamma$, large values of $M_\mu$ are favored 23. Assuming that $\Delta L = 2$ interactions are absent from the model, i.e., $M_\mu = 0$, Ref. 31 estimates that BR$(\tau \to 3e)$ or BR$(\tau \to e\mu\mu)$ can be as large as $10^{-6}$, although the limits used there are out of date. In the inverse seesaw model, the limits on degenerate values of $M_R$ and $M_\mu$ from the photonic contribution are much more stringent than those from the non-photonic contribution to $\mu - e$ conversion in nuclei, and the rates arising from virtual photon exchange are generically correlated with the $\mu \to e\gamma$ decay 34. It has also been shown that the predicted branching ratio of $\mu \to e\gamma$ can be within the reach of the MEG experiment in the B−L extension of the SM with inverse seesaw mechanism 35. In the supersymmetric inverse seesaw model, LFV decays can be enhanced by flavour-violating slepton contributions, the non-unitarity of the charged current mixing matrix, or Higgs-mediated processes 36,37,38. In the framework of a supersymmetric SO(10) model with inverse seesaw 39, the expected branching ratios for $l_i \to l_j\gamma$ are several orders of magnitude below future experimental sensitivity for TeV-scale slepton masses, and for $l_i \to 3l_j$ and $\mu - e$ conversion the predictions are much smaller than what can be probed in planned experiments.
In the SM, LFV decays originate mainly from the charged current through the mixing among the three lepton generations. The fields of the flavor neutrinos in the charged current weak interaction Lagrangian,

$$\mathcal{L}_{CC} = -\frac{g_2}{\sqrt{2}} \sum_{l=e,\mu,\tau} \bar{l}_L \gamma^\mu \nu_{lL} W^-_\mu + \mathrm{h.c.},$$

are combinations of the three massive neutrinos, $\nu_{lL} = \sum_{i=1}^{3} (U_{PMNS})_{li}\, \nu_{iL}$, where $g_2$ denotes the coupling constant of the gauge group SU(2), $\nu_{lL}$ are the fields of the flavor neutrinos, $\nu_{iL}$ are the fields of the massive neutrinos, and $U_{PMNS}$ corresponds to the unitary neutrino mixing matrix 40,41,42.
In this paper we study the LFV decays $l_i \to l_j\gamma$, $l_i \to 3l_j$ and $\mu - e$ conversion as functions of the non-unitarity parameter $\tilde{\eta}$, first introduced in Ref. 43, in the SM extended with the inverse seesaw mechanism. Moreover, we also investigate the dependence of $\tilde{\eta}$ on $M_\mu$. In this respect, the present paper differs from previous work. We perform a scan over non-degenerate parameters $M_R$ and $M_\mu$, which vary in the ranges $[1, 10^6]$ GeV and $[10^{-11}, 10^{-3}]$ GeV, respectively, taking into account the constraints from neutrino oscillation data and several rare decays. We discuss the resulting parameter space, which is narrower than that of Ref. 34. For CR$(\mu - e, \mathrm{Nucleus})$, both photonic and non-photonic contributions are considered in this paper.
The paper is organized as follows. In Section 2, we review the inverse seesaw mechanism and give the expression for the unitarity-violating parameter $\tilde{\eta}$. The numerical results and discussions are presented in Section 3. The conclusion is drawn in Section 4.
Inverse seesaw model
The inverse seesaw mechanism can be accommodated in the SM by adding two kinds of singlet fermions, $N^i_R$ and $S^i_R$, and one gauge singlet scalar $\Phi$ to the SM field content, where $N^i_R$ (i = 1, 2, 3) stand for the usual right-handed neutrinos, $S^i_R$ (i = 1, 2, 3) stand for additional gauge singlet neutrinos, and the two kinds of fermions carry opposite lepton numbers (−1 and +1, respectively). The relevant gauge invariant Lagrangian for neutrino masses is given by 17,18,20,44

$$-\mathcal{L}_\nu = Y_\nu\, \bar{l}_L \tilde{H} N_R + Y'_\nu\, \overline{N^c_R}\, \Phi\, S_R + \frac{1}{2} \overline{S^c_R}\, M_\mu\, S_R + \mathrm{h.c.},$$

where $l_L$ stands for the SU(2)$_L$ lepton doublet, $\tilde{H} \equiv i\sigma_2 H^*$ stands for the conjugate Higgs doublet, $Y_\nu$ and $Y'_\nu$ are the $3 \times 3$ Yukawa coupling matrices, and $M_\mu$ is a symmetric Majorana mass matrix. This mechanism introduces an extra U(1) gauge symmetry into the electroweak model, under which the right-handed neutrino must be a non-singlet. After spontaneous gauge symmetry breaking, the extra U(1) gauge group breaks down to U(1)$_Y$, the weak hypercharge of the standard model, and the Lagrangian in Eq. (3) becomes

$$-\mathcal{L}_\nu = M_D\, \bar{\nu}_L N_R + M_R\, \overline{N^c_R}\, S_R + \frac{1}{2} \overline{S^c_R}\, M_\mu\, S_R + \mathrm{h.c.},$$

where $M_D = Y_\nu \upsilon/\sqrt{2}$ and $M_R = Y'_\nu \langle\Phi\rangle$, with $\upsilon$ the vacuum expectation value of the SM Higgs boson. It shows that the right-handed neutrino mass term $M_R$ conserves lepton number, while the Majorana mass term $M_\mu$ violates lepton number by two units.
The neutrino mass matrix in the flavor basis defined by $(\nu_L, N^c_R, S^c_R)$ is given by

$$M = \begin{pmatrix} 0 & M_D & 0 \\ M_D^T & 0 & M_R \\ 0 & M_R^T & M_\mu \end{pmatrix},$$

where $M$ is a $9 \times 9$ matrix. The mass scales of $M_D$, $M_R$ and $M_\mu$ in Eq. (5) are assumed to obey $M_\mu \ll M_D \ll M_R$, and diagonalizing $M$ yields nine mass eigenstates $N_i$. The light neutrino flavour states $\nu_{lL}$ can be written in terms of the mass eigenstates via the unitary matrix $U$ as $\nu_{lL} = \sum_{j=1}^{9} U_{lj} N_{jL}$. The mixing matrix relevant for the charged current is then simply the rectangular matrix formed by the first three rows of $U$ in Eq. (6), and the matrix $\tilde{U}_{PMNS}$ describing the mixing between the charged leptons and the light neutrinos in the inverse seesaw mechanism is its $3 \times 3$ sub-block. In the inverse seesaw mechanism, $U$ in Eq. (6) mixes the light and heavy states, so $\tilde{U}_{PMNS}$ is not unitary, and the departure from unitarity can be parameterized by $\tilde{\eta} = 1 - |\mathrm{Det}(\tilde{U}_{PMNS})|$. It has been shown in Ref. 43 that a large value of $\tilde{\eta}$ is responsible for lepton flavour universality violation in $K^+$ and $\pi^+$ leptonic decays in the SM extended with the inverse seesaw mechanism. The diagonalization of $M$ leads to an effective mass matrix for the light neutrinos in the leading-order approximation 46,

$$m_\nu \simeq M_D (M_R^T)^{-1} M_\mu M_R^{-1} M_D^T,$$

which indicates that the light neutrino masses vanish in the limit $M_\mu \to 0$, where lepton number conservation is restored. The effective mass matrix $m_\nu$ is diagonalized by the physical neutrino mixing matrix $U_{PMNS}$, which in the standard parametrization 5 reads

$$U_{PMNS} = \begin{pmatrix} c_1 c_3 & s_1 c_3 & s_3 e^{-i\delta} \\ -s_1 c_2 - c_1 s_2 s_3 e^{i\delta} & c_1 c_2 - s_1 s_2 s_3 e^{i\delta} & s_2 c_3 \\ s_1 s_2 - c_1 c_2 s_3 e^{i\delta} & -c_1 s_2 - s_1 c_2 s_3 e^{i\delta} & c_2 c_3 \end{pmatrix} \cdot \mathrm{diag}(1, e^{i\Phi_1}, e^{i\Phi_2}),$$

where $s(c)_1 = \sin(\cos)\theta_{12}$, $s(c)_2 = \sin(\cos)\theta_{23}$, $s(c)_3 = \sin(\cos)\theta_{13}$, and the experimental limits on the mixing angles are given in Table 2. The phase $\delta$ is the Dirac CP phase, and $\Phi_i$ are the Majorana phases. The remaining six heavy states have masses approximately given by $M_\nu \simeq M_R$. Without loss of generality, we work in a basis where $M_R$ is assumed to be diagonal. Using a modified Casas-Ibarra parametrisation 47, which automatically reproduces the light neutrino data, $Y_\nu$ can be written (in a common convention) as

$$Y_\nu = \frac{\sqrt{2}}{\upsilon}\, U_{PMNS}\, \sqrt{\hat{m}_\nu}\, R\, \sqrt{\tilde{M}},$$

with $\upsilon$ the vacuum expectation value of the SM Higgs boson, $\hat{m}_\nu$ the diagonal light neutrino mass matrix, $\tilde{M} = M_R M_\mu^{-1} M_R^T$ the relevant heavy-mass combination, and $R$ a $3 \times 3$ complex orthogonal matrix, parametrized by three complex angles $\alpha_1, \alpha_2, \alpha_3$ (again in a common convention):

$$R = \begin{pmatrix} c_2 c_3 & -c_1 s_3 - s_1 s_2 c_3 & s_1 s_3 - c_1 s_2 c_3 \\ c_2 s_3 & c_1 c_3 - s_1 s_2 s_3 & -s_1 c_3 - c_1 s_2 s_3 \\ s_2 & s_1 c_2 & c_1 c_2 \end{pmatrix},$$

with the notation $c_i = \cos\alpha_i$ and $s_i = \sin\alpha_i$, $i = 1, 2, 3$. For simplicity, we assume $R$ to be real in our calculation. The interactions of the nine neutrino mass eigenstates $N_j$ and the charged leptons $l_i$ with the gauge bosons $W^\pm$ and $Z$ are correspondingly given, schematically, by the Lagrangians

$$\mathcal{L}_W = -\frac{g_2}{\sqrt{2}} \sum_{i=1}^{3}\sum_{j=1}^{9} \bar{l}_i \gamma^\mu P_L\, C_{ij} N_j\, W^-_\mu + \mathrm{h.c.}, \qquad \mathcal{L}_Z = -\frac{g_2}{2 c_w} \sum_{i,j=1}^{9} \bar{N}_i \gamma^\mu P_L\, (C^\dagger C)_{ij} N_j\, Z_\mu + \mathrm{h.c.},$$

where $g_2$ is the coupling constant of the gauge group SU(2), $c_w$ is the cosine of the weak mixing angle, and $P_{L/R} = \frac{1}{2}(1 \mp \gamma_5)$. $C_{ij}$ is defined as the rectangular sub-matrix formed by the first three rows of $U$, i.e., $C_{ij} = U_{ij}$ ($i = 1, 2, 3$; $j = 1, \ldots, 9$). Here, $C_{ij}$ is also not unitary.
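To make the construction above concrete, the following hedged numerical sketch (all input values invented) builds the 9×9 mass matrix for real example inputs, compares the light masses with the leading-order formula, and evaluates $\tilde{\eta} = 1 - |\mathrm{Det}(\tilde{U}_{PMNS})|$ from the light 3×3 sub-block; for complex inputs a Takagi factorization would be needed in place of the real eigendecomposition used here:

```python
# Hedged sketch (inputs invented): inverse-seesaw mass matrix, light masses,
# and the non-unitarity parameter eta.
import numpy as np

rng = np.random.default_rng(0)
v = 246.0                                  # GeV, SM Higgs vev
Y_nu = 1e-2 * rng.standard_normal((3, 3))  # toy Yukawa couplings
M_D = Y_nu * v / np.sqrt(2)                # Dirac mass term, GeV
M_R = np.diag([1e3, 2e3, 3e3])             # heavy lepton-number-conserving scale, GeV
M_mu = np.diag([1e-7, 1e-7, 1e-7])         # small lepton-number-violating scale, GeV

Z = np.zeros((3, 3))
M = np.block([[Z,     M_D,   Z],
              [M_D.T, Z,     M_R],
              [Z,     M_R.T, M_mu]])       # symmetric 9x9 matrix in (nu_L, N^c_R, S^c_R)

w, U = np.linalg.eigh(M)                   # real symmetric case: masses are |eigenvalues|
idx = np.argsort(np.abs(w))
light = np.abs(w)[idx][:3]

m_eff = M_D @ np.linalg.inv(M_R.T) @ M_mu @ np.linalg.inv(M_R) @ M_D.T
light_approx = np.sort(np.abs(np.linalg.eigvalsh(m_eff)))
print(light, light_approx)                 # agree at leading order in M_D/M_R

U_pmns_tilde = U[:3, idx[:3]]              # light 3x3 mixing sub-block
eta = 1.0 - abs(np.linalg.det(U_pmns_tilde))
print("eta ~", eta)
```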
Numerical Analysis
To quantitatively study the non-unitarity effect on the various LFV processes, we perform a scan over the parameter space described as follows. Present data on neutrino masses and mixing must be accounted for; these are listed in Table 2 5. We vary the mixing angles and the mass-squared differences $\Delta m^2_{21}$ and $\Delta m^2_{32}$ within their 3σ experimental errors and set the value of $\sin^2 2\theta_{23}$ equal to 1. The light neutrino mass spectrum is assumed to be normal ordering, i.e., $\Delta m^2_{32} > 0$, and the CP-violating phases $\delta$, $\Phi_1$ and $\Phi_2$ are set to zero. The lightest neutrino mass varies in the range $[10^{-5}, 1]$ eV. We also assume the R-matrix angles in Eq. (15) to be real (thus no contributions to lepton electric dipole moments are expected) and vary them randomly in the range $[0, 2\pi]$. The use of $Y_\nu$ in Eq. (13) ensures that the above neutrino oscillation data are satisfied.
In the SM with inverse seesaw mechanism, the relevant input parameters are the right-handed neutrino mass matrix $M_R$ and the Majorana mass matrix $M_\mu$. Here, as mentioned before Eq. (13), $M_R$ is a diagonal matrix. We make the minimal flavor violation hypothesis, which consists in assuming that flavor is violated only in the standard Dirac Yukawa coupling. Under this simplification, the $3 \times 3$ matrix $M_\mu$ must also be diagonal. We randomly vary the entries $(M_R)_{ii}$ in the range $[1, 10^6]$ GeV and $(M_\mu)_{ii}$ in the range $[10^{-11}, 10^{-3}]$ GeV.
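As an illustration only (the paper's own scan procedure is not given), the random draw of the diagonal entries over these ranges could look like the following sketch; log-uniform sampling is an assumed choice, motivated by the many orders of magnitude spanned:

```python
# Hypothetical sketch of the random scan over (M_R)_ii and (M_mu)_ii; the
# ranges are from the text, the log-uniform sampler is an assumption.
import numpy as np

rng = np.random.default_rng(1)

def draw_diag(low, high, size=3):
    """Draw `size` entries log-uniformly in [low, high] (GeV) as a diagonal matrix."""
    return np.diag(10 ** rng.uniform(np.log10(low), np.log10(high), size))

M_R = draw_diag(1.0, 1e6)       # (M_R)_ii in [1, 1e6] GeV
M_mu = draw_diag(1e-11, 1e-3)   # (M_mu)_ii in [1e-11, 1e-3] GeV
print(np.diag(M_R), np.diag(M_mu))
```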
Table 3. Constraints used in the scan over free parameters. [Table header: Channel | Fraction or Limit, repeated in two column pairs; the table entries were lost in extraction.]

The experimental measurements of several rare decays must also be considered, because the parameter space is strongly constrained by such measurements. These rare decays have been investigated in the literature 31,32,43,48. The non-unitary nature of the neutrino mixing matrix can manifest itself in tree-level processes such as leptonic decays of the W boson and of mesons ($B^+$, $D^+_s$, $K^+$ and $\pi^+$), and in the invisible decay of the Z boson. It can also manifest itself in LFV decays of the Z boson, in LFV rare charged lepton decays such as $l_i \to l_j\gamma$ and $l_i \to 3l_j$, and in the LFV process of $\mu - e$ conversion in an atom, which proceed via one-loop processes and hence can be constrained. The current experimental values and limits are listed in Table 1 and Table 3, with measured fractions at the 1σ level and limits at the 90% confidence level 5 (except for $Z \to e^\pm\mu^\mp$, $Z \to e^\pm\tau^\mp$ and $Z \to \mu^\pm\tau^\mp$, for which the 95% C.L. bounds are given). We use these limits to bound the parameter space. For the channels listed in Table 3, we require that our numerical results be compatible with the experimental values within 3σ experimental errors.
[Fig. 1: allowed points versus $(M_R)_{11}$; the shaded region is compatible with the constraints in Table 1 and Table 3.] In addition, the blank area in the upper right corner of Fig. 1 is also excluded, which is not displayed in Ref. 34. We also investigate the dependence of $\tilde{\eta}$ on $\sin^2 2\theta_{12}$, $\sin^2 2\theta_{13}$, $\Delta m^2_{21}$, $\Delta m^2_{32}$, $m_{\nu_e}$, $(M_R)_{ii}$ and $(M_\mu)_{ii}$; it turns out that $\tilde{\eta}$ depends most strongly on $(M_\mu)_{ii}$. In Fig. 2, we display the determinant Det$(M_\mu)$ versus Log[$\tilde{\eta}$] from a scan over a few $10^6$ points in the parameter space of the inverse seesaw mechanism. Here, since $M_\mu$ is diagonal, Det$(M_\mu) = (M_\mu)_{11}(M_\mu)_{22}(M_\mu)_{33}$. The figure shows that large values of the unitarity violation $\tilde{\eta}$ (e.g., $10^{-4}$) correspond to small scales of Det$(M_\mu)$ (e.g., $10^{-15}$ GeV$^3$) or of $(M_\mu)_{ii}$ (e.g., $10^{-5}$ GeV). In models where lepton number is spontaneously broken by a vacuum expectation value $\langle\sigma\rangle$ 46, one has $(M_\mu)_{ii} = (\lambda)_{ii} \langle\sigma\rangle$, with $M_\mu$ diagonal as assumed. For typical Yukawas $(\lambda)_{ii} \sim 10^{-3}$, one sees that $(M_\mu)_{ii} \sim 10^{-6}$ GeV corresponds to a lepton-number-violation scale $\langle\sigma\rangle \sim 10^{-3}$ GeV 34. Thus, if the LFV processes are observed in experiment, the vacuum expectation value $\langle\sigma\rangle$ should lie at the scale of $(1 - 10^{-3})$ GeV, under the assumption of typical Yukawas.
In Fig. 3, we show the area plot of CR$(\mu - e, \mathrm{Au})$ versus Log[$\tilde{\eta}$] in the inverse seesaw mechanism from the scan over a few $10^6$ points in parameter space. The expected conversion rates CR$(\mu \to e, \mathrm{Au})$ are sizeable enough to be compatible with the experimental upper limit and future experimental sensitivities in the range $10^{-14} < \tilde{\eta} < 10^{-4}$. For $\tilde{\eta} < 10^{-14}$, the upper limit of CR$(\mu - e, \mathrm{Au})$ decreases. The expected conversion rates can also be very small over the whole region $10^{-18} < \tilde{\eta} < 10^{-4}$. The area plots for CR$(\mu - e, \mathrm{Al})$, CR$(\mu - e, \mathrm{Ti})$ and CR$(\mu - e, \mathrm{Pb})$ versus Log[$\tilde{\eta}$] show the same behavior.
Fig. 4 shows the area plots of BR$(\mu \to e\gamma)$ and BR$(\tau \to e\gamma)$ versus Log[$\tilde{\eta}$] in the inverse seesaw mechanism from the scan over a few $10^6$ points in parameter space. Most predictions of BR$(\mu \to e\gamma)$ lie just below the experimental upper limit in the range $10^{-16} < \tilde{\eta} < 10^{-4}$. In the narrow range $10^{-18} < \tilde{\eta} < 10^{-16}$, the upper limit of the predictions decreases. The prediction of BR$(\tau \to e\gamma)$ can reach the current limits only when $\tilde{\eta}$ is large ($\tilde{\eta} > 10^{-10}$); the upper limit of the predictions decreases when $\tilde{\eta} < 10^{-10}$. It is noteworthy that $\mu \to e\gamma$ is more constraining than $\tau \to e\gamma$ in most cases, as a comparison between the panels of Fig. 4 shows. However, there is still a probability that both predictions are very close to the experimental upper limit ($\tilde{\eta} > 10^{-10}$). [Figure captions note that the shaded regions are compatible with the constraints in Table 1 and Table 3.] BR$(\mu \to 3e)$ and BR$(\tau \to 3e)$ can reach the experimental limits at large values of $\tilde{\eta}$ (about $\tilde{\eta} > 10^{-12}$ and $\tilde{\eta} > 10^{-10}$, respectively), and the upper limits of the predictions decrease when $\tilde{\eta} < 10^{-12}$ and $\tilde{\eta} < 10^{-10}$, respectively. The observation of the LFV decays $\mu \to 3e$ and $\tau \to 3e$ would indicate a large violation of unitarity in the light neutrino mixing matrix and a lower vacuum expectation value $\langle\sigma\rangle$. The area plot for BR$(\tau \to 3\mu)$ versus Log[$\tilde{\eta}$] shows the same behavior as that for BR$(\tau \to 3e)$.
Conclusions
The non-unitarity of the light neutrino mixing matrix in seesaw mechanisms is a generic feature of theories with mixing between neutrinos and heavy states, and it provides a window to probe new physics at the TeV scale.
In this paper we have studied the lepton flavor violating decays $l_i \to l_j\gamma$, $l_i \to 3l_j$ and $\mu - e$ conversion as functions of the non-unitarity parameter $\tilde{\eta}$ in the SM extended with the inverse seesaw mechanism, through a scan over the parameter space defined by the right-handed neutrino mass matrix $M_R$ and the Majorana mass matrix $M_\mu$. Taking into account the constraints from neutrino oscillations and various rare decays, the allowed parameter space is narrower than that of Ref. 34. The results show that large values of the unitarity violation $\tilde{\eta}$ are related to small scales of Det$(M_\mu)$, or to a small vacuum expectation value $\langle\sigma\rangle$ in models with spontaneously broken lepton number. In the range $10^{-14} < \tilde{\eta} < 10^{-4}$, the upper limits of the predictions for CR$(\mu - e, \mathrm{Nucleus})$ and BR$(\mu \to e\gamma)$ can reach the experimental sensitivity and are promising for direct detection in the near future. In the range $10^{-10} < \tilde{\eta} < 10^{-4}$, the upper limits of BR$(\tau \to e(\mu)\gamma)$, BR$(\mu \to 3e)$ and BR$(\tau \to 3e(\mu))$ can also reach the experimental sensitivity. Finally, searching for LFV processes can serve as a window onto the new physics behind the seesaw nature of neutrino masses.
"year": 2013,
"sha1": "82cb4e6bedbccd27b25fd5765cc3f7007c1a2542",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1312.2073",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "82cb4e6bedbccd27b25fd5765cc3f7007c1a2542",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
248763147 | pes2o/s2orc | v3-fos-license | Spatial and seasonal patterns of water use in Mediterranean coastal dune vegetation
This paper examines the water dynamics of a coastal dune plant community, addressing spatial and seasonal variations. We aimed to detect the patterns of water use by plants at the community level according to their distribution across a coastal dune gradient from beach to inland. Five sites were established: upper beach, embryo-dune, slack, foredune, and inland. Eight perennial species were sampled seasonally to analyse the isotopic composition and water potential. Soil water samples at three depths, groundwater, and atmospheric water were obtained to determine plant water sources. The species of the inland and foredune plant communities, Retama, Juniperus, and Helichrysum, showed the most stable isotopic signal throughout the year. On the contrary, the species most abundant on the upper beach, embryo-dune, and slack (Ammophila, Achillea, and Polygonum) showed the highest variability. Water deficit decreased the dependence on shallow and mid-soil layers along the beach-inland gradient. Beach and embryo-dune sites showed less negative leaf water potential values than the other positions along the dune gradient. Three factors determine the proportions of the water sources used by coastal dune vegetation: community composition, distance to the sea, and seasonality. Coastal dune vegetation exhibited a species-specific response in water uptake that was modified by its location along the gradient. From the upper beach to the inland, the plant communities showed a slight progressive increase in the use of water from deeper layers. This pattern was similar to, and overlapped with, the wet-to-dry seasonal pattern.
represent for this vegetation stress factors that add to those mentioned above, especially as seasonal water deprivation is aggravated by the low field water capacity of sandy soils and by the likelihood of eventual ocean water intrusion (Sternberg and Swart 1987). In sandy soils, compared with other mechanisms (indirect recharge from runoff), aquifer recharge occurs mainly through direct infiltration of precipitation (Schmidt et al. 2011). Variations in precipitation cause alterations in the structure and function of plant communities (Greaver and Sternberg 2010).
Different studies have provided evidence of the differential use of water by terrestrial vegetation in coastal dunes (Greaver and Sternberg 2006, 2010; Antunes et al. 2018a, b, 2019); therefore, plants living in coastal environments may extract water from different available sources: rainwater, groundwater, fog, ocean water, or their mixtures. Few works have focused on vegetation water uptake in Mediterranean coastal dunes (Valentini et al. 1992; Alessio et al. 2004; Antunes et al. 2018a, b).
Spatial and seasonal variations in water uptake and differences between species have also been studied from different perspectives (Dawson and Pate 1996;Pivovaroff et al. 2016;Ding et al. 2021). Nevertheless, there are few studies focused on water uptake dynamics across environmental gradients and the differences in the response among co-occurring species to the annual hydrological cycle.
The water-uptake strategy is a significant plant trait in dry or seasonally dry environments, determining plant survival (Dawson and Pate 1996). Furthermore, different water-uptake patterns among species are useful strategies to reduce competition for water in dry soils (Mooney et al. 1980; Verweij et al. 2011). Understanding the water-uptake pattern of plant roots is important to improve our knowledge of plant responses to hydrological conditions and is particularly significant in water-limited habitats. Moreover, in addition to seasonal fluctuations in soil moisture conditions (as happens in Mediterranean areas), fluctuations in water availability across short gradients must also be considered (Oliveira et al. 2005; West et al. 2012).
Previous studies have shown that root distribution alone is not sufficient to identify plant root uptake strategies (Verweij et al. 2011; Tron et al. 2015; Nehemy et al. 2021). Stable isotopes used as tracers are an effective tool for determining the water-uptake patterns of plants (Sternberg and Swart 1987; Dawson and Ehleringer 1993; Penna et al. 2018). The relative abundance of 18O vs 16O (expressed as δ18O) in xylem sap can be effectively used to differentiate the specific origins of the water taken up (Flanagan and Ehleringer 1991; Dawson et al. 2002; Barbeta et al. 2018; Amin et al. 2020), since, depending on its source, water usually exhibits specific isotopic signatures, i.e., 18O/16O ratios (Craig 1961; Dansgaard 1964).
Soil water evaporation causes enrichment in the heavy isotopes (18O, 2H) in the remaining soil water (Allison et al. 1983), and this enrichment is more pronounced in the superficial layers of the soil than in the deep layers. Accordingly, plants with roots exploring deep soil layers will present a lower, depleted heavy-isotope signal, indicating water that has undergone little evaporation. On the contrary, plants with shallow roots that explore the surface layers of the soil will be subject to greater evaporation and will present a higher, enriched isotopic signal (Querejeta et al. 2007; Nie et al. 2011). However, this difference in isotopic signals as a function of soil depth depends on environmental conditions such as soil water content, temperature, or soil porosity (Sprenger et al. 2016). After a few days of dryness and under high evaporative demand, these differences will be more pronounced, while under high humidity or abundant rainfall this effect decreases and can even be reversed.
Another aspect to consider is the vulnerability of coastal dunes to the effects of climate change. An increase in mean sea level associated with climate variation could alter the spatial distribution of dune plant species (Mendoza-González et al. 2013). The erosion of coastal dune systems will increase as sea level rises (Feagin et al. 2005; Ranasinghe et al. 2012), and since vegetation is essential to the stabilization of dunes, it is crucial to better understand the response of dune vegetation to changes in water availability. Climate model simulations foresee that the Mediterranean region will become drier and warmer, especially in summer (precipitation reductions exceeding 25-30% and warming exceeding 4-5 °C) (Giorgi and Lionello 2008). In response to current climate change, many species have already shifted their geographic ranges, seasonal activities, and abundances (IPCC 2014). Knowledge of plant water-uptake strategies is useful to anticipate how species will be affected by changes in water availability and soil water resources under the predicted climate change.
Since an environmental gradient from the ocean to the inland exists, and assuming seasonal differences along the year under the Mediterranean climate (mainly due to precipitation), we hypothesised a changing pattern in the vegetation's water use both in space and time. Specifically, our starting hypotheses are: 1) the source of water used by dune vegetation varies seasonally; 2) there is a zonal distribution of the species across the beach-inland gradient according to water-use strategy; 3) the water-uptake strategy is species-specific but modulated by the species' distribution across the beach-inland gradient.
Taking all this into account, the main objective of this study was to assess whether the water uptake pattern of dune plants is species-specific or, in contrast, is modulated by the spatial gradient from the upper beach to the inland and seasonal water availability.
We attempted to answer the following research questions: I) How do the main water sources of dune plants change over the seasons according to water availability? II) Is there a water-uptake pattern at the community level related to species distribution across the coastal dune gradient from the beach to the inland? III) Is the water-uptake strategy species-specific and independent of the spatial distribution of dune plants across the beach-inland gradient?
Study site and species
Research work was carried out on the El Rompido spit (Lepe, Huelva, 37º12'N, 7º07'W), South-West Spain. El Rompido spit is a sandy bar that extends for some 12 km parallel to the coastline, at the Piedras river estuary. It is 300 to 700 m wide and comprises dune ridges separated by tidal swales and salt marshes. The soil is fine sand with < 3% of fine particles (silt + clay). It is a very poor soil, with an organic matter content of 1 to 2.6 mg g −1 established at depths of 5 to 10 cm. Soil pH is alkaline, 9.5 (due to the high carbonate content of 4-7 mg CaCO 3 ) and the conductivity is low (< 100 µS cm −1 ) (Muñoz-Vallés et al. 2015).
The vegetation of El Rompido dunes is well described by Muñoz-Vallés et al. (2015). In the upper beach, the vegetation is sparse and composed of Polygonum maritimum, Cakile maritima, Elymus farctus, Pancratium maritimum and Euphorbia paralias, among others. On embryo-dunes Ammophila arenaria, Achillea maritima and Euphorbia paralias are present. On foredune, the vegetation is dominated by A. arenaria and other species such as Eryngium maritimun, Artemisia campestris subsp. maritima and Crucianella maritima. Finally, on the back of the dunes, inland, the plant community is dominated by the leguminous shrub Retama monosperma and other shrub species such as A. campestris subsp. maritima, Helichrysum italicum subsp. picardii and Thymus carnosus. This multi-aged and well-developed shrub community represents the late-successional stage of the coastal dune vegetation, where more woody species become established because of the more stable areas at the back of the dunes.
The climate in the study area is Mediterranean with Atlantic influence. The average annual temperature and rainfall are 18.1º C and 490 mm, respectively, including a long dry and warm period from May to September (30-year record, from 1971 to 2000; data from Huelva Meteorological Station, AEMET). We used monthly precipitation and mean temperature information from a meteorological station located 12 km away (Lepe, Huelva, Spain) (Fig. 1A). The precipitation pattern was wetter than usual during the periods 2009-2010 and 2010-2011, being 880.6 mm and 684.6 mm respectively. These values represent 80% and 40% over the 30-year average (490 mm). Nonetheless, the period 2011-2012 was exceptionally dry, with an annual precipitation of 289.4 mm, 41% below the average.
The study was conducted in May (spring) and December (autumn) 2010, selected as warm and cold dates, respectively, of a wet year. These two sampling months are representative of the most favourable periods of the year for Mediterranean vegetation and of a hydrologically optimal year. The sampling periods of February (winter) and July (summer) 2012, respectively the coldest and warmest months of the year, represent the periods of greatest stress on vegetation in the Mediterranean climate, accentuated in this case by falling in a dry year. According to Ellsworth and Sternberg (2015), natural inter- and intraseasonal variations cannot be captured by single measurements, especially in seasonal climates. For that reason, we took several measurements per species, dune site, and season, in the four seasons distributed over two hydrological years.
Beach-inland gradient: dune profiles and vegetation pattern
According to the topography of the dune and proximity to the ocean, sampling plots were set at the following sites: upper beach, embryo-dune crest, slack, foredune crest and inland depression (hereafter beach, embryo-dune, slack, foredune and inland, respectively) (Fig. 2). The beach and inland sites were the closest and farthest points to the ocean, and marked the extremes of a hydrological gradient across the dunes. To determine the dune profile, we established three parallel transects starting at the mean high-tide point and ending at the inland depression (each transect around 80 m long, perpendicular to the dune line, and separated 100 m from each other). Topographic measurements were taken every meter with an optical theodolite to determine the difference in height between points. Along the dune system gradient, 1 × 1 m plots were set every metre to determine plant species distribution. In every vegetation plot, the presence-absence of every species was registered (Table 1).
Based on their abundance in the dune communities, we selected eight perennial species distributed across the gradient of the dune system from the upper beach to the inland (Table 1).

Groundwater level and salinity

Groundwater (GW) depth and salinity, through its electrical conductivity (EC, mS cm−1), were measured monthly using a water level indicator (KLL mini, Seba Hydrometrie, Kaufbeuren, Germany) and a conductivity meter (HI 9835, Hanna Instruments, Woonsocket, USA). To reach the GW, we installed two piezometers (polyvinyl chloride tubes with an outside diameter of 90 mm), one in the slack (PZsl) and another in the inland site (PZin). The buried ends of the tubes were covered with a permeable polyethene fabric to avoid sand filling. The groundwater level was measured with the ground surface as a reference, so PZsl always appeared deeper than PZin due to topography (see Fig. 2). The water level in the piezometer is assumed to be the same as the phreatic level.
Water source sampling

Plant water sources in adult plants were determined through the oxygen isotopic composition (δ18O) of xylem water and of the possible water sources (n = 6-9 plants per species and site in spring, summer, and autumn; n = 5-6 plants in winter). Bayesian mixing models (MixSIAR) were used to compare xylem δ18O with the values obtained from GW and soil at different depths, allowing us to determine the origin of the water used by plants for every species, site, and season. Soil water, GW, and rainwater are the possible plant water sources available in the soil profile, where they mix vertically. Groundwater samples were extracted seasonally from the two piezometers mentioned above using a pump. The water samples were kept refrigerated in double-cap polyethene bottles sealed with parafilm until analysed, to prevent evaporation and isotopic fractionation.
Soil samples were collected seasonally at three different depths (topsoil: 10 cm, mid soil: 25 cm, deep soil: 50 cm) at each site and in the three transects (three replicates per depth). The samples were stored in screw-cap glass vials, following the same procedure as for the plant samples. Soil samples were also collected in polyethene bags to measure soil water content at each site at the three depths. The samples were cleaned of plant material and weighed before and after being oven-dried for 48 h at 100 °C to calculate the gravimetric water content (% g g−1). We sampled the first 50 cm because, in the studied sandy soils, most root biomass is concentrated in the upper layers (75% of the root biomass is located in the upper 37.5 cm; Martínez et al. 1998). We also took into account that evaporative fractionation is generally limited to the upper 0.3 m of the soil (Sprenger et al. 2016) and that, according to Amin et al. (2020), water uptake mainly occurs in a superficial layer of 30-50 cm depth in this type of climate.
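As a rough illustration of the mass-balance logic behind the mixing analysis described above (this is not MixSIAR, which is an R package; all isotope values below are invented), the posterior proportions of the three soil-depth sources for a single xylem sample can be sketched as follows:

```python
# Sketch of a three-source isotope mixing estimate via importance sampling
# over a flat Dirichlet prior; values are invented, not from the study.
import numpy as np

rng = np.random.default_rng(42)
d18o_sources = np.array([-1.5, -3.0, -4.5])   # topsoil, mid soil, deep soil (per mil)
sd_sources = np.array([0.6, 0.4, 0.3])        # assumed source spreads
d18o_xylem = -3.4                             # one plant's xylem water (per mil)
sd_xylem = 0.3

f = rng.dirichlet(np.ones(3), size=200_000)   # candidate source proportions
mix_mean = f @ d18o_sources
mix_sd = np.sqrt((f ** 2) @ sd_sources ** 2 + sd_xylem ** 2)
loglik = -0.5 * ((d18o_xylem - mix_mean) / mix_sd) ** 2 - np.log(mix_sd)
w = np.exp(loglik - loglik.max())
post_mean = (f * w[:, None]).sum(axis=0) / w.sum()
print("posterior mean source proportions:", post_mean.round(2))
```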
In this study, rainwater has not been considered a direct potential water source. Rainwater always mixes with soil water stored during previous rain events before being taken up by roots, and is often segregated in space and time even before mixing in the soil or recharging the groundwater (Evaristo et al. 2015). Although under certain conditions water can be absorbed through leaves or even bark during rain events, this uptake is very low compared with transpiration in arid ecosystems (Cavallaro et al. 2020). Nevertheless, rainwater was collected from two pluviometers installed at the sampling site to characterize its isotopic input to soil water. To prevent evaporation, a 5-mm layer of liquid paraffin was added to the pluviometer collector.
Atmospheric water can be another important source of moisture as some of these species have leaf morphological structures, which facilitate dew uptake. Seasonally, we collected atmospheric water (as either vapour or small water droplets) at dawn by pulling air through a dry-ice-cooled glass condenser (following Helliker et al. 2002).
Plant material sampling
We collected xylem samples seasonally in the morning (n = 6-9 samples per species and site in spring and autumn 2010 and summer 2012, n = 5-6 samples in winter 2012). Samples from small-size species could include more than one individual. For the isotopic analysis of xylem water, leafless, lignified and mature stem fragments (rhizomes in the case of Ammophila) were cut and directly preserved in screw cap glass vials, sealed with parafilm, and kept refrigerated during transport to the laboratory, where they were frozen until extraction of xylem water.
Water extraction and isotopic analysis
The water from the soil and plant samples was extracted employing a custom-made cryogenic vacuum distillation system at the Stable Isotopes and Instrumental Analysis Facility (SIIAF), Centro de Ecología, Evolução e Alterações Ambientais (CE3C), Universidade de Lisboa (Lisbon, Portugal). The guidelines of Ehleringer and Osmond (1989), Ehleringer and Dawson (1992) and West et al. (2006) were followed. In summer, we could not obtain enough water from several soil samples from 10 and 25 cm deep due to the excessive dryness of the soil. Consequently, the mean of the soil samples of the five sites was used for the MixSIAR analyses and the figures in summer.
According to Ellsworth and Williams (2007), during plant water uptake, δ 2 H may fractionate in xylem water samples in species adapted to saline or xeric environments. Hence, we used only δ 18 O to detect water sources in plants.
The abundance of the heavy isotope was expressed in delta notation (δ), in parts per thousand (‰), as

δ = (R_sample / R_standard − 1) × 1000,

where R_sample and R_standard are the molar ratios of heavy to light isotopes of the sample and the international standard (Vienna Standard Mean Ocean Water, VSMOW). Oxygen stable isotope ratio analyses were performed at SIIAF by headspace equilibration on an Isoprime (Micromass, UK) SIRMS, coupled in continuous-flow mode to a Multiflow (Micromass, UK) auto-sampler and sample equilibration system. The materials used as reference were Medium Natural Water (Elemental Microanalysis Ltd, UK; δ18O V-SMOW = -10.18 ± 0.2‰) and Zero Natural Water (Elemental Microanalysis Ltd, UK; δ18O V-SMOW = 0.56 ± 0.23‰), regularly checked against IAEA-VSMOW and IAEA-GISP (Coleman and Meier-Augenstein 2014). The analytical precision was < 0.1‰.
Leaf water potential
In order to evaluate the water status of the vegetation, leaf water potential was monitored in the study species (n = 9 measurements per species and site in spring 2010, autumn 2010 and summer 2012; n = 5-6 measurements in winter 2012). Pre-dawn (Ψpd) and midday (Ψmd) leaf water potential values were measured in the field on freshly excised terminal shoots with a pressure chamber (Scholander et al. 1965; modified by Manofrigido, Portugal). All samples were measured immediately after cutting. Samples for Ψpd were collected and measured from 5:30 a.m. to 7:30 a.m., while samples for Ψmd were collected within an hour around noon.
Xylem water potential is an important indicator of the plant water status and reflects a balance between root water uptake and weather conditions (Bhaskara and Ackerly 2006). We measured the leaf water potential of vegetation to assess the relationship between water-source use and plant water status. Thus, we hoped to know how the rooting strategy influences the seasonal plant water status by integrating data of vegetation distribution, hydrology, and ecophysiology.
Midday water potential stands for the maximum water deficit that xylem and leaves may undergo (Pockman and Sperry 2000;Ackerly 2004), whereas predawn water potential shows the recovery capacity of every species during the night.
Statistical analyses
Data of perennial species in every site were analysed with a row-by-column contingency test (G-test of goodness-of-fit) to detect eventual statistical differences in species frequency among sites (following the method by Causton 1988).
Two-way and one-way nested MANOVAs were carried out to compare the differences in leaf water potential (Ψpd and Ψmd), xylem oxygen isotopic composition (δ18O) and the relative contributions of soil water sources to vegetation uptake (top%, mid%, deep%) across the beach-inland gradient in each season. To determine how these variables differed across the beach-inland gradient, season and position were considered as fixed factors and species as a random factor nested within the position where the plants were collected. Pairwise differences were tested using post hoc Tukey tests. Spearman's correlations between oxygen isotopic composition, water potential variables (predawn and midday water potential) and the percentage contributions of the sources were performed to examine the influence of plant water sources on plant water status. All statistical analyses were conducted using the SPSS 26 software package (Chicago, IL, USA). To achieve normality, the variables were transformed by ln (Ψpd and Ψmd) or square root (10 + δ18O, top%, mid%, deep%).
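As an illustrative sketch only (the authors worked in SPSS 26, not Python), the correlation step could be reproduced as follows; the data file and column names are hypothetical stand-ins for the field data:

```python
# Illustrative sketch: Spearman's rank correlation between xylem delta-18O
# and predawn water potential. "dune_plants.csv" and its columns are assumed.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("dune_plants.csv")           # assumed: one row per plant sample

rho, p = spearmanr(df["d18O"], df["psi_pd"])  # rank-based, robust to skewed data
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")

# Normality transforms used for the (M)ANOVAs in the text:
psi_ln = np.log(np.abs(df["psi_pd"]))         # ln of |Psi| (Psi values are negative)
d18o_sqrt = np.sqrt(10 + df["d18O"])          # sqrt(10 + delta-18O)
```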
The most likely contribution of water sources to coastal dune vegetation uptake was estimated using the Bayesian mixing model MixSIAR (Stock et al. 2018), which has been recommended for determining plant water sources (Wang et al. 2019). MixSIAR is a model framework in R (https://github.com/brianstock/MixSIAR) that allows creating and running Bayesian mixing models to analyse the uncertainties in biotracer data (in this study, the tracers were the δ18O values of xylem water). The model used the δ18O values of the xylem water of individuals ('mixture' or 'consumers'; raw data of the eight dune species separately), the water sources described in the methods (mean values), and the discrimination factor (which for water uptake is set to 0). MixSIAR incorporates source and discrimination (fractionation) uncertainty to assign the posterior probability distributions of source contributions to a mixture. We followed an a priori aggregation approach (Phillips et al. 2005), so that the combined sources were similar but also had some biological meaning. Accordingly, we combined the deep soil and GW sources (hereafter, deep soil) to reduce the number of sources and obtain less diffuse solutions (Phillips et al. 2005). The low isotopic values recorded for atmospheric water, -12.2 ± 0.23‰, compared to the xylem water values indicated that this water source apparently had no effect on the isotopic composition of the plants, so it was discarded from the statistical analysis. In summary, we analysed the four study seasons separately and narrowed the water sources down to three (topsoil, mid soil and deep soil + GW). We set the Markov Chain Monte Carlo (MCMC) burn-in size to 5 000 000. We used Gelman-Rubin and Geweke diagnostics to assess the convergence of the model. Gelman confidence intervals close to 1 and < 1.05 indicate model convergence, while the Geweke diagnostic is a standard Z-score based on the equality of two parts of the Markov chains: at convergence, the means of the chain segments should be the same, with ≤ 5% of variables in each chain outside ± 1.96 (Stock and Semmens 2016). In our study, convergence was satisfied with the 'very long' model run (1 000 000 chain length) in the four seasons.
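For readers unfamiliar with the convergence check, here is a minimal, self-contained sketch of the Gelman-Rubin R-hat statistic; MixSIAR computes this internally, so the function below is only a didactic stand-in:

```python
# Gelman-Rubin "R-hat" for m parallel MCMC chains of length n: values close
# to 1 (and below ~1.05) indicate that the chains have converged.
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (m, n) with m parallel MCMC chains."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    W = chain_vars.mean()                   # within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_plus = (n - 1) / n * W + B / n      # pooled posterior variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
demo = rng.normal(size=(3, 1000))           # three toy, well-mixed chains
print(gelman_rubin(demo))                   # ~1.0 at convergence
```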
Dune topographic profiles and plant species distribution
The maximum heights of the dune topography along the transects were 5, 4.5 and 3.8 m a.m.s.l., respectively (Fig. 2), the highest point being the foredune crest of the middle transect. As shown in Table 1, the frequency of species was statistically different between sites. The main contrast among areas was the higher vegetation cover at the inland site.
The lowest values of soil water content were recorded in July, ranging from 0.1% in the topsoil to 1.7% in the deep soil, while the highest values were recorded in winter, ranging from 2.1% in the topsoil to 6.9% in the deep soil.
Groundwater
The groundwater level generally followed the precipitation pattern (Fig. 1A): rainy periods implied higher water levels (1.7 m and 0.4 m in slack and inland, respectively) and dry periods lower levels (2.7 m and 1.7 m in slack and inland, respectively). EC followed a pattern inverse to the GW levels, with values increasing when the water table and precipitation diminished, and vice versa (Fig. 1B). EC was usually lower in PZsl than in PZin. During the dry period 2011-2012, EC in PZin increased more than twofold compared to previous years, reaching 10.1 mS cm−1 in winter. In contrast, EC in PZsl followed the pattern of the previous year, reaching its highest level (2.5 mS cm−1) at the end of summer.
Species level: plants and water sources δ18O

The δ18O value of ocean water ranged from 1.1‰ in spring and summer to 0‰ in winter (Fig. 3). The δ18O value of rainwater was more enriched during the warm season (spring = -3.4‰) than during the cold seasons (autumn = -5.4‰, winter = -5.3‰), which is consistent with the precipitation records expected for the cold season (Máguas et al. 2011). No rain was collected in summer. The isotopic composition of GW showed small differences throughout the study period, presenting neither a temporal (spring = -5.1‰, autumn = -4.9‰, winter = -3.9‰, summer = -3.7‰) nor a spatial pattern (PZsl = -4.5‰, PZin = -4.3‰; Fig. 3). The isotopic composition of atmospheric water at dawn ranged from -11.6‰ in summer to -12.7‰ in winter.
The different δ18O mean values of xylem water indicated a clear effect of seasonality on the isotopic signal of the water sources used by dune vegetation. Summer was the season with the most variable xylem δ18O signature, ranging from +11.5‰ in Ammophila plants growing in the embryo-dune to depleted values of -3.8‰ in Retama and Ammophila plants from the foredune and inland (Fig. 3C). In contrast, in autumn δ18O was similar throughout the soil profile, with low variability among sites and species, ranging from -2.3‰ in Ammophila plants from the beach to -5.98‰ in Retama plants from the foredune and inland (Fig. 3B).
The species with the most stable isotopic signal throughout the year, Retama, Juniperus and Helichrysum, were more abundant in inland and foredune, while the species with the highest variability, Ammophila, Achillea and Polygonum, were more abundant in the upper beach, embryo-dune and slack (Table 1).
Two-way nested ANOVA showed significant differences in δ18O among seasons and sites (P < 0.001 in both cases). Foredune and inland sites were different from the rest of the positions on the gradient, while the four seasons were significantly different from each other (Table 2).
MixSIAR results at community level
Topsoil water did not represent a relevant water source for the vegetation in any season of the year, nor in any gradient position, except for slack and foredune plants in autumn and winter, for which it could account for up to 20% of the uptake (Fig. 4).
Deep soil water appears to contribute to water uptake in plant communities across the entire beach-inland gradient in a relevant way (beach 47%, embryo-dune 47%, slack 62%, foredune 65% and inland 74%), although significant differences were found among sites (P = 0.001, Fig. 4). Water uptake from deeper layers increased with water deficit and following the beach-inland gradient. At the community level, deep soil water use increased seasonally from the wet to the dry seasons by 15% (mean of the five sites), while the mean increase of deep water use from the beach to the inland accounted for 27%.
Slack and embryo-dune were the sites with the largest differences between seasons (Fig. 4). In contrast, the lowest seasonal differences in water sources were recorded at the beach and inland sites (Fig. 4). In all the analyses, the confidence intervals of the Gelman diagnostics were below 1.01. Similarly, in the Geweke diagnostics < 5% of the variables fell outside ± 1.96, except in May (where the second chain had 9.2% of the variables outside ± 1.96) and November (where the second chain had 10%). In our mixing models, the largest uncertainty was obtained in autumn (Fig. 4) (with a variability of 11% in topsoil, 22% in mid soil, and 22% in deep soil) and the lowest in summer (mean variability between 5 and 10%).
Fig. 4 Annual and seasonal relative contributions (%) of water sources used by vegetation across the dune system gradient, from MixSIAR Bayesian mixing models (upper beach: Bch; embryo-dune: Emb; slack: Sla; foredune: Fore; inland: Inl). Letters denote significant differences between sites and error bars represent standard errors. Confidence intervals of Gelman diagnostics should be < 1.05 and were below 1.01 for all variables.

[…] and Euphorbia (40%), all of them species characteristic of the beach and embryo-dune sites. Retama and Juniperus were the species most dependent on deep soil layers (89% and 81%, respectively), both more abundant in the inner positions of the gradient. Additionally, the beach species Polygonum and Achillea displayed the greatest annual fluctuation, indicating a high level of plasticity in water uptake (Fig. 5). In contrast, the foredune and inland species, Juniperus and Helichrysum, presented the lowest variability. All species (except Polygonum, which only grew on the beach) slightly increased the use of water from mid and deep layers of the soil from the upper beach to inland (Fig. 7). In this regard, a significant correlation was detected between the proportion of deep soil water absorbed by plants and their position on the gradient (R² = 0.376, P < 0.001).

Water potential

Two-way nested ANOVA showed that differences in Ψpd and Ψmd among seasons and sites were significant (Table 2; P < 0.001 in both cases). Beach and embryo-dune sites, with the least negative values, differed from the rest of the positions in the dune gradient. Regarding annual differences, the four seasons were significantly different from each other. The highest leaf water potentials, both at predawn and midday, were recorded in autumn (Ψpd = -0.09 MPa, Ψmd = -0.93 MPa) and the lowest in summer (Ψpd = -1.25 MPa, Ψmd = -1.75 MPa) (Fig. 8).
Concerning species, the lowest Ψmd values and the broadest annual fluctuations were reached in Juniperus (-5.8 MPa). Considering all sites and seasons together, the correlation between δ18O and water potential, both midday and predawn, was negative, so that water potential became more negative when δ18O was more enriched (R² = 0.496 for δ18O-Ψpd and R² = 0.608 for δ18O-Ψmd; P < 0.004; Fig. 9A, B). However, looking only at the summer, the observed δ18O-Ψ relationship was the inverse: a positive correlation existed from beach to inland, since the decrease in Ψ was positively associated with δ18O depletion (R² = 0.827 for δ18O-Ψpd, P < 0.03, and R² = 0.648 for δ18O-Ψmd, P < 0.1). Furthermore, a positive relationship was detected between leaf water potential (Ψpd and Ψmd) and the proportion of topsoil water uptake (both P < 0.001) throughout the year.

Fig. 6 A Study plant species frequency (%) per zone across the dune system gradient. B Annual contributions of water sources (%) used by the study species from MixSIAR Bayesian mixing models (results obtained from data of water oxygen isotopes of plants, soil layers at three depths, and groundwater). Letters denote significant differences between species in the annual contributions to water uptake at that soil depth and error bars represent standard errors. Confidence intervals of Gelman diagnostics should be < 1.05 and were below 1.01 for all variables.
Discussion
In Mediterranean ecosystems, the origin of the water used by vegetation is expected to vary seasonally in a complex way (Antunes et al. 2018a, b). Accordingly, in the present study the water-uptake strategy and water status of the coastal dune plant communities shifted from wet to dry seasons and across the beach-inland gradient. We found both temporal and spatial patterns, but our data also showed that coastal vegetation exhibited a species-specific response in water uptake. Our data indicate that no single source supplies all species; rather, individual species track specific sources, and this species-specific response is modified by the species' location on the gradient from the upper beach to inland. There is considerable evidence in other studies that niche segregation of water-uptake sources is a process that varies at the species level at the ecosystem scale (Brinkmann et al. 2019). In our study, however, this species-specific water-uptake pattern is modulated by location, since the species showed a slight progressive increase in the proportion of water used from the deeper layers from the upper beach area to the inland dunes (Fig. 7).
Isotopic signal: plants and water sources
The potential water sources absorbed by plants changed seasonally and locally depending on the species. According to the xylem δ18O data (Fig. 3), in spring and summer the species closer to the ocean showed more inter-species variability (7.2‰) in water sources than the inland species (3.1‰), as indicated by the high diversity of xylem δ18O values in the species of the beach and embryo-dune sites. This higher variance of the δ18O signature in plants growing in the proximity of the ocean denotes a diversified water-uptake strategy among species, with species depending mainly on the top and mid layers (Achillea, Ammophila, Polygonum) and species relying more on the mid and deep layers (Artemisia, Euphorbia). In the case of Artemisia, this outcome is consistent with previous studies on other species of this genus (A. gmelinii and A. tridentata) that have described their capacity to explore deep soil layers, to switch uptake to shallower soil layers when water is sufficient, or even to supply water to upper soil layers at night (hydraulic lift) (Richards and Caldwell 1987; Lü et al. 2017; Wang et al. 2017).

Fig. 7 Annual contributions, in percentage, of water sources used by the species across the dune system gradient from MixSIAR Bayesian mixing models. Confidence intervals of Gelman diagnostics should be < 1.05 and were below 1.01 for all variables. Abbrev.: upper beach: Bch; embryo-dune: Emb; slack: Sla; foredune: Fore; inland: Inl.
By contrast, the assemblage of species growing inland (foredune and inland sites) showed a narrower range of water sources in spring and summer. The mean δ18O signature of the xylem water at these sites was lower and matched the deep soil water and groundwater signatures, indicating that these plants restrict their water uptake to deeper layers. These data suggest that dune plant communities may diversify their water-uptake strategies: some species living in areas closer to the sea can take up water from the upper soil horizons more successfully than those of the inland community, which share more similar water-uptake strategies.
Fig. 8 Predawn and midday leaf water potential across the dune system gradient in the four study periods (spring: A; autumn: B; summer: C; winter: D). Significant differences between sites for predawn and midday leaf water potential are shown in Table 2.

Regarding the positive δ18O values recorded in spring and especially in summer, and considering that they matched the isotopic composition of soil water, our data support that these plants use soil water that is highly evaporated due to high temperatures. Conversely, it is worth noting the positive δ18O values observed in Ammophila plants from embryo-dune sites in summer, 11.5 ± 2.92‰, as the isotopic composition of soil water at any depth (or of groundwater) could not explain the isotopic signature of the xylem water. Therefore, other factors must explain these enriched δ18O values, such as bark evaporation, exchange of evaporated leaf water with xylem water, a decline in the sap flow rate through reverse flow, uptake of dew water, or mixing of phloem and xylem water (Dawson and Ehleringer 1993; Gan et al. 2003; Alessio et al. 2004; Cernusak et al. 2005; Ellsworth and Williams 2007; Ellsworth and Sternberg 2015; Palacio et al. 2014). Although it is problematic to assess the fractionation processes involved without combining 2H-excess or δ2H data with the enriched 18O values recorded, according to our results the probable explanation for the enriched δ18O values in this perennial grass is fractionation during cuticular evaporation and redistribution of the enriched water (Dawson and Ehleringer 1993; Eller et al. 2013; Martín-Gómez et al. 2016; Poca et al. 2019). In this regard, the high δ18O data combined with the lowest Ψmd values of beach and embryo-dune Ammophila plants would support the above explanation of high evaporation at leaf level, which would allow the water potential to drop to those low values (Pivovaroff et al. 2016).
Water uptake pattern
We are aware that the isotopic composition of xylem water is modified by heterogeneity in the physical and physiological processes in soils and plants, and by the complexity of the hydrological systems involved in plant water uptake (Penna et al. 2018; von Freyberg et al. 2020; Beyer and Penna 2021). As a result, mixing models may carry a large uncertainty that must be taken into account when interpreting the results.
Even so, the MixSIAR results obtained are consistent with the different strategies across the beach-inland gradient defined by δ18O. In this way, the species most abundant in the inner positions of the gradient (Retama and Juniperus) were the most dependent on deep water, while those most abundant at the beach and embryo-dune sites (Polygonum and Achillea) used the largest proportion of shallow soil water. This spatial segregation in water uptake could be related to life form, as the two beach species are perennial herbs (chamaephytes), while the two inland species are large shrubs (phanerophytes). This would be in agreement with the two-layer hypothesis stated by Walter (1939) for savannas (revised by Ward et al. 2013) or with the niche partitioning found in other ecosystems (Weltzin and McPherson 1997; Ward et al. 2013). According to this model, grasses and herbs predominantly use water from the upper soil layers, while woody plants rely on deeper soil layers beyond the reach of grasses (Weltzin and McPherson 1997; Wang et al. 2017), especially during the dry season (Dawson and Pate 1996; Antunes et al. 2018a, b, c). Nevertheless, the spatial segregation in water uptake would be related not only to root distribution, but also to environmental conditions, water availability, and the physiological and hydraulic traits of the plants.
Even though we observed a species-specific water-uptake pattern, this was not a fixed trait; rather, it varied in response to the beach-inland gradient and to seasonality, so both factors modulated the specific response of the target species. In this way, the species increased their deep soil water dependence following the beach-inland gradient and with the increasing aridity as the summer period began. At the community level, the spatial gradient plays a more relevant role than the seasonal gradient in modifying the species-specific water-use strategy. This trend can be explained in two ways: first, by higher water competition inland due to the greater vegetation cover (Table 1), and second, by the mitigation of water stress by ocean spray at the beach and embryo-dune sites.
Regarding seasonality, during the wet seasons recent rainwater mixes within the upper soil layers, so plant roots acquired water mainly from these layers during the wet period (Dawson and Pate 1996; Amin et al. 2020). As the shallower soil layers dried out during the dry seasons, the dependence on deep layers increased, as suggested by the more negative δ18O values in xylem water and the higher contributions of a mixture of mid and deep soil layers to water uptake in spring and summer.
This shift to deeper soils for water uptake following the beach-inland gradient (Fig. 7) and the increasing aridity (Fig. 5) is in line with the observations of other authors, who reported that the depth of water extraction by plants is dynamic and that plants can shift water uptake from shallow to deep soil layers as water availability decreases (Ellsworth and Sternberg 2015; Pivovaroff et al. 2016; Antunes et al. 2019; Barbeta and Peñuelas 2017). Furthermore, plasticity in the root system is a major factor in plant acclimation to water deficit (Kano-Nakata et al. 2011). Our findings suggest an intra-community variation in water uptake depending on the environmental conditions (as in Barbeta et al. 2015; Voltas et al. 2015; Antunes et al. 2018c, 2019); in our study case, these were water availability and proximity to the sea. Consequently, plants along the entire spatial gradient presented small differences, and the water-uptake pattern was conditioned not only by water availability, but also by vegetation root distribution (Yadav et al. 2009; Ellsworth and Sternberg 2015).
Water potential and water-uptake strategies

Leaf water potential is an important variable of plant water status, since it indicates plants' tolerance of water deficit while maintaining physiological activity (Bhaskara and Ackerly 2006). In our study, the inland vegetation relied primarily on deep soil water, which should lead to better physiological performance; conversely, however, the summer Ψpd and Ψmd and the winter Ψmd of some inland plants were the lowest along the whole spatial gradient. The relationship between Ψ and δ18O across the dune plant community showed a different pattern in summer compared to the rest of the year. In summer, the most negative Ψ values were associated with the most depleted δ18O values of xylem water, suggesting deeper water sources.
Dependence on more reliable water sources (e.g. groundwater or deep soil layers) is normally associated with less negative water potentials. In fact, the δ18O-Ψ relationship usually behaves negatively, so that plants with a favourable water status show more depleted 18O signatures (Jackson et al. 1999; Otieno et al. 2006; Martín-Gómez et al. 2017), whereas enriched isotopic values (due to isotopic fractionation under high evaporation rates) correlate with more negative water potential values, as we found over the whole annual cycle (Fig. 9).
Nevertheless, under water-deficit conditions the opposite pattern can also be found: even though plants extract water from progressively deeper soil horizons as the summer drought advances, the lowest Ψ can be measured under these conditions. This is the case of Pivovaroff et al. (2016), who showed that deeper soil water uptake is important in allowing midday Ψ to reach more negative values. Zunzunegui et al. (2018) found the same water-use strategy in argan trees (Argania spinosa): under drought conditions, trees extracted water from deeper soil layers and the most depleted δ18O values were coupled to the lowest Ψpd. Overall, we show that combining physiological measurements with traditional isotope tracing can reveal mechanistic insights into plant responses to changing environmental conditions (Nehemy et al. 2021). In summer, when plants from the foredune and inland sites were the ones using the highest proportion of deep soil water, they were also the ones with the lowest water potentials. Differences among sites could be attributed to differences in water-use strategy. The lowest-Ψ response of GW-dependent plants might be explained by the hypothesis put forward by Miller (1975, 1978), who stated that deep-rooted plants are more sensitive to water stress. In this sense, studying a Mediterranean shrub species, Zunzunegui et al. (2000) found that Halimium halimifolium populations depending on shallow soil water responded better to drought than groundwater-dependent ones. Furthermore, Antunes et al. (2018a) found that plants that were more dependent on groundwater, when subjected to severe water deficit, must readjust their root architecture and water-extraction strategies, involving physiological adjustments, to survive the drop in the water table.
In autumn and winter, the physiological responses of the foredune and inland plants were significantly different. Even though the plant species at both sites were the same (Table 1), Ψpd and Ψmd were lower in the inland-site plants. This different behaviour of the same species at different positions on the gradient could be explained by differences in root access to deeper soil layers and to more stable water sources such as GW. The fact that GW becomes brackish in a dry year would affect phreatophytic vegetation, which is less used to salty water than the plants living closer to the sea. The reason could lie in the distance to and accessibility of GW, with inland plants using groundwater with high EC, whereas plants on the foredune crest also use water from deep soil layers, which is less saline. Groundwater salinization was proposed to have occurred in this same area in a previous study (Esquivias et al. 2014), probably due to ocean-water intrusion. This is consistent with the high EC and δ18O values recorded in GW at the inland site during low-rainfall seasons in the present study, as a positive relationship exists between δ18O and EC (Esquivias et al. 2014). According to the seasonal dynamics of GW in the area, the only possible freshwater source is rainfall infiltration into the water table. The GW in the two piezometers was not homogeneous, the EC in PZsl being lower than in PZin, thus implying differential phreatic recharge. Infiltration of precipitation from both the embryo-dune and the foredune would be the main source for PZsl, whereas ocean-water infiltration would be more important for PZin when precipitation is scarce. Further research is needed to explore the role of seawater in coastal dune vegetation.
To summarize, the water-source dynamics of Mediterranean coastal dune vegetation change according to spatial and temporal patterns. During water scarcity, the water sources used by vegetation along the beach-inland gradient depend less on the shallow and mid soil layers; in addition, this response pattern is similar to, and overlaps with, the effect of the beach-inland gradient. It is noteworthy that the vegetation of the lower zones, slack and inland, maintains a marked dependence on deep water (from soil or GW) throughout the year, especially in the case of the inland vegetation. In contrast, the zones closest to the beach shift water sources seasonally from the top to the mid and deep soil layers. Antunes et al. (2019), studying coastal vegetation, found that in dry periods (no precipitation and a deeper phreatic level) vegetation generally relied on deeper soil horizons and presented a greater evenness in water status. Furthermore, Bermúdez and Retuerto (2014) demonstrated the diversity of dune vegetation, as species coexisting on the front dune exhibited wide variation in relevant functional traits concerning water use.
Concluding remarks
Our results indicate that three factors determine the proportions in which the water sources available to vegetation are used in coastal systems: the species composition of the community, the distance to the sea, and seasonality (precipitation and temperature). The interactions identified here between species-specific strategies, spatial gradients and weather dynamics will help address how dune ecosystems will be affected by future scenarios of global change. Proximity to the beach is of primary relevance in determining the structure and function of Mediterranean coastal dune vegetation, especially during dry and warm seasons, and even more so in dry years. Predictions of global climate change indicate that drought periods will become longer and more severe (Giorgi 2006; Sheffield and Wood 2008), thus increasing the stress factors for dune vegetation. It will be useful to identify a threshold of plant community survival through the quantification of each species' tolerance of drought and its location on the beach-inland gradient.
The present study indicates that a decrease in precipitation could put the continuity of dune vegetation at risk through two mechanisms: altering root distribution and altering community composition. Maintenance of the perennial species of coastal dune vegetation is critical, as they are usually the most important species in building dunes, binding sediments and reducing erosion (Hesp 2002; Feagin et al. 2005). Furthermore, this perennial vegetation provides permanent cover of the dunes, and its eventual loss would therefore certainly increase erosion rates (Gracia et al. 2018). Our study reveals that plant species closer to the sea would respond better to water scarcity than the inland community. Therefore, species from the upper beach and the embryo-dune would be expected to expand under projected climate change. The results presented reveal new insights into how coastal ecosystems may be affected by changes in precipitation patterns as a result of climate change.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2022-05-14T15:22:06.125Z | 2022-05-12T00:00:00.000 | {
"year": 2022,
"sha1": "93fdb0fd4df30ff34e7efa9882780033e6975b29",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11104-022-05443-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "4ddbac6f584b45250ed79b136e331bcdd09e2007",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
221379836 | pes2o/s2orc | v3-fos-license | Internet Financial News and Prediction for Stock Market: An Empirical Analysis of Tourism Plate Based on LDA and SVM
Internet financial news plays an important role in stock market forecasting. This paper discusses the relationship between the content of Internet financial news and stock market returns using text mining and machine learning techniques. The Latent Dirichlet Allocation (LDA) model is used to analyse the Internet financial news, the support vector machine (SVM) algorithm is used to predict the trend of the sector, and a trading strategy is then constructed. The results show that introducing the tourism topic distribution information from Internet financial news can effectively improve forecast accuracy and thereby increase the return on investment, especially when the stock market is in a volatile period. To sum up, the information contained in Internet financial news is of significant value for stock market prediction.
I. INTRODUCTION
Nowadays, the Internet has become the main source of information for the public, especially the Internet finance module, which has become an indispensable way for investors to obtain market information [1]. In this context, the extraction and mining of Internet financial news is of great significance for discovering market conditions. This paper takes the tourism sector as the research object and uses text mining techniques to obtain more than 80,000 financial news items, published between November 16, 2011 and July 11, 2015, from the financial news column of the Phoenix Finance website. The Latent Dirichlet Allocation (LDA) model is used to analyse the Internet financial news in depth; then, combined with the historical information of the stock market, the support vector machine (SVM) algorithm is used to predict the trend of the sector, and finally a trading strategy is constructed.
Compared with the existing research on the relationship between financial news and stock price prediction [2], this paper makes the following contributions. First, we focus specifically on the tourism sector and combine the historical information of the stock market with tourism-related news from financial websites to build the prediction model. Second, by comparing the accuracy of the prediction model before and after introducing Internet financial news information, the role of Internet financial news can be demonstrated objectively [3]. Third, the data span is longer, addressing a shortcoming of existing studies. Fourth, we conduct a detailed study of different stock market stages (rising, falling, and volatile), exploring the role of Internet financial news in predicting the stock market trend at each stage.
A. Latent Dirichlet Allocation (LDA) Model
This paper selects the Latent Dirichlet Allocation (LDA) model to extract the hot topics in Internet financial news. As a probabilistic generative model, the LDA model can map high-dimensional feature vectors to a low-dimensional semantic space. Since a text is composed of different topics, and topics are the main ideas composed of different words, the LDA model can effectively identify the topic information contained in large-scale document collections [4]. Fig. 1 shows the LDA model diagram, where the solid points represent latent variables such as the word distributions of the topics, the hollow points represent latent variables such as the topic distribution parameters of the model, and the rectangles represent repeated sampling: the outer rectangle represents the corpus, and the inner rectangle represents the repeated sampling of topics and words for each document. The relevant symbols are defined as follows:

1) For a text, the basic data unit is the feature item, here a word of the text, with {1, …, V} indexing the vocabulary. The v-th word in the vocabulary can be expressed as a V-dimensional unit vector.

2) A document is a sequence of N words, denoted w = (w1, w2, …, wN), where wn is the n-th word in the sequence.
3) D denotes a collection of M texts, i.e., a corpus; the text set can be written D = {d1, d2, …, dM}.

The premise of classifying text with LDA is determining the distribution of the latent variables, that is, the process by which the latent topics generate a document. In the LDA model, each of the M documents is generated as follows:

1) First, choose the number of words N in the document from a Poisson distribution: N ~ Poisson(ξ).
2) Choose the topic probability distribution vector θ for the text from a Dirichlet distribution: θ ~ Dir(α).
3) For each of the N words wn: a) choose a topic zn ~ Multinomial(θ) from the topic distribution; b) choose the word wn from the conditional probability distribution p(wn | zn, β).
Then, given the parameters α and β, the joint distribution of an article is:

p(θ, z, w | α, β) = p(θ | α) ∏_{n=1}^{N} p(zn | θ) p(wn | zn, β).

Integrating over θ and summing over z, we obtain the marginal probability of an article:

p(w | α, β) = ∫ p(θ | α) ( ∏_{n=1}^{N} Σ_{zn} p(zn | θ) p(wn | zn, β) ) dθ.

Finally, based on the marginal probability of each article, the joint probability of the entire corpus is:

p(D | α, β) = ∏_{d=1}^{M} p(wd | α, β).

The model is solved with Gibbs sampling, which yields the posterior distributions of the topic distribution and the word distribution and thereby determines the parameters.
B. Support Vector Machine (SVM)
Support Vector Machine (SVM) is a data mining technique based on statistical learning theory and is essentially a binary classification model. It aims to maximize the margin between categories and to automatically find the support vectors with the strongest ability to distinguish the categories.
Figure: schematic diagram of the support vector machine separating hyperplane.
Suppose the training set of samples is X = {x1, x2, …, xn}, xi ∈ R^d, where d is the dimension of the training sample space, and the corresponding labels of X are Y = {y1, y2, …, yn}, yi ∈ {1, −1}. We need to find a discriminant function g(x) = w·x + b such that sgn(g(x)) ∈ {−1, 1} for any x in X; the classification margin can be described as 2/||w||. To maximize the margin between categories, ||w|| should be as small as possible. The problem above can therefore be written as the optimization:

min (1/2) ||w||², subject to yi (w·xi + b) ≥ 1, i = 1, …, n.

Both the objective and the constraints are convex functions, so the SVM optimization problem amounts to solving this quadratic convex program; in the two-class problem, its global optimum is the SVM solution. With Lagrange multiplier optimization, the decision function is:

f(x) = sgn( Σ_{i=1}^{n} a*_i yi (xi · x) + b* ),

where a* and b* are the classification hyperplane parameters and (xi · x) denotes the inner product of the two vectors. For nonlinear problems, the SVM transforms the nonlinear problem into a linear one by means of a kernel function, mapping the low-dimensional space of the nonlinear problem to a high-dimensional space in which the problem becomes linearly separable. The problem then takes the form:

f(x) = sgn( Σ_{i=1}^{n} a*_i yi K(xi, x) + b* ).
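As a minimal sketch of the kernel SVM described above (not the authors' implementation), scikit-learn's SVC exposes the same decision function; the toy data below are placeholders:

```python
# Kernel SVM on a nonlinearly separable toy problem: the RBF kernel plays the
# role of K(x_i, x) in the decision function above.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))                  # toy 2-D samples
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)     # labels not linearly separable

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)
print(clf.predict(X[:5]))                      # signs of the decision function
print(clf.support_vectors_.shape)              # the learned support vectors
```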
A. Financial News Text Source and Pretreatment
The research object of this paper is Internet financial news, and we use text mining techniques to convert a large amount of unstructured text into structured data that computers can process. We mainly use Python to collect 80,000 financial news items from the securities news column of the Phoenix Finance website, spanning from November 16, 2011 to July 11, 2015. Special noise URLs were also handled during the crawl.
B. Extraction of Web Page Text Information
On the basis of the crawled financial news, the text must be preprocessed to extract information effectively. This paper focuses on three key processes and techniques.
1) Text segmentation
Compared with single characters, words and phrases carry more complete semantic information and can express the content of a text more accurately. Therefore, we apply word segmentation to extract valid information from the Chinese text. The word segmentation system used in this paper is ICTCLAS (Institute of Computing Technology, Chinese Lexical Analysis System), one of the best-performing systems for Chinese word segmentation. The system includes multiple modules such as named entity recognition, Chinese word segmentation and part-of-speech tagging. This paper mainly uses the segmentation and part-of-speech tagging functions, ultimately retaining content words such as verbs, nouns, adjectives and quantifiers.
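ICTCLAS is a standalone system; as a hedged stand-in for the same segmentation-plus-POS-filtering step, the widely used jieba library can be called as below. The example headline is invented, and the retained POS prefixes (nouns n, verbs v, adjectives a, quantifiers q) mirror the selection described above:

```python
# Segment a Chinese sentence and keep only content words by POS prefix.
import jieba.posseg as pseg

text = "旅游板块今日大幅上涨"                 # example headline, not from the corpus
kept = [w for w, flag in pseg.cut(text) if flag[0] in ("n", "v", "a", "q")]
print(kept)
```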
2) Feature expression and key techniques for dimensionality reduction
After word segmentation, because the structure of the text is complicated, the dimension of the resulting word collection is very high and features cannot be extracted from it directly. It is therefore necessary to represent the content of a text with as few features as possible. The dimensionality reduction method selected in this paper is TF-IDF (term frequency-inverse document frequency), which is based on document frequency. TF-IDF not only achieves a high degree of accuracy but also weights the importance of each feature.
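A minimal sketch of the TF-IDF weighting step with scikit-learn; the documents and the feature cap are placeholders, not the paper's settings:

```python
# Turn a small document collection into a TF-IDF matrix; capping the number
# of features is a crude form of dimensionality reduction.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["tourism stocks rise", "market falls on policy news", "tourism demand grows"]
vec = TfidfVectorizer(max_features=5000)
X = vec.fit_transform(docs)                 # sparse (n_docs, n_features) matrix
print(X.shape, vec.get_feature_names_out()[:5])
```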
3) Hot topic recognition of financial news
The input is the news texts after vectorization and dimensionality reduction. This paper uses LDA to output the probability of each topic for each text together with the corresponding high-weight keywords, and extracts hot topics from the financial news. The results show that the extracted topics have clear meanings. Table I and Table II list some of the keywords corresponding to the travel theme and the daily occurrence probability of the topic.
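A hedged sketch of the topic-extraction step using gensim's LDA implementation; the toy corpus and number of topics are assumptions, not the paper's configuration:

```python
# Fit an LDA topic model on tokenized documents and read off per-document
# topic probabilities and the high-weight keywords per topic.
from gensim import corpora, models

texts = [["tourism", "hotel", "visa"],
         ["stock", "market", "rally"],
         ["tourism", "flight", "holiday"]]
dictionary = corpora.Dictionary(texts)
bow_corpus = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(bow_corpus, id2word=dictionary, num_topics=2, passes=10)
print(lda.print_topics())                        # high-weight keywords per topic
print(lda.get_document_topics(bow_corpus[0]))    # per-document topic probabilities
```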
C. Stock Data Source and Pretreatment
This paper selects the Shanghai-Shenzhen (CSI) 300 Index to reflect the overall situation of the stock market. The market return is r_{m,t} = 100 × (ln P_t − ln P_{t−1}), where P_t is the closing price of the CSI 300 Index on day t. The tourism sector return is r_{s,t} = 100 × (ln P'_t − ln P'_{t−1}), where P'_t is the closing price of the tourism sector index on day t.
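The log-return formula above, computed directly with numpy on a placeholder price series:

```python
# r_t = 100 * (ln P_t - ln P_{t-1}); `close` is an invented price series.
import numpy as np

close = np.array([3520.0, 3544.8, 3531.2, 3560.4])
r = 100 * np.diff(np.log(close))
print(r)
```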
IV. EMPIRICAL ANALYSIS
First, we select the data from November 16, 2011 to July 11, 2015. According to the movement between the closing price of the tourism sector index and the previous day's closing price, that is, the sign of r_{s,t}, each day is labelled as a rise of the tourism sector (denoted 1) or a decline (denoted -1). We divide the data into two parts: 70% of the trading days form the training set and the remaining 30% the prediction set. The rise or fall of the tourism sector serves as the classification label, and the previous day's CSI 300 return serves as the classification basis. The prediction accuracy of this SVM model is 50.9506%. After the topic probability information is added as a further basis for discrimination, the accuracy increases to 54.5113%.
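A sketch of this prediction setup with illustrative arrays (not the paper's data): labels are the sector's daily rise/fall, features are the previous day's CSI 300 return plus the same-day tourism-topic probability, and the first 70% of days train the classifier:

```python
# Build features/labels, split chronologically 70/30, and score an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

r_sector = np.random.randn(500)                    # placeholder sector returns
r_market = np.random.randn(500)                    # placeholder CSI 300 returns
topic_p  = np.random.rand(500)                     # placeholder topic probabilities

y = np.where(r_sector[1:] > 0, 1, -1)              # rise = 1, fall = -1
X = np.column_stack([r_market[:-1], topic_p[1:]])  # lagged market + same-day topic

split = int(0.7 * len(y))                          # first 70% of trading days
clf = SVC(kernel="rbf").fit(X[:split], y[:split])
print(accuracy_score(y[split:], clf.predict(X[split:])))
```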
In addition, the paper divides the stock data into three segments according to the market trend for a more detailed study. From December 5, 2012 to February 8, 2013, the overall trend of the market was rising. At this stage, the prediction accuracy based on market historical information alone is as high as 81.6092%; after adding the Internet financial news information, the forecast accuracy rises slightly to 82.7586%. Fig. 2 shows the results of the discriminant analysis at this stage. The classification label of the blue sample points is rising (denoted 1), and that of the red sample points is falling (denoted -1). The abscissa of each sample is the standardized CSI 300 return of the previous day, and the ordinate is the probability of tourism-related topics on the Phoenix Finance website. It can be seen that when the whole stock market is rising, the sector return and the market return are closely linked, and the introduction of news information improves the stock market forecast to some extent. In the declining stage, using only the market historical information to predict the tourism sector, the accuracy reaches 50.7937%; with the Internet financial news information, the accuracy improves to 55.5556%. This paper also selects a volatile segment for study, from March 17, 2014 to July 30, 2014. The results show that without the Internet financial news information the prediction accuracy is as low as 41.3793%, while after adding the information the accuracy for the tourism sector rises sharply to 62.069%. In general, after adding the same-day topic probability information of the tourism sector from Internet financial news, the forecast accuracy for the day's tourism sector increases. The segmented forecasts show that in a bull or bear market stage, the previous day's market movement already provides a strong basis for predicting the day's return, and accuracy improves only slightly after adding news information. In the volatile segment, the market trend is unclear and the stock market fluctuates, so the ups and downs of the sector cannot be predicted effectively from the previous day's market return alone; here, the introduction of Internet financial news information greatly improves the forecasting performance.
From the trained SVM model (as shown in Fig. 3 and Fig. 4), it can be seen that the sample points of a sector uptrend (shown in blue in the figures) are mostly concentrated where the topic probability is high: when there are many tourism-related items in the Internet financial news on a given day, the tourism sector tends to rise. Based on this phenomenon, further study can investigate the mechanism by which Internet financial information affects the stock market.
After adding the Internet financial news information, the SVM prediction model proposed here reaches an accuracy of over 55% in forecasting the intraday sector trend. This is a very meaningful result for practitioners in quantitative trading who rely on the law of large numbers: a higher prediction accuracy can bring considerable profits, and an investment strategy can be constructed on this basis.
A. Investment Strategy
One of the most important purposes of studying the stock market is to study trading strategies. When a positive return is expected, a buy is executed; when the expected return is negative, no trade is made or the position is closed, reducing economic losses. Investment Strategy 1: this paper studies whether to buy at the previous day's close and sell at the current day's close; transaction costs are not considered here. Strategy 1 is constructed using only the historical return information of the market. If the closing price of the CSI 300 Index on Tuesday is higher than that on Monday, the market's return on Tuesday is considered to be greater than 0; investors therefore buy the tourism sector portfolio at Tuesday's close and sell it at Wednesday's close, and otherwise do not trade. The data from 64 trading days from March 2014 to July 2014 were selected as training samples, and 29 trading days in 2014 were used as prediction samples. The final result is shown in Table III.
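A minimal sketch of evaluating such a signal rule (transaction costs ignored, as in the text); the signals and returns below are invented:

```python
# Hold the sector from the previous close to today's close whenever the
# signal predicts a rise; log returns simply add across traded days.
import numpy as np

def cumulative_return(pred, sector_log_ret):
    """pred: +1/-1 signals; sector_log_ret: same-day log returns (in %)."""
    traded = sector_log_ret[pred == 1]
    return traded.sum()

pred = np.array([1, -1, 1, 1, -1])
ret  = np.array([0.8, -1.2, 0.3, -0.4, 0.9])
print(f"cumulative return: {cumulative_return(pred, ret):.2f}%")
```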
B. Validation of Investment Strategy
As can be seen from Table IV, the classification accuracy of the trained model on the prediction set reaches 62.069%, significantly better than the 41.379% accuracy based solely on the historical market return information. During the forecast period, using only the historical market return information, the cumulative return is 6.13%, while under Investment Strategy 2 the cumulative return is 12.12%. Moreover, the return obtained by investing in the sector according to this strategy is 9.41 percentage points higher than the year-on-year increase of the Shanghai index in the same period, which indicates that Internet financial news has great application value for constructing investment strategies.
VI. CONCLUSION
With the economic development brought about by reform and opening up, people's investment awareness has gradually changed, and stocks have become an important part of Chinese investment and financial management [5]. At the same time, the influence of the news media on the stock market is also growing. In this context, quantifying text information to analyse the stock market has important theoretical and practical value.
This paper constructs a model to predict the rises and falls of the sector, taking into account both the historical information of the stock market and Internet financial news, and finds that the news has a significant effect on improving the accuracy of the forecasting model. The paper also discusses the role of Internet financial news in forecasting sector trends at different market stages. The results show that, both overall and in the segmented study, the same-day information on the distribution of tourism-related topics in Internet financial news improves the accuracy of forecasting the rises and falls of the tourism sector; especially when the stock market is in a volatile period, the sector-related financial news information greatly improves forecast accuracy. Finally, an investment strategy is constructed based on the prediction model that incorporates Internet financial news topics.
The future research can further study the mechanism and path of Internet financial news influencing stock market based on the existing research technology [6], [7]. At the same time, this paper only mines qualitative news ontology information, and there are still many factors affecting the trend of stock price, so this paper hopes to expand the scope of text mining and analysis in the future research, so as to improve the accuracy of prediction model. | 2020-04-30T09:10:41.551Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "d01c9c6eaf67dde64a13192f42e956bcae823927",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.12720/jait.10.3.95-99",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "da7402d45c6048302ac297583c750764d47c97a8",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"extfieldsofstudy": []
} |
249227763 | pes2o/s2orc | v3-fos-license | The study of statistical features of the evolution of complex physical systems using adaptive machine learning methods
In this work, we discuss various machine learning methods and their implementation in the field of complex physical systems for the analysis of experimental data. These methods (classical machine learning, neural nets and deep learning) can greatly outperform classical analysis methods by giving the algorithm the ability to "learn" and to perform tasks while adapting to the data provided. Neural net and deep learning approaches are used to search for hidden patterns in the input data that cannot be analysed using common methods. This variety of methods can be applied to study collective phenomena in plasma and thermonuclear fusion on the basis of experimental data from physical experiments, with a higher level of performance than classical approaches.
Introduction
Nowadays, one of the most relevant and remarkable areas in physics is the physics of complex systems. Such systems are widely distributed in nature, on scales from a group of cells to plasma in stars and stellar cluster systems. A complex system is a composite object whose parts are connected by certain relationships, as a result of which the system acquires new properties that cannot be reduced to the properties of its parts. Complex systems are distinguished by the following properties: non-linearity, limited predictability, evolutionary dynamics, self-organization, openness, and adaptability [1,2].
Machine learning is one of the new and actively developing methods of analysis, combining approaches that can "learn" from the received data, which allows a wide range of different tasks to be performed. Machine learning can be used to solve problems of detection, recognition, prediction, forecasting, diagnostics, and optimization. In the physics of complex systems, machine learning methods are widely used to study the structure of complex systems and to analyse the dynamic behaviour of nonlinear physical complex systems: forecasting the future evolution of the systems and establishing causal relationships [3].
Application of machine learning methods in the analysis of the dynamic behavior of complex physical systems
In this paper, we discuss the prospects for applying the latest methods of machine learning, neural networks and deep learning to the analysis of experimental data from physical experiments in modern science. Figure 1 shows the relationship between machine learning and other learning methods within artificial intelligence technologies [4]. Machine learning is a rapidly growing area and one of the latest technologies used in the modern information technology field. Machine learning is a set of techniques that allow computer algorithms to learn [5]. It is based on the inputs and required outputs of the algorithms, some of which are modelled on the way humans carry out a task [6]. Machine learning is usually classified into the following categories: reinforcement learning, ensemble methods, supervised learning and unsupervised learning. In reinforcement learning, the algorithm must not only analyse data but also act independently in real conditions; the task is to minimize errors, for which it is rewarded with the opportunity to continue working without obstacles and failures [7]. Ensemble methods are groups of algorithms that use several machine learning methods at once and correct each other's errors. Supervised learning is a class of algorithms in which the method is supplied with example inputs along with the required outputs, allowing it to learn a rule that maps inputs to outputs [8]. In unsupervised learning, by contrast, only the inputs are supplied, and the learning algorithm must determine the structure of the input and act on previously unknown characteristics [9].
Machine Learning methods
These algorithms are widely used by scientists in different areas, for example in the construction of accurate density functionals for realistic molecular systems [10]. Figure 2 shows the principle of the authors' method for directly learning the Hohenberg-Kohn map. In the field of complex physical systems, machine learning methods are also used in plasma physics. In [11], supervised and unsupervised learning methods are used to predict matrix-effect severity and analyte recovery in plasma optical emission spectrometry, using non-analyte signals as inputs. The authors state that the efficiency of collecting and interpreting responses from plasma species may be improved by this analysis workflow. Moreover, machine learning methods are widely used in thermonuclear fusion tasks. In [12], the authors compare two machine learning tools, Gaussian Mixture Models and Support Vector Machines, for the classification task of distinguishing neutrons from gamma-rays in thermonuclear fusion. The authors report that the two approaches are in very good agreement and greatly outperform previously used classification algorithms by providing the probability of each example being a neutron or a gamma-ray.
Neural Nets and Deep Learning methods
The next group of methods comprises neural nets and deep learning approaches, a more complex class of learning algorithms than classical machine learning. Neural nets are designed in a way that resembles how neurons work in the human brain [13]. Neurons form layers through which the signal passes consecutively. The layers are linked by neural connections, channels through which data is transmitted. Each channel has its own "weight", a parameter that affects the data it transmits. The input layer takes the input, which is then processed in the hidden layer, and the output layer sends out the calculated result. Neural nets are a powerful tool for image, speech and signal processing and are widely used in modern science [14]. Deep learning is a set of learning algorithms that can be used to learn complex forecasting models, e.g., multi-layer neural networks with many hidden layers. Figure 3 shows the variety of neural nets and deep learning methods.
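The layer-and-weight picture described above can be written down in a few lines of plain numpy. This is a minimal forward pass only, with randomly initialized weights and no training, intended purely to illustrate how a signal passes consecutively through weighted channels; all sizes are arbitrary.

```python
# Minimal forward pass of a tiny neural net: input -> hidden -> output.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # channels from 3 inputs to 4 hidden neurons
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))   # channels from hidden layer to 1 output
b2 = np.zeros(1)

def forward(x):
    """Pass a signal consecutively through the layers."""
    hidden = np.tanh(W1 @ x + b1)  # each channel's weight scales its input
    return W2 @ hidden + b2

print(forward(np.array([0.1, -0.2, 0.3])))
```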
Both sets of methods have recently been used in the physics of complex systems for the analysis of experimental data. For example, in [15] the authors develop a new method to produce a predictive model for turbulent fluxes in drift-wave turbulence systems, which can be applied to forecasting turbulence behavior in plasma. The authors use a supervised deep learning methodology that infers a mean-field model for the cross-phases from direct numerical simulation. As a result, they develop a model for the drift-wave/zonal flow system directly from numerical solution data. The method reproduces analytical results for the Reynolds stress, such as negative viscosity effects and the terms regularizing it.

Figure 3. Set of the neural nets and deep learning methods.
Conclusion
In this paper, we discuss recently developed methods for analyzing data of various nature: machine learning [16], neural nets and deep learning approaches. We describe the different approaches, their meaning and their implementation in modern science. These methods are applied in different fields, including information technology and physics. In the physics of complex systems, the methods are actively used to study experimental data, e.g., in plasma physics and thermonuclear fusion, in order to replace classical analysis methods owing to their higher level of performance.
These approaches are not the only ones that can outperform classical analysis methods: in the study of collective phenomena in physical complex systems, the Memory Functions Formalism [17] and Flicker-Noise Spectroscopy [18] can also show significant results. These methods can be applied to analyze autocorrelations, cross-correlations and statistical memory effects in recorded experimental data [19], which reflect the manifestation of collective phenomena in plasma and during thermonuclear fusion [20,21]. | 2022-06-01T20:07:46.208Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "1dd5c8734c4aa2ba4b9b37518e5a977ebca260d8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/2270/1/012042",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1dd5c8734c4aa2ba4b9b37518e5a977ebca260d8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
236154787 | pes2o/s2orc | v3-fos-license | On the Black Hole Acceleration in the C-metric Space-time
We consider the C-metric as a gravitational field configuration that describes an accelerating black hole in the presence of a semi-infinite cosmic string, along the accelerating direction. We adopt the expression for the gravitational energy-momentum developed in the teleparallel equivalent of general relativity (TEGR) and obtain a possible explanation for the acceleration of the black hole. The gravitational energy enclosed by surfaces of constant radius around the black hole is evaluated, and in particular the energy contained within the gravitational horizon is obtained. This energy turns out to be proportional to the square root of the area of the horizon. We find that the gravitational energy of the semi-infinite cosmic string is negative and dominant for large values of the radius of integration. This negative energy may explain the acceleration of the black hole, that moves towards regions of lower gravitational energy along the string.
Introduction
The C-metric is a very curious gravitational field configuration. It was first understood as a solution of Einstein's equations that describes an accelerating black hole. Nowadays, it is clear that the line element describes not only a pair of black holes accelerated in opposite directions, but a sequence of pairs of black holes. A semi-infinite cosmic string is assumed to be attached to each one of the black holes, along which the black holes are accelerated. The acceleration of the black holes is generally supposed to be due to these semi-infinite cosmic strings, but how exactly these strings act on the black hole is not clear.
In this article we present an explanation for this acceleration. The explanation is based on energy considerations. We consider the expression for the gravitational energy-momentum established in the teleparallel equivalent of general relativity (TEGR), and evaluate the gravitational energy enclosed by surfaces of constant radius R, such that R lies between the gravitational and acceleration horizons. This energy expresses both the energy of the black hole and of the cosmic string. The gravitational energy density due to the semi-infinite cosmic string only is negative (the semi-infinite cosmic string is characterised by both the mass parameter and the acceleration parameter), and is dominant for large values of the radius of integration R. The total gravitational energy (black hole plus the cosmic string) is negative and decreases with increasing values of the radius R, as we will show. Assuming that the physical systems in nature move towards states of lower energy, the results obtained here suggest that the black hole is dragged to regions of lower energy state along the semi-infinite cosmic string. The absolute value of the (negative) gravitational energy density of the cosmic string increases along the negative z axis. We also obtain the energy contained within the gravitational horizon. This is the energy that cannot escape from the horizon, and is related to the irreducible mass of the black hole. It turns out that this energy is proportional to the square root of the area of the black hole horizon.
We will establish a set of tetrad fields adapted to observers that accelerate together with the black hole. The dynamical features of this frame are investigated by means of the acceleration tensor. This tensor will be reviewed in Section 3, and applied to the C-metric black hole in Section 4. The outcome of the analysis of the acceleration tensor will help to understand that the semi-infinite cosmic string is indeed the configuration that accelerates the C-metric black hole, as demonstrated in Section 6.
Description of the C-metric
The C-metric is an exact vacuum solution of Einstein's equations that depends on three parameters: $m$, $\alpha$ and $C$. When $\alpha = 0$ and $C = 1$, the metric describes the Schwarzschild solution. The parameter $\alpha$ is related to acceleration, and $C$ to a deficit angle, or conical singularity. The exposition below is based on the presentations of Refs. [1,2]. In the latter references, one finds the history of this solution, starting with Levi-Civita in 1918 [3] and ending with the work of Ehlers and Kundt [4] (see also Ref. [5]). The metric is interpreted as describing an infinite sequence of alternating black holes and asymptotically flat regions [1], and each asymptotically flat region is related to a pair of causally disconnected black holes. The black holes in each pair are supposed to accelerate away from each other along an axis of symmetry of the space-time. This axis of symmetry contains a conical singularity that may be physically interpreted as a cosmic string, which is related to the acceleration of the black hole. One does not expect this multitude of pairs of black holes to be realized in nature. The very concept of the acceleration of a black hole is not straightforward to grasp. Nevertheless, this metric will be used here to model the acceleration of a single astrophysical black hole within a physical region that can be identified with the surroundings of an ideal observer. Such a model could eventually describe features of the outcome of the merger of two black holes, mergers that are presently considered as sources of the recently observed gravitational waves. However, here we will be mostly interested in the conceptual issues related to the characteristics of an accelerated black hole.
It is easy to see that when α = 0, we arrive at the line element of the Schwarzschild space-time. The angular coordinate Φ varies in the interval −Cπ < Φ < Cπ.
By drawing a small circle around the half-axis $\theta = \pi$, with $(t, r)$ constant, we obtain the circumference given in Ref. [1], which implies the existence of a conical singularity; doing the same around the half-axis $\theta = 0$, we find a circumference which also implies the existence of a conical singularity, but with a different conicity. We choose to eliminate the excess of angular variation around the upper half-axis $\theta = 0$ by fixing the constant $C$ to satisfy $C = (1 + 2\alpha m)^{-1}$.
In this way, the deficit angles at the half-axes $\theta = 0$ and $\theta = \pi$ follow as in Ref. [1]. The negative $z$ axis is then identified with the semi-infinite cosmic string. This semi-infinite cosmic string makes sense only if $m \neq 0$ and $\alpha \neq 0$. Finally, we define the coordinate $\phi$ such that $\Phi = C\phi$, where $-\pi < \phi < \pi$, and arrive at the final form of the line element, Eq. (5). The functions $f$ and $g$ are the same as in Eq. (1).
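For the reader's convenience, the standard form of these deficit angles, stated here as an assumption based on the conventions of Ref. [1] (the displayed equations are omitted in this version of the text), is

$$\delta_{\theta=0} = 2\pi\big[1 - C(1+2\alpha m)\big], \qquad \delta_{\theta=\pi} = 2\pi\big[1 - C(1-2\alpha m)\big],$$

so that the choice $C = (1+2\alpha m)^{-1}$ removes the defect at $\theta = 0$ and leaves the residual deficit $\delta_{\theta=\pi} = 8\pi\alpha m/(1+2\alpha m)$ along the negative $z$ axis.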
The C-metric space-time has a curvature singularity at $r = 0$, and two coordinate singularities: at $r = 2m$, which yields the event horizon $H_g$, and at $r = 1/\alpha$, which yields the acceleration horizon $H_a$. Thus, ignoring analytic extensions, the space-time may be divided into three regions [1]: I) $0 < r < 2m$, the interior of the black hole (non-static region); II) $2m < r < 1/\alpha$ (static region); III) $1/\alpha < r < \infty$ (non-static region). The coordinates in Eqs. (1) and (5) are suitable for Region II, which is the region of interest for the present analysis. The maximal analytic extension of these coordinates yields the description of a pair of black holes accelerating in opposite directions, each of them lying in space-time regions that are causally disconnected.
The limit $m \to 0$ of the C-metric is taken by first considering Eqs. (2) and (3), and by noting that the coordinates $\Phi$ in Eq. (1) and $\phi$ in Eq. (5) are related by $\Phi = C\phi$. The positive and negative half $z$ axes now have the same angular deficit, and the line element reduces to Eq. (6). If the acceleration parameter $\alpha$ vanishes, the line element above can be transformed into the standard form $ds^2 = -dt^2 + d\rho^2 + \beta^2\rho^2\, d\phi^2 + dz^2$ of a conical defect in cylindrical coordinates, provided we identify $\beta = C$. By means of a suitable coordinate transformation, the space-time described by the line element (6) can be transformed into the uniformly accelerated Rindler space-time in cylindrical coordinates. However, this coordinate transformation cannot be carried out globally, because the topological defect on the $z$ axis would be eliminated by such a global transformation.
In summary, we see that the C-metric space-time described by Eq. (1) is a non-linear superposition of a static black hole space-time and of a semi-infinite cosmic string along the negative $z$ axis.
The acceleration tensor
In this section we will make a brief presentation of the acceleration tensor, in order to characterise the acceleration of frames adapted to observers in the C-metric space-time. The tetrad field and the inverse frame field are denoted by $e^a{}_\mu$ and $e_a{}^\mu$, respectively. [Notation: $a$ and $\mu$ are SO(3,1) and space-time indices, respectively. The time and space components are denoted as $a = ((0), (i))$ and $\mu = (0, i)$. The metric tensor $g_{\mu\nu}$ and the flat, tangent space metric tensor $\eta_{ab} = (-1, +1, +1, +1)$ are related by $e^a{}_\mu e^b{}_\nu \eta_{ab} = g_{\mu\nu}$.] Along an arbitrary timelike worldline $C$, the velocity of an observer is denoted by $U^\mu$. We identify this velocity with the timelike component of the frame field, $U^\mu = e_{(0)}{}^\mu$. The acceleration of the observer along this worldline is defined by the covariant derivative of $U^\mu$ along $C$, where $\tau$ is the proper time of the observer along $C$, and the covariant derivative is constructed out of the Christoffel symbols. We have considered $U^\alpha = dx^\alpha/d\tau$ along $C$. Thus, $e_a{}^\mu$ yields the velocity and acceleration of an observer along the worldline. Therefore, a given set of tetrad fields, for which $e_{(0)}{}^\mu$ describes a congruence of timelike curves, is adapted to a particular class of observers, namely, to observers characterized by the velocity field $U^\mu = e_{(0)}{}^\mu$, endowed with acceleration $a^\mu$. If $e^a{}_\mu \to \delta^a_\mu$ in the limit $r \to \infty$, in an asymptotically flat space-time, then $e^a{}_\mu$ is adapted to static observers at spacelike infinity.
An alternative characterization of tetrad fields as an observer's frame may be given by considering the acceleration of the whole frame along an arbitrary path $x^\mu(\tau)$ of the observer. The acceleration of the whole frame is determined by the absolute derivative (constructed out of the Levi-Civita connection) of $e_a{}^\mu$ along $x^\mu(\tau)$. Thus, assuming that the observer carries an orthonormal tetrad frame $e_a{}^\mu$, the acceleration of the frame along the path is given by [6,7,8,9]
$$\frac{D e_a{}^\mu}{d\tau} = \phi_a{}^b\, e_b{}^\mu,$$
where $\phi_{ab}$ is the antisymmetric acceleration tensor. As discussed in Refs. [6,7], in analogy with the Faraday tensor we may identify $\phi_{ab} \leftrightarrow (\vec{a}, \vec{\Omega})$, where $\vec{a}$ is the translational acceleration ($\phi_{(0)(i)} = a_{(i)}$) and $\vec{\Omega}$ is the frequency of rotation of the local spatial frame with respect to a non-rotating, Fermi-Walker transported frame. It follows from Eq. (8) that the acceleration vector $a^\mu$ may be projected on a frame in order to yield its frame components; thus, $a^\mu$ and $\phi_{(0)(i)}$ are not different translational accelerations of the frame. The expression of $a^\mu$ given by Eq. (7) may be rewritten in terms of the Christoffel symbols ${}^0\Gamma^\mu{}_{\alpha\beta}$. We see that if $U^\mu = e_{(0)}{}^\mu$ represents a geodesic trajectory, then the frame is in free fall and $a^\mu = \phi_{(0)(i)} = 0$. Therefore we conclude that non-vanishing values of the latter quantities represent inertial accelerations of the frame.
An alternative expression of the acceleration tensor is given in Refs. [8,9]. The tensor $\phi_{ab}$ is invariant under coordinate transformations and covariant under global SO(3,1) transformations, but not under local SO(3,1) transformations. Because of this property, $\phi_{ab}$ may be used to characterise the inertial state of the frame. If the frame is maintained static in space-time, then the six components of the tensor $\phi_{ab}$ must cancel the six components of the gravitational acceleration on the frame.
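The omitted alternative expression is, up to conventions, the standard TEGR formula (reproduced here as an assumption, since the display is missing from this version of the text)

$$\phi_{ab} = \frac{1}{2}\big[T_{(0)ab} + T_{a(0)b} - T_{b(0)a}\big],$$

where $T_{abc} = e_b{}^\mu e_c{}^\nu T_{a\mu\nu}$ is the torsion tensor projected on the frame.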
Inertial accelerations in the C-metric space-time
We will establish a set of tetrad fields constructed in terms of the coordinates $(t, r, \theta, \phi)$, whose origin coincides with the centre of the accelerating black hole. Therefore, the tetrad field below is adapted to observers that see the accelerating black hole at rest, and yields the line element (5); it is given by Eq. (14). Recall that we are assuming $0 < 2\alpha m < 1$. Since the black hole is accelerated in the negative $z$ direction, the observer is likewise accelerated together with the black hole. From the point of view of the frame established by Eq. (14), the observer verifies that the black hole is at rest, i.e., the 4-velocity of the observer is of the type $U^\mu = e_{(0)}{}^\mu = (U^0, 0, 0, 0)$. In order to calculate the acceleration tensor out of the frame given by Eq. (14), we need the expressions of the torsion tensor $T_{\lambda\mu\nu} = e^a{}_\lambda T_{a\mu\nu}$. The non-vanishing components of the acceleration tensor are then easily calculated, and involve the function $F(r; \alpha) = \alpha \cos\theta\, B$. By combining these quantities, we obtain the inertial accelerations that must be imparted to the frame (14) in order for Eq. (14) to satisfy its defining properties. For instance: (i) by making $\alpha = 0$, we obtain the outward radial acceleration necessary to compensate the attractive radial acceleration due to the black hole (the function $F(r; \alpha)$ generalises this expression for non-vanishing values of $\alpha$); (ii) the term $-\alpha B\,\hat{z}$ represents the component of the acceleration on the frame along the negative direction of the $z$ axis, since the frame is accelerated together with the black hole.
In the absence of the black hole, i.e., in the case $m = 0$, the set of tetrad fields obtained from Eq. (6), which represents a frame adapted to observers accelerated along the negative $z$ direction, is given by Eq. (25). At the centre of the coordinate system, $r = 0$, we have $a^{(i)} = -\alpha\,\hat{z}$. The constant $C$, which appears in Eq. (25) and characterises the cosmic string, does not affect expressions (26) and (27). In fact, these expressions can be obtained directly from (22) and (23) by setting $m = 0$ in the latter equations.
Finally, we mention that for both sets of tetrad fields, Eqs. (14) and (25), the frequency of rotation, given by the $\phi_{(i)(j)}$ components of the acceleration tensor, vanishes. Both frames are Fermi-Walker transported.
A brief review of the TEGR
The gravitational energy of the C-metric space-time will be investigated in the context of the TEGR. This issue is somewhat intricate, because we have, in fact, an accelerated black hole in the presence of a semi-infinite cosmic string. To a certain extent, we will manage to disentangle these two gravitational field configurations.
As in previous presentations, we assume that the space-time geometry is established by the tetrad fields $e^a{}_\mu$ only. Thus, the only possible non-trivial definition for the torsion tensor is given by $T_{a\mu\nu} = \partial_\mu e_{a\nu} - \partial_\nu e_{a\mu}$, as in Eq. (13). This quantity is trivially related to the torsion of the Weitzenböck connection $\Gamma^\lambda{}_{\mu\nu} = e_a{}^\lambda\, \partial_\mu e^a{}_\nu$. A geometry defined solely by the tetrad field is more general than pure Riemannian geometry, since one can make use of both the curvature and torsion tensors, and of the Weitzenböck and Levi-Civita connections. Of course, the Riemann-Christoffel and Ricci tensors must exist in order to establish the equivalence between the TEGR and the ordinary metric formulation of general relativity.
In the TEGR, it is possible to rewrite Einstein's equations in terms of $e^a{}_\mu$ and $T_{a\mu\nu}$. The Lagrangian density of the theory is defined in Refs. [10,11], where $L_M$ stands for the Lagrangian density for the matter fields. The Lagrangian density $L$ is invariant under the global SO(3,1) group. Invariance under the local SO(3,1) group is verified as long as we take into account the total divergence that arises in the identity relating the torsion invariants to the scalar Riemannian curvature $R(e)$. However, the field equations derived from Eq. (28) are covariant under local SO(3,1) transformations, and are equivalent to Einstein's equations; they involve $\delta L_M/\delta e^{a\mu} = e\, T^{a\mu}$. Although the definition of the gravitational energy-momentum is established in the Hamiltonian framework, it may also be obtained in the framework of the Lagrangian formulation defined by (28), according to the procedure of Ref. [11] (we are now assuming $c = 1 = G$). Equation (31) may be rewritten in terms of $T^{\lambda\mu} = e_a{}^\lambda T^{a\mu}$ and the quantity $t^{\lambda\mu}$ defined by Eq. (33). In view of the antisymmetry property $\Sigma^{a\mu\nu} = -\Sigma^{a\nu\mu}$, it follows that
$$\partial_\lambda\big[e\, e^a{}_\mu\,(t^{\lambda\mu} + T^{\lambda\mu})\big] = 0.$$
The equation above yields the continuity (or balance) equation, where $S$ is the boundary of an arbitrary 3-dimensional volume $V$. Therefore we identify $t^{\lambda\mu}$ as the gravitational energy-momentum tensor [11], Eq. (36) as the total energy-momentum contained within the volume $V$, the corresponding surface integral as the gravitational energy-momentum flux [11,12], and
$$\Phi^a_m = \oint_S dS_j\,\big(e\, e^a{}_\mu\, T^{j\mu}\big)$$
as the energy-momentum flux of matter [12,13]. In view of (32), Eq. (36) may be written as $P^a = -\int_V d^3x\,\partial_j \Pi^{aj}$, from which it follows that
$$P^a = -\oint_S dS_j\, \Pi^{aj},$$
where $\Pi^{aj} = -4k\,e\, \Sigma^{a0j}$. A summary of all the issues discussed above may be found in Ref. [14]. The passage from a volume integral to a surface integral such as Eq. (39) cannot be carried out in the presence of singularities (admitting that the space-time has singularities), and for this reason we consider Eq. (39) as the definition of the gravitational energy-momentum. It must be noted, however, that the same feature takes place in the definition of the ADM gravitational energy-momentum, where the integrals of total divergences are transformed into surface integrals. The surface integral is preferable to the volume integral, because the gravitational field on the surface of integration $S$ carries information about the interior region, and the integral can be carried out more easily. In addition, definition (39) represents the total energy of the space-time within the surface $S$.
Equation (39) is the definition for the gravitational energy-momentum presented in Ref. [15], obtained in the framework of the vacuum field equations in Hamiltonian form. It is invariant under coordinate transformations of the three-dimensional space and under time reparametrizations. Note that (34) is a true energy-momentum conservation equation. In the ordinary formulation of arbitrary field theories, energy, momentum, angular momentum and the centre of mass moment are frame dependent field quantities, that transform under the global SO(3,1) group. In particular, the energy transforms as the zero component of the energy-momentum four-vector. These features of special relativity must also hold in general relativity, since the latter yields the former in the limit of weak (or vanishing) gravitational fields.
The problem of defining the gravitational energy-momentum has a long history, and is probably as old as general relativity itself. The existence of several pseudo-tensor expressions, including one proposed by Einstein, is well known, and all these expressions have an obvious limitation since they are not tensors. Nowadays, the majority of the objections (if not all) against the existence of a localized expression for the gravitational energy-momentum are justified by invoking the principle of equivalence. The idea is that the affine connection in general relativity can be made to vanish at a point in space-time, or even along an arbitrary worldline (timelike or spacelike). However, as argued before [16], the vanishing of the affine connection is a feature of differential geometry, and not a principle of nature. The problem regarding the definition of the gravitational energy-momentum has to do with transformations of frames, not transformations of coordinates.
This whole issue has been thoroughly discussed in Section 5 of Ref. [14]. In Subsection 5.3 of the latter reference, we have shown that the definitions of energy-momentum and 4-angular momentum that arise in the TEGR satisfy the Poincaré algebra in the phase space of the theory. This result, together with the calculation of the gravitational energy contained within the external event horizon of a Kerr black hole [15], distinguishes our definition from all other existing definitions. However, the gravitational energy-momentum and 4-angular momentum must be frame dependent, as we argued above. The tetrad frame may be freely chosen, since every observer in space-time, along arbitrary timelike worldlines, carries his/her own tetrad frame.
The local SO(3,1) symmetry is not present in expressions (36) and (39) for the gravitational energy-momentum but, in practice, the latter can be evaluated in any frame in space-time: static (with respect to spacelike infinity), stationary, free-fall, etc.
Gravitational energy in the C-metric space-time
Expression (39) for the total gravitational energy takes into account both the contribution of the black hole and that of the infinite cosmic string. Both gravitational field configurations are formally given by the integrand in Eq. (36). As we already mentioned, the analytical expression of the semi-infinite cosmic string alone is not yet known. Therefore, expression (39) is better suited to the analysis of the total gravitational energy of the C-metric space-time, because it incorporates the features of the non-linear superposition of the two geometrical field configurations. We will evaluate the gravitational energy of the C-metric space-time in a region not close to the acceleration horizon $H_a$ determined by $r = 1/\alpha$. We are interested in situations of present astrophysical interest, and thus we will ignore the acceleration horizon and possible maximal extensions of the C-metric space-time. In order to evaluate the surface integral in Eq. (39), we need the quantities $\Pi^{aj} = -4k\,e\,\Sigma^{a0j}$. The gravitational energy $P^{(0)}$ contained within a surface of constant radius $r$ is then determined by a surface integral with $dS_1 = d\theta\, d\phi$. The surface of constant radius $R$ is depicted in Figure 3 of Ref. [1], but as noted in this reference, there is no sharp vertex on the negative $z$ axis, at $\theta = \pi$, in spite of the presence of the semi-infinite cosmic string. The surface is regular at this point, so that Eq. (48) below (where $r = 2m$) may be easily obtained. By carrying out the integral above on the surface of constant radius $r = R$, using the relation $\tan^{-1} z = -\frac{i}{2}\ln\!\left(\frac{i-z}{i+z}\right)$, and after a number of simplifications, we finally arrive at Eq. (45). The gravitational energy $P_h$ contained within the event horizon $H_g$ can be evaluated by taking the limit $r \to 2m$ in the latter expression, which yields Eq. (46). When $\alpha = 0$, Eq. (45) simplifies to Eq. (47), which is a well-known result (obtained previously in the TEGR and by means of quasilocal expressions for the gravitational energy) that yields $P^{(0)} = m$ in the limit $R \to \infty$. It is very interesting to note that the energy contained within the event horizon $H_g$ given by Eq. (46) is related to the area $A_h$ of the event horizon calculated in Ref. [1], where $A_h$ is computed explicitly. Considering the value of $C$ adopted in Subsection 3.1 (as well as in Ref. [1]), $C = (1 + 2\alpha m)^{-1}$, it is straightforward to obtain the relation between $P_h$ and $\sqrt{A_h}$. This relation may be useful in the study of the thermodynamics of the C-metric black hole. In the limit $m \to 0$, $C$ is no longer given by $C = (1 + 2\alpha m)^{-1}$, but it can acquire arbitrary values (see Eq. (6)). The expression of $P^{(0)}$ in this limit is obtained directly from the tetrad fields (25), and represents the energy of an infinite cosmic string only, evaluated in an accelerated frame along the negative $z$ axis; it is given by Eq. (50). In the Figures below, we consider expression (45) for $P^{(0)}$ and display the total gravitational energy enclosed by a surface of constant radius $R$, considering $m = 1$ in natural units. In Figure 1 we display altogether: (i) $m = 1$, $\alpha = 0.01$; (ii) $m = 1$, $\alpha = 0$ (Schwarzschild); (iii) $m = 0$, $\alpha = 0.01$.
In the latter case (iii), we have considered Eq. (50) and have chosen $C = [1 + 2(0.01)]^{-1}$ in order to make a consistent comparison with the first two cases, i.e., the value of $C$ is the same in the three cases. In Figures 2 and 3 we consider $\alpha = 0.02$ and $\alpha = 0.03$, respectively, and the corresponding values of $C$. In all cases, we see that for higher values of the radial coordinate $R$, the energy of the infinite cosmic string dominates. In Figures 1, 2 and 3 we see that the energy in the space-time of a pure infinite cosmic string is negative. As we mentioned above, this energy dominates when we consider larger volumes of integration. Thus, the energy density in regions of higher values of the radius of surface integration $R$ (i.e., $R$ approaching $1/\alpha$) is negative. It is likely that this negative energy density is responsible for the acceleration of the black hole, since the black hole is moving towards the region of negative energy density. One argument in support of this conclusion is the following. Let us consider the gravitational energy contained within a surface of constant radius $r$ in a Schwarzschild space-time. It is given by Eq. (47). By making $r = 2m$ and $r \to \infty$ in the latter equation, we obtain $P^{(0)} = 2m$ and $P^{(0)} = m$, respectively. The case $r = 2m$ is in agreement with Eq. (46). These results can also be obtained by means of the quasi-local expression for the gravitational energy given by Brown and York [20]. Thus, in the region between $r = 2m$ and $r \to \infty$, the gravitational energy density is negative. The negative gravitational energy outside the event horizon may be identified with the negative Newtonian binding energy, which is attractive, as noted by Brown and York (see Eq. (6.16) of Ref. [20]). The same feature may be occurring here: a region of negative gravitational energy density exerts gravitational attraction, but in this case the black hole is being attracted, or accelerated, along the semi-infinite cosmic string in the negative $z$ axis. One may think that the black hole is approaching a state of lower energy, as do ordinary bodies in classical physics. The energy of space-time (topological) defects may be positive or negative, according to an "addition" or "removal" of a continuum medium to the space-time. Cosmic strings are disclination-type defects, and are highly energetic defects compared to dislocations. This issue is discussed in Refs. [17,18]. (See also Eq. (32) of Ref. [19], which presents the energy per unit length of a cosmic string; for a parameter $\beta_0 > 1$, this energy is negative.) When a substantial fraction of the space-time is "removed", as in the case of the infinite cosmic string (according to Eq. (4)), the total energy of the space-time may be negative in the frame accelerated along the negative $z$ axis.
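The Schwarzschild limits just quoted can be checked symbolically. The closed form used below, $E(r) = r\big[1 - \sqrt{1 - 2m/r}\,\big]$, is the well-known TEGR/quasilocal energy within a sphere of radius $r$; it is assumed here to coincide with Eq. (47), whose display is omitted in this version of the text.

```python
# Symbolic check (sympy) of the two Schwarzschild limits quoted above,
# assuming the standard closed form E(r) = r*(1 - sqrt(1 - 2m/r)).
import sympy as sp

r, m = sp.symbols('r m', positive=True)
E = r * (1 - sp.sqrt(1 - 2 * m / r))

print(sp.simplify(E.subs(r, 2 * m)))  # energy within the horizon: 2*m
print(sp.limit(E, r, sp.oo))          # total energy as r -> infinity: m
```

Both outputs agree with the values $P^{(0)} = 2m$ at $r = 2m$ and $P^{(0)} = m$ at spatial infinity stated above.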
In view of Eqs. (45) and (50), we may identify the energy of the black hole only as the difference between the total energy $P^{(0)}$ and the cosmic string energy $P^{(0)}_{cs}$. In the evaluation of $P^{(0)}_{cs}$, the constant $C$ is numerically chosen to be $C = (1 + 2\alpha m)^{-1}$, where $m$ and $\alpha$ are the values that yield $P^{(0)}$. Thus, both $P^{(0)}$ and $P^{(0)}_{cs}$ are endowed with the same constant $C$. The identification above turns out to be consistent, as we see in Figure 4, because the difference between this energy and the energy obtained from Eq. (47), which represents the energy enclosed by surfaces of constant radius $R$ in the Schwarzschild space-time, is not very significant. Although the surfaces of constant radius $R$ are not strictly the same in the space-times with and without the acceleration parameter $\alpha$ (i.e., there is no covariant relation between the radius $R$ in the two situations), the result displayed in Figure 4 is qualitatively relevant in indicating the consistency of our analysis. As a final remark, we note that $P^{(0)}_{cs}$ given by Eq. (50) vanishes in the flat space-time limit, which is obtained by requiring simultaneously $\alpha = 0$ and $C = 1$. In addition, the following limits are verified: (i) when $\alpha \to 0$, Eq. (45) reduces to the energy of the Schwarzschild space-time, as given by Eq. (47).
Conclusions
In this article we have addressed the C-metric space-time and have presented an explanation for the acceleration of the black hole. We recall that the C-metric space-time is a gravitational field configuration that describes a black hole accelerated along a semi-infinite cosmic string. The black hole is characterised by the mass parameter $m$, and the acceleration $\alpha$ yields the angular deficit in the negative part of the $z$ axis ($\theta = \pi$), characterised by $C = (1 + 2\alpha m)^{-1}$.
We obtained the expression for the gravitational energy contained within a surface of constant radius $R$ around the centre of the accelerated black hole. In the limit $r \to 2m$, we found the energy contained within the gravitational horizon, given by Eq. (46). This is the energy that cannot escape from the black hole. This energy may be identified with $2M_{irr}$, where $M_{irr}$ is sometimes defined as the irreducible mass of the black hole, in analogy with the definition of the irreducible mass of the Kerr black hole. For large values of the radius of integration $R$, the total gravitational energy (black hole plus the infinite cosmic string) is negative, according to Figures 1, 2 and 3. It is clear that this negative energy is dominated by the energy of the infinite cosmic string. As we argued at the end of Section 6, we may interpret the black hole as being dragged (accelerated) towards a state of lower energy along the infinite cosmic string. The larger the value of the radius of integration $R$, the more negative the gravitational energy density of the cosmic string. Therefore, the black hole moves towards regions of lower gravitational energy density.
The accelerated black hole, as described by the C-metric space-time, is not physically equivalent to the situation where the black hole is at rest and the observer undergoes an acceleration $-\alpha$. In particular, by means of a local Lorentz transformation, we cannot remove the acceleration of the black hole in the C-metric space-time.
We mention finally that we carried out a local Lorentz transformation on the set of tetrad fields (14) such that the new frame is accelerated in the positive $z$ direction with acceleration $+\alpha$. This new frame represents a static (or nearly static) frame in space-time, in which the observer is no longer attached to the black hole. We calculated the gravitational energy in this new frame and found that the resulting relation between $P^{(0)}$ and $R$ is extremely similar to Figures 1, 2 and 3, i.e., there is not a single qualitative difference between the relation of $P^{(0)}$ and $R$ in the two situations. This result ensures the frame independence of our main conclusion, in spite of the quantitative differences for the gravitational energy arising in the consideration of the nearly static frame, as compared to Eq. (45) (i.e., the latter equation is not frame independent). The quantitative differences are due to the emergence of the quantity $\gamma(t)$ in some terms of the expression of the gravitational energy in the nearly static frame, where $\gamma(t) = (1 - v(t)^2/c^2)^{-1/2}$. | 2021-07-22T01:16:25.733Z | 2021-07-20T00:00:00.000 | {
"year": 2021,
"sha1": "efe971040cdfc8047ec1e47b1e6fd2fbc9af83f2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bc57e7e6722c937326eadb801312df49d4d76326",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
265186842 | pes2o/s2orc | v3-fos-license | Weaning Habits and Nutritional Status Determinants Among Under-5 Children in Lagos State, Nigeria
Introduction: High household income may not guarantee a satisfactory nutritional outcome for children if households lack care, dietary quality and access to health care. Malnutrition will persist despite rapid income growth if a more effective approach to combat the problem is absent. This study was carried out to determine the influence of correct feeding habits on the nutritional status of under-fives in an urban city in southwestern Nigeria. Methodology: This was a community-based, cross-sectional, descriptive study; investigations such as a dietary survey, anthropometry and clinical examination were done. A multistage sampling technique was used in the study. Data were collected via a pretested, close-ended, interviewer-administered questionnaire. The questionnaires obtained from the study were analyzed using the Statistical Package for the Social Sciences (SPSS version 20) software programme. Result: A total of 407 participants with a mean age of 26.58 ± 10.10 months were recruited into the study. Exclusive breastfeeding was practiced by 39.8% of the mothers. About 56.5% of the mothers reported having given their babies the first milk (colostrum). The mean age of weaning off the breast was 7.4 ± 1.5 months. Immunization coverage among the respondents was BCG = 82.1%, oral polio = 81.3%, pentavalent [DPT 3] = 87.2% and measles = 88.7%. The mid-upper arm circumference revealed that 30 (55.6%) of the males were at greater risk of malnutrition than the females, 24 (44.4%), while 45 (56.1%) of the males had severe malnutrition. The mean head circumference-for-age Z-score test between boys and girls was statistically significant in the age groups 13 to 24 months (p = 0.009, C.I. = -0.27 to 0.19) and 25 to 36 months (p = 0.003, C.I. = -0.35 to -0.22). The values for the age groups 37-48 months (p = 0.53, C.I. = 0.015 to 0.59) and 49-60 months (p = 0.57, C.I. = 2.59 to 1.70) were not statistically significant. Conclusion: This study shows that the mean age of weaning was 7.4 ± 1.5 months. It also reported the mean values of head circumference to be significantly lower among girls than boys below 36 months, which may have developmental and nutritional implications in Nigeria and other African countries.
Introduction
One in every three under-five children has been reported to be malnourished globally, while 165 million were underweight, 101 million were stunted and 52 million were wasted [1][2]. Malnutrition among these under-five children is influenced by an interconnected network of factors, such as cultural, behavioral and environmental factors. Cultural factors determine the quality and quantity of food intake. They can also be a strong determinant of social accessibility to health care among these children, especially in developing countries [3][4].
Several studies have demonstrated that low income among the caregivers of under-five children is a major cause of childhood malnutrition [5][6]. However, the relationship between poverty and childhood malnutrition may be unpredictable and at times difficult to determine. Several studies have shown that high household income may not guarantee proper nourishment of children if households lack care, dietary quality and access to health care [7][8][9][10]. Malnutrition can affect any household irrespective of economic status, and it will continue to be a major threat despite increasing income growth if a more effective approach to combat it is absent [11][12].
The two major primary determinants of malnutrition among young children, as reported by several authors, are unsatisfactory food intake and severe and repeated infections. The interaction of these two conditions determines the nutritional status and overall health of the child. The interaction of these two factors with the culture and way of life of the caregivers has been a major predictor of child health. This has been reflected in the UNICEF conceptual framework of child survival [12]. Briefly, the model characterizes the correlates of malnutrition as factors that impair access to food, maternal and child care, and health care. It is these factors that determine the growth of these young children. Consequently, anthropometric measurement has been shown to be a valid indicator of young children's growth and wellbeing. Infant mortality and morbidity rates have also been shown to be suitable indicators of households' access to food, health and care [14].
Many previous studies concentrated mainly on socio-economic status differences measured in terms of income, wealth status and housing index [12]. Very few studies explored the impact of other determinants, such as dietary intake, food security, caring practices and the role of women, on children's nutritional status [13][14]. Every healthy child should have weight and height measurements that compare well with the standard normal distribution of heights (H) and weights (W) of children of the same age and sex. Therefore, the best way to assess the nutritional status and overall health of a child is to compare the child's growth indices with the set cut-off points in the standard normal distribution of adequately fed children that are associated with normal growth [4,12]. This study was therefore carried out to determine the influence of correct feeding habits on the nutritional status of under-fives in an urban city in southwestern Nigeria.
SELECTION CRITERIA
All children aged 0-59 months whose parents reside within Agbowa-Ikosi, and whose guardian or parent consented to participation in the study, were included. Children with an acute illness such as fever or diarrhea in the last one month, and children with cardiovascular diseases or congenital anomalies, were excluded from the study.
SAMPLE SIZE DETERMINATION
The sample size was determined using the formula for descriptive studies, n = Z²pq/d², where n = calculated sample size; Z = the standard normal deviate, usually set at 1.96, which corresponds to the 95% confidence interval; p = 71%, the prevalence of the proportion of nutritional status in the target population [15]; d = permissible error of estimation (0.05); and q = 1.0 - p. This gives n = (1.96² × 0.71 × 0.29)/(0.05)² = 316.394; adding 10% attrition, 316.394 + 31.396 = 347.79, which was rounded up to 400 questionnaires.

SAMPLING TECHNIQUE

A multistage sampling technique was used in the study.
Out of the 20 local governments in Lagos State, Epe Local Government Area was selected by balloting, and the Agbowa-Ikosi political constituency was selected from Epe LGA. Furthermore, simple random sampling (by balloting) was used to select 3 wards out of the total 6 wards in the local government area: Agbowa 1, Agbowa 2 and Ejirin.
At the ward level, simple random sampling (by balloting) was used to select 15 streets per ward, totalling 45 streets, to allow for an equal proportion of participants per ward (the estimated numbers of streets in the wards are 48, 39 and 27, respectively). The streets were numbered, each number was written on a small wrapped piece of paper, and the papers picked at random determined the streets selected for the study. At the street level, systematic random sampling was used to select 10 houses per street. Individual households were selected at regular intervals from the sampling frame, with the interval chosen to ensure an adequate sample size. The houses were numbered and the total was divided by the number of questionnaires to be distributed. In all, the 45 streets contained an average of 3150 houses; it then follows that 3150/400 = 7.875 ≈ 8, so every 8th house was selected into the study. One household per house was selected; in houses with more than one household, the selection was done using simple random sampling (by balloting). In total, 450 households were selected.
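The systematic sampling interval used above can be reproduced with a one-line computation; the figures are those stated in the text.

```python
# Interval for systematic random sampling of houses.
total_houses = 3150      # houses on the 45 selected streets
questionnaires = 400     # questionnaires to be distributed
interval = total_houses / questionnaires
print(interval, "-> select every", round(interval), "th house")  # 7.875 -> 8
```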
DATA COLLECTION
This study commenced in the community with an advocacy visit to the head of the community. Permission and cooperation were also sought from the staff of the health centre in the area. Three community health extension workers and five health attendants were trained for three days to assist in administering the questionnaires.
A checklist was used for the clinical examination. This study was carried out within a period of two months (April-June, 2018).
The pretest, comprising 10% of the total questionnaires, was administered at Sotubo, a town under Sagamu Local Government. The data were collected and analysed using SPSS version 20 to ascertain the validity of the instrument.
STUDY INSTRUMENT
The study instrument was an interviewer-administered questionnaire containing 5 sections. Information regarding each patient's age, sex, residence, birth weight, natal history of prematurity, breastfeeding practices, age at introduction of other types of food, types of complementary food in the first year of life and habit of frequent food intake was obtained from parents/guardians. Additional information elicited included the caretakers' occupation, family size and the education level of the parents.
ANTHROPOMETRIC PARAMETERS
Anthropometry is a technique that uses human body measurements to draw conclusions about the nutritional status of individuals and populations, and it is often applied to pre-school children below the age of 5 years. Measurements were taken with a flexible, non-stretch tape made of fibreglass or steel for children of 1-5 years.
In order to ensure consistency and reduce error in taking the measurements during field work, each measurement was taken twice, and the mean of the two readings was recorded during training. If any pair of readings exceeded the maximum allowable difference for a given variable, the measurements were repeated.
The steps followed in taking the MUAC measurement of a child were as follows: The mid-point between the elbow and the shoulder (acromion and olecranon) was determined, as shown in the picture below.
The tape measure was placed around the LEFT arm (the arm should be relaxed and hang down the side of the body).
The MUAC was measured while ensuring that the tape neither pinches the arm nor is left loose.
The measurement was read from the window of the tape or from the tape itself and was recorded to the nearest 0.1 cm (1 mm). Using a 3-colour tape, a measurement in the green zone means the child is properly nourished; a measurement in the yellow zone means that the child is at risk of malnutrition; a measurement in the red zone means that the child is acutely malnourished. The measurement was repeated two times to ensure an accurate interpretation. This is as illustrated in Figures 1 and 2.

Dietary survey: The dietary assessment protocol is designed to assess nutrient intakes by employing questionnaire, record and recall methods [16]. The Road to Health Chart (RTHC) was used to record immunization coverage and to calculate the Z-scores.
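Purely as an illustration of the 3-colour reading described above (this helper is not from the paper; the numeric cut-offs of 11.5 cm and 12.5 cm are assumed, following common WHO practice for children aged 6-59 months), the zones can be encoded as:

```python
# Map a MUAC reading (cm) to the 3-colour zones; cut-offs are assumed.
def muac_zone(muac_cm: float) -> str:
    if muac_cm < 11.5:
        return "red (acutely malnourished)"
    if muac_cm < 12.5:
        return "yellow (at risk of malnutrition)"
    return "green (properly nourished)"

for reading in (10.9, 12.0, 13.4):
    print(reading, "->", muac_zone(reading))
```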
DATA ANALYSIS
The questionnaires obtained from the study were analyzed using the Statistical Package for the Social Sciences (SPSS version 20) software programme. The data were presented in frequency distribution tables with percentages. Frequency tabulation was used to describe the socio-demographic characteristics of the respondents. Inferential statistical analysis was also used to determine associations between some variables and the nutritional status of the study population. The level of significance was set at p ≤ 0.05.
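For readers unfamiliar with the Z-scores reported below, the following sketch shows the underlying computation; the reference median and standard deviation are hypothetical placeholder values, not taken from the WHO tables or from this study.

```python
# Anthropometric Z-score: (observed - reference median) / reference SD.
def z_score(observed: float, ref_median: float, ref_sd: float) -> float:
    return (observed - ref_median) / ref_sd

# Hypothetical example: head circumference of 47.0 cm against an assumed
# reference median of 48.3 cm and SD of 1.4 cm for the child's age and sex.
print(round(z_score(47.0, 48.3, 1.4), 2))  # -> -0.93
```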
Participants were categorized using indices that were compared with the reference values of the World Health Organization (WHO) standard recommendations to obtain the Z-scores.

Health care seeking practices: All the caregivers, 407 (100.0%), sought health care assistance whenever their child was sick; 105 (25.8%) resorted to a pharmacy for medication, while 86 (21.1%) showed one symptom or the other (Table 2). Only 301 (74.0%) completed the immunization for the children at the appropriate time, while 226 (55.5%) of the children had been dewormed in the last 3 months, as shown in Table 3. Immunization coverage among the respondents was BCG = 82.1%, oral polio = 81.3%, pentavalent [DPT 3] = 87.2% and measles = 88.7% (Figure 3).

Breastfeeding and weaning practices: Exclusive breastfeeding was practiced by 39.8% of the mothers. About 56.5% of the mothers reported having given their babies the first milk (colostrum), while the rest (43.5%) discarded it. The majority (77.4%) of the mothers introduced complementary feeding at 5-8 months. The mean age of weaning off the breast was 7.4 ± 1.5 months. This is shown in Table 3. Tea (90.3%) and porridge (61.5%) were the most commonly used foods for complementary feeding, followed by maize (50.0%) and milk (31.3%). Regarding the frequency with which respondents fed their children diverse types of food over periods of one week to one month, only 31.3% fed their children milk 7 times a week and 10.8% fed fish 7 times per week, while 61.5% gave porridge 7 times per week and 90.3% fed the children tea 7 times per week. This shows that the most common food for the children after weaning was tea, being affordable, while only smaller percentages were able to afford other forms of food. The food recall frequencies are shown in Table 4.

Anthropometric indices of the respondents: The mid-upper arm circumference revealed that 30 (55.6%) of the males were at risk of malnutrition, more than the females, 24 (44.4%), while 45 (56.1%) of the males were also severely malnourished compared to 40 (43.9%) of the females. The mean head circumference increased with increasing age and was higher for boys in comparison with girls in the age group 13 to 24 months (CI: -0.75 to -0.71; p = 0.001). The mean HC of the age range 25 to 36 months also differed statistically significantly between boys and girls (CI: 0.59 to -0.68; p = 0.037). However, the mean HC between boys and girls in the age ranges 37 to 48 and 49 to 60 months was not statistically significantly different (p > 0.05).
Discussion
The study revealed the apparent significance of the breastfeeding and complementary feeding patterns of the mothers. The majority of the mothers did not practice exclusive breastfeeding, and only 56.5% gave the first milk (colostrum). However, this finding is higher than the 17% reported by the national health survey [19]. Exclusive breastfeeding is recommended globally as the mainstay of nutrition in the first 6 months of life, especially in low-income countries. Early initiation of breastfeeding (within one hour of birth) facilitates breast milk production and consumption of colostrum, which appears right after delivery. This implies that there is a low level of health education about breastfeeding among the mothers. Mothers' education has a significant influence on their breastfeeding habits. This finding is similar to findings in several studies [4][5][6] which established that a more educated mother/caregiver raises a better-nourished child than a less educated mother.
The mean age of weaning off the breast was 7.4 ± 1.5 months. As a global public health recommendation, infants should be exclusively breastfed for the first six months of life to achieve optimal growth, development and health. Infants should then receive nutritionally adequate and safe complementary foods while breastfeeding continues up to two years or beyond [7][8][9][18]. Early weaning, as reported in this study, should therefore be discouraged. Mothers, especially those in low-resource countries, should be encouraged to continue the practice of exclusive breastfeeding at least for the first 6 months of life, with the introduction of complementary feeds at this time, and to prolong the breastfeeding duration as long as possible. Mass educational programmes to ensure correct complementary feeding, and programmes to control malaria and diarrheal diseases, should be vigorously pursued by health authorities, especially the ministry of health.
Immunization coverage among the respondents was high in this study compared with what has been reported for the country. Only 1 out of 4 children aged 12-23 months in Nigeria completes the routine immunization schedule [19], while almost 20 million infants, about three-fifths of them found in 10 underdeveloped countries globally including Nigeria, did not receive routine immunization in 2016 [20]. Place of residence has been proven to be more important in predicting high immunization coverage than other personal attributes of children or their parents [21]. This may be due to inequalities in the use of maternal and child health services between rural and urban dwellers. However, the efficacy of immunization against childhood diseases has been well proven [22].
The study shows the mean values of head circumference to be significantly lower among girls than boys below 36 months. The fact that females cope better with adverse physiological conditions than their male counterparts is well documented in several studies [15,16]. The observed difference might be explained by the fact that weaning foods are typically introduced to children in the younger age groups, thus increasing their exposure to infections and susceptibility to illnesses, which are better tolerated by females. This tendency, coupled with inappropriate or inadequate feeding practices, may contribute to faltering nutritional status among children in these age groups [11][12][13][17].
Conclusion
This study shows that the mean age of weaning off the breast was 7.4 ± 1.5 months. Only 39.8% of the mothers practiced exclusive breastfeeding, while 56.5% gave the first milk (colostrum). The study also found the mean values of head circumference to be significantly lower among girls when compared to boys. Mothers, especially those in low-resource countries, should be encouraged to continue the practice of exclusive breastfeeding at least for the first 6 months of life and to prolong its duration as long as possible. Caretakers need to be educated on the weaning practices to be adopted. Female education cannot be over-emphasized, as it will broaden the horizon of the female gender on the responsibilities of motherhood.
Mass educational programmes to ensure correct complementary feeding, and programmes to control malaria and diarrheal diseases, should be vigorously pursued by health authorities, especially the ministry of health. Health education at all levels, including schools, universities, health services and the community level, should be implemented.
Declarations
Legend not included with this version.
Legend not included with this version.
Trends in immunization coverage of children.

STUDY AREA

Lagos State was the former federal capital of Nigeria; however, it remains the commercial capital of Nigeria. It consists of 20 local government areas. Its geographical coordinates are 6° 39' 0" North, 3° 43' 0" East. Agbowa-Ikosi is one of the political constituencies under Epe Local Government Area of Lagos State. It has a public general hospital located in Agbowa (E1, Agbowa). Agbowa-Ikosi lies on the south bank of a creek that extends from Lagos inland to Ikorodu, 35 kilometres north of the Epe Division. It comprises towns and villages such as Ota-Ikosi, Ikosi Beach, Orugbo-Iddo, Igbalu, Oke-Olisa, Gberigbe, Oko-Ito, Imope, Imota, Odo Ayandelu, Ado-Ikosi, Owu and Iganke. The inhabitants are mostly farmers and fishermen, though there are several commercial activities typical of an urban area. The religions of the people are Christianity, Islam and traditional beliefs. Agbowa-Ikosi consists of 6 political wards, namely Agbowa I, Agbowa II, Owuotta, Ajebo/Orugbo, Ifesowapo and Ketu/Ejirin. According to the 2006 population census, the total number of children was 35194.

STUDY DESIGN

This is a community-based, cross-sectional, descriptive study in which investigations such as a dietary survey, anthropometry and clinical examination were done.
Figure 1 & 2 below: Diagnostic criteria for severe acute malnutrition in children aged 6-60 months.

Ethical approval was obtained from the Health Research and Ethics Committee of Olabisi Onabanjo University Teaching Hospital, Sagamu, with No: OOUTH/HREC/218/2018AP. This research was performed in accordance with the Declaration of Helsinki. Adequate permission was obtained from the local authorities, such as the community head and the hospital staff. Written informed consent was obtained from the respondents, who are the parents and/or legal guardians of our study participants, after adequate explanation of the study procedure. Confidentiality of the information obtained was ensured.

Health care seeking practices: The mean age of the children under study was 26.58 ± 10.10 months; about half, 210 (51.6%), were males, and 181 (44.5%) were severely malnourished. Common symptoms reportedly experienced by the children were pale eyes (14.3%), poor appetite (10.6%), malaria (10.3%), headache (8.1%) and brittle fingernails (7.4%).
Ethics approval and consent to participate: as stated in the ethics statement above (OOUTH/HREC/218/2018AP).

Table 3: Breastfeeding and weaning practices.
Table 4: Food consumption frequencies.

Table 5: Mean Head Circumference (HC) based on the age- and sex-specific distribution of the children.
Table 6: Mean Head Circumference-for-age Z-score (HCAZ) based on the age- and sex-specific distribution of the children. | 2023-11-15T17:51:52.205Z | 2023-11-06T00:00:00.000 | {
"year": 2023,
"sha1": "0cd8490d9c0b4d5a187f709cb25375ffb5df59fc",
"oa_license": null,
"oa_url": "https://journalajpr.com/index.php/AJPR/article/download/291/573",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8566d7ca6f1fda31386d689330f9900d763dac62",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
4047446 | pes2o/s2orc | v3-fos-license | Phylogenetic diversity and biodiversity indices on phylogenetic networks
In biodiversity conservation it is often necessary to prioritize the species to conserve. Existing approaches to prioritization, e.g. the Fair Proportion Index and the Shapley Value, are based on phylogenetic trees and rank species according to their contribution to overall phylogenetic diversity. However, in many cases evolution is not treelike and thus, phylogenetic networks have come to the fore as a generalization of phylogenetic trees, allowing for the representation of non-treelike evolutionary events, such as horizontal gene transfer or hybridization. Here, we extend the concepts of phylogenetic diversity and phylogenetic diversity indices from phylogenetic trees to phylogenetic networks. On the one hand, we consider the treelike content of a phylogenetic network, e.g. the (multi)set of phylogenetic trees displayed by a network and the LSA tree associated with it. On the other hand, we derive the phylogenetic diversity of subsets of taxa and biodiversity indices directly from the internal structure of the network. Furthermore, we introduce our software package NetDiversity, which was implemented in Perl and allows for the calculation of all generalized measures of phylogenetic diversity and generalized phylogenetic diversity indices established in this note. We apply our methods to a phylogenetic network representing the evolutionary relationships among swordtails and platyfishes (Xiphophorus: Poeciliidae), a group of species characterized by widespread hybridization.
Introduction
Facing a major extinction crisis and the inevitable loss of biodiversity at the same time with limited financial means, biological conservation has to prioritize the species to conserve. In this matter, the so-called phylogenetic diversity (Faith (1992)) has been introduced as a measure of biodiversity based on the evolutionary history of species. It serves as a basis for biodiversity indices used in taxon prioritization, e.g. the Fair Proportion Index and the Shapley Value (Haake et al. (2007); Hartmann (2013); Fuchs and Jin (2015); Wicke and Fischer (2017)). Both phylogenetic diversity, as well as the Fair Proportion Index and the Shapley Value are based on phylogenetic trees and thus, assume the evolutionary history of species to be treelike. However, there are several forms of non-treelike evolution, such as horizontal gene transfer or hybridization, affecting a variety of species. Therefore, phylogenetic reticulation networks have become an important concept in evolutionary biology, allowing for the representation of non-treelike evolution. Here, we aim at combining both approaches, i.e. we aim at extending the concept of phylogenetic diversity and its measures from phylogenetic trees to phylogenetic networks. So far, phylogenetic diversity and the Shapley Value have been considered for so-called split networks, which can be used to represent conflict in data (Chernomor et al. (2016); Volkmann et al. (2014)), but no attempts have been made towards the generalization of phylogenetic diversity and its measures to reticulation networks. In this note we first recapitulate phylogenetic diversity, the Fair Proportion Index and the Shapley Value on phylogenetic trees, before we focus on generalizing these concepts to phylogenetic networks. We will introduce a variety of definitions for generalized phylogenetic diversity, following three main principles: the calculation of spanning arborescences and subgraphs of a network, the consideration of the (multi)set of phylogenetic trees displayed by a network and the construction of the so-called LSA tree associated with a network. We will then turn our attention to the Fair Proportion Index and the Shapley Value and suggest different ways of using them as taxon prioritization tools in the context of phylogenetic networks. All approaches are implemented in our new software tool NetDiversity, which has been made publicly available at www.mareikefischer.de/Software/NetDiversity.zip. Moreover, we test NetDiversity on a recently published phylogenetic network of swordtails and platyfishes (Xiphophorus: Poeciliidae), whose evolution is characterized by widespread hybridization (Solís-Lemus and Ané (2016)).
Preliminaries
Let X be a finite set of species (taxa). A rooted phylogenetic X-tree T is a rooted tree with root ρ where the leaves are bijectively labeled by X. T is called binary if all internal nodes have degree 3 and the root has degree 2. Throughout this paper, when we refer to trees, we always mean rooted phylogenetic trees. Furthermore, we assume all edges in a tree to have edge lengths greater than zero assigned to them, and we denote the length of an edge e as λ e > 0. Note that all edges in a rooted phylogenetic tree T are directed away from the root, thus formally the treeshape of T is a so-called arborescence.
Definition 1 (Arborescence). Let G = (V, E) be a directed graph and let ρ ∈ V be a specified root node (of indegree 0). Then G is an arborescence (rooted at ρ) if there is exactly one directed path from ρ to u for all nodes u ∈ V \ {ρ}.
A rooted binary phylogenetic network N on X is a connected rooted acyclic digraph such that:
• the root has outdegree 2 (and indegree 0),
• each node with outdegree 0 has indegree 1, and the set of nodes with outdegree 0 is bijectively labeled by X,
• all other nodes either have indegree 1 and outdegree 2, or indegree 2 and outdegree 1.
Nodes with indegree 2 and outdegree 1 are called reticulation nodes and all other nodes are called tree nodes. Furthermore, tree nodes with outdegree 0 are referred to as leaves. Edges directed into a reticulation node are called reticulation edges and edges directed into a tree node are called tree edges. When we refer to phylogenetic networks, we always mean rooted binary phylogenetic networks. Moreover, we assume all tree edges to have edge lengths greater than zero assigned to them and denote the length of a tree edge e as λ e > 0. W.l.o.g. we define the edge lengths of all reticulation edges to be zero. When we refer to the size of a tree or a network, we mean the number n = |X| of taxa, i.e. the number of leaves of the tree or network under consideration.
Let N be a phylogenetic network on X and let T be a phylogenetic X-tree.
We say that T is embedded in N, or that N displays T, if T can be obtained from N by deleting one of the reticulation edges for each reticulation node and suppressing resulting nodes of indegree 1 and outdegree 1. We use T(N) to denote the (multi)set of all rooted phylogenetic X-trees displayed by N. Note that we obtain the edge weights of an embedded tree T ∈ T(N) as follows: for all formerly distinct edges that are merged into a new edge by suppressing nodes of indegree 1 and outdegree 1, we add their edge lengths, while all other edges keep their original weights. Moreover, note that if there are k reticulation nodes in a rooted binary phylogenetic network N on a taxon set X, then there are at most 2^k phylogenetic X-trees displayed by N. However, this bound does not have to be sharp (cf. Figure 1).

Figure 1: The network N displays the phylogenetic X-trees T_1, T_2 and T_3. When deleting exactly one reticulation edge for each of the two reticulation nodes r_1 and r_2 in N, we also obtain tree T_4, in which the internal node w of N has become a leaf. However, we do not regard T_4 as a phylogenetic X-tree displayed by N, because w does not belong to taxon set X. Thus, in this case we have T(N) = {T_1, T_2, T_3}. The bold edges in T_1 represent the edges contributing to the phylogenetic diversity of S = {A, B} calculated in Example 1.
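To make the decomposition concrete, the following minimal Python sketch (our own illustration using networkx; it is independent of, and much simpler than, the NetDiversity tool introduced later, which is written in Perl) enumerates T(N) by trying every combination of retained reticulation edges, suppressing degree-two nodes, and discarding candidates such as T_4 whose leaf set is not exactly X. The 'length' edge attribute and all function names are our own conventions.

```python
from itertools import product
import networkx as nx

def suppress_degree_two(T):
    """Suppress nodes of indegree 1 and outdegree 1, adding edge lengths."""
    changed = True
    while changed:
        changed = False
        for v in list(T):
            if T.in_degree(v) == 1 and T.out_degree(v) == 1:
                (u, _), = T.in_edges(v)
                (_, w), = T.out_edges(v)
                length = T[u][v]['length'] + T[v][w]['length']
                T.remove_node(v)
                T.add_edge(u, w, length=length)
                changed = True
    return T

def displayed_trees(N, X):
    """Yield the phylogenetic X-trees displayed by N (cf. Figure 1)."""
    retics = [v for v in N if N.in_degree(v) == 2]
    for kept in product(*(list(N.in_edges(r)) for r in retics)):
        T = N.copy()
        for r, keep in zip(retics, kept):
            for e in list(T.in_edges(r)):
                if e != keep:
                    T.remove_edge(*e)
        T = suppress_degree_two(T)
        if {v for v in T if T.out_degree(v) == 0} == set(X):
            yield T  # candidates like T_4 (leaf not in X) are discarded
```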
For a phylogenetic network N and a node u of N that is not the root, we call any node v that lies on all directed paths from the root to u a stable ancestor of u. The so-called lowest stable ancestor of u is defined as the last node lsa(u) that is contained on all paths from the root to u, excluding u. Based on this terminology we can define the LSA tree (lowest stable ancestor tree) associated with a network. Let N be a rooted phylogenetic network on X. The LSA tree T_LSA(N) associated with N is a rooted phylogenetic X-tree that can be computed as follows: For each reticulation node r in N, remove all edges directed into r and add a new edge e = (lsa(r), r) from the lowest stable ancestor of r into r. Then repeatedly remove all unlabeled leaves and nodes with in- and outdegree 1, until no further such removal is possible. Note that the LSA tree associated with a binary rooted phylogenetic network is not necessarily a binary phylogenetic tree (cf. Figure 2). In order to use the LSA tree for subsequent phylogenetic diversity calculations, we have to infer edge lengths for the edges of the LSA tree. For all tree edges of N that are also present in T_LSA(N), we use their original edge weights. If during the removal of nodes of in- and outdegree 1 two formerly distinct tree edges of N are merged into a new edge in T_LSA(N), we add their original edge lengths. For all newly established edges e = (lsa(r), r) between a reticulation node r and its lowest stable ancestor, we suggest to set the length of these edges to the average path length of a path between lsa(r) and r, respectively, i.e. we set

λ_(lsa(r),r) := (1/|P_r|) Σ_{P ∈ P_r} length(P),

where P_r is the set of all lsa(r)-r-paths P in N and the length of any such path is obtained by adding the edge lengths of all edges that are part of this path (cf. Figure 2).
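The LSA tree construction can be sketched along the same lines. The helper below computes lsa(r) as the last node common to all root-to-r paths and assigns the averaged path length defined above to each new edge; it reuses suppress_degree_two() and the conventions of the previous sketch and is, again, our own illustration rather than published code.

```python
import networkx as nx

def lsa(N, root, u):
    """Lowest stable ancestor: the last node before u lying on every
    directed root-to-u path (stable ancestors are totally ordered)."""
    paths = list(nx.all_simple_paths(N, root, u))
    common = set.intersection(*(set(p) for p in paths)) - {u}
    return [v for v in paths[0] if v in common][-1]

def lsa_tree(N, root, X):
    """LSA tree with the averaged-path-length convention for new edges."""
    T = N.copy()
    for r in [v for v in N if N.in_degree(v) == 2]:
        a = lsa(N, root, r)
        paths = list(nx.all_simple_paths(N, a, r))
        mean_len = sum(sum(N[p[i]][p[i + 1]]['length']
                           for i in range(len(p) - 1))
                       for p in paths) / len(paths)
        T.remove_edges_from(list(T.in_edges(r)))
        T.add_edge(a, r, length=mean_len)  # the new edge e = (lsa(r), r)
    changed = True
    while changed:  # repeatedly remove unlabeled leaves
        changed = False
        for v in list(T):
            if T.out_degree(v) == 0 and v not in X:
                T.remove_node(v)
                changed = True
    return suppress_degree_two(T)
```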
Phylogenetic diversity and phylogenetic diversity indices on trees
In this section we briefly recapitulate the concept of phylogenetic diversity and phylogenetic diversity indices, in particular the Shapley Value and the Fair Proportion Index, for phylogenetic trees.
Definition 2 (Phylogenetic diversity). Let T be a rooted phylogenetic tree with leaf set X. For a subset S ⊆ X of taxa, the phylogenetic diversity PD(S) is calculated by summing up the edge lengths of the phylogenetic subtree of T containing S and the root (i.e., we consider the sum of edge lengths in the smallest spanning tree containing S and the root).

Figure 2: Rooted binary phylogenetic network N on X = {A, B, C, D} and its associated LSA tree T_LSA(N). Note that the reticulation edges (dashed) of N have weight zero. The node v is the lowest stable ancestor of the reticulation node r_1 and we have to consider two paths when calculating the length of the edge e = (lsa(r_1), r_1): P_1 = ((v, u), (u, r_1)) with length(P_1) = 1 + 0 = 1 (recall that we have defined the lengths of reticulation edges to be zero) and P_2 = ((v, w), (w, r_1)) with length(P_2) = 1 + 0 = 1. Thus, taking the average, we set length((lsa(r_1), r_1)) := 1. Analogously, node ρ is the lowest stable ancestor of r_2 and we have to consider the paths P_3 = ((ρ, v), (v, w), (w, r_2)) with length(P_3) = 1 + 1 + 0 = 2 and P_4 = ((ρ, x), (x, r_2)) with length(P_4) = 2 + 0 = 2. Thus, we set length((lsa(r_2), r_2)) := 2. However, subsequently the edges (v, r_1) and (r_1, B) are merged into a new edge (v, B) of length 1 + 1 = 2 and analogously, the edges (ρ, r_2) and (r_2, C) are replaced by a new edge (ρ, C) of length 2 + 1 = 3 to finally yield the LSA tree associated with N. Note that T_LSA(N) is not binary, because the root ρ has degree 3.
Example 1. Consider the phylogenetic tree T_1 on X = {A, B, C, D} depicted in Figure 1. Now consider the subset S = {A, B} ⊆ X of taxa. Then the phylogenetic diversity of S computes as PD(S) = 2 + 1 + 1 + 1 = 5.
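In code, Definition 2 amounts to taking the union of the root-to-taxon paths; a few lines in the conventions of the previous sketches suffice:

```python
import networkx as nx

def pd(T, root, S):
    """Phylogenetic diversity PD(S): total length of the smallest subtree
    of T connecting the root and the taxa in S (Definition 2)."""
    edges = set()
    for s in S:
        path = nx.shortest_path(T, root, s)  # the unique root-to-s path
        edges.update(zip(path, path[1:]))
    return sum(T[u][v]['length'] for u, v in edges)
```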
Based on phylogenetic diversity, we can now define the Shapley Value for phylogenetic trees. The Shapley Value for phylogenetic trees is used in different versions in the literature (cf. Wicke and Fischer (2017)), but we will use the so-called original Shapley Value throughout this paper.
Definition 3 (Original Shapley Value). Let T be a rooted phylogenetic tree with leaf set X and let PD(S) denote the phylogenetic diversity of S ⊆ X. Then the Shapley Value for a taxon a ∈ X is defined as

SV(a) := Σ_{S ⊆ X : a ∈ S} ((|S| − 1)! (n − |S|)! / n!) · (PD(S) − PD(S \ {a})),

where n = |X| and S denotes a subset of species containing taxon a (also sometimes referred to as 'coalition') and the sum runs over all such coalitions possible.
While the Shapley Value reflects the average contribution of a species to overall phylogenetic diversity and is thus a sensible prioritization criterion, its calculation is complicated. Therefore another index, the so-called Fair Proportion Index, has been introduced.
Definition 4 (Fair Proportion Index). For a rooted phylogenetic tree T with leaf set X the Fair Proportion Index of a taxon a is defined as

FP(a) := Σ_e λ_e / D_e,

where the sum runs over all edges e on the path from a to the root and D_e denotes the number of leaves descendent from that edge.
The Fair Proportion Index can easily be calculated, but lacks a biological motivation. However, its use has been justified by its equivalence with the original Shapley Value.
Theorem 1 (Fuchs and Jin (2015)). Let T be a rooted phylogenetic tree with leaf set X. Then we have SV(a) = FP(a) for all a ∈ X.

Example 2. Consider the phylogenetic tree T_1 on X = {A, B, C, D} depicted in Figure 1. Summing the Fair Proportion Indices of all four taxa yields the total sum of all edge lengths in T_1. Also note that the Fair Proportion Indices of T_1 equal the Shapley Values of T_1.
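A brute-force sketch of Definitions 3 and 4 makes Theorem 1 easy to check numerically on small instances. The tree below is a hypothetical four-edge example, not the tree T_1 of Figure 1, and pd() is reused from the sketch above; the Shapley computation is exponential in |X| and only meant for small examples.

```python
from itertools import combinations
from math import factorial
import networkx as nx

def fair_proportion(T, root, a):
    """FP(a): sum of length(e)/D_e over the edges e on the root-to-a path."""
    leaves = {v for v in T if T.out_degree(v) == 0}
    path = nx.shortest_path(T, root, a)
    fp = 0.0
    for u, v in zip(path, path[1:]):
        D = len((nx.descendants(T, v) | {v}) & leaves)  # leaves below (u, v)
        fp += T[u][v]['length'] / D
    return fp

def shapley(T, root, X, a, PD=pd):
    """Original Shapley Value of taxon a, directly from Definition 3."""
    n, sv = len(X), 0.0
    others = [x for x in X if x != a]
    for k in range(n):
        for rest in combinations(others, k):
            S = set(rest) | {a}
            w = factorial(len(S) - 1) * factorial(n - len(S)) / factorial(n)
            sv += w * (PD(T, root, S) - PD(T, root, S - {a}))
    return sv

T = nx.DiGraph()
T.add_weighted_edges_from([('rho', 'u', 1), ('u', 'A', 1), ('u', 'B', 2),
                           ('rho', 'C', 3)], weight='length')
# Theorem 1: both quantities equal 1.5 for taxon A on this example tree
assert abs(shapley(T, 'rho', ['A', 'B', 'C'], 'A')
           - fair_proportion(T, 'rho', 'A')) < 1e-12
```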
Generalization of phylogenetic diversity
We are now in the position to present our approaches towards the generalization of phylogenetic diversity from trees to networks. We will introduce three approaches, one based on the calculation of spanning arborescences and subgraphs of a network, one based on the set of trees displayed by a network and one based on the LSA tree associated with a network.
Phylogenetic (sub)net diversity
Recall that the phylogenetic diversity of a subset S ⊆ X of taxa of a phylogenetic X-tree T was calculated as the sum of branch lengths of the subtree of T containing S and the root. For a phylogenetic network N on X and a subset S ⊆ X of taxa, there may be more than one subtree, or to be precise, more than one arborescence (because a phylogenetic network is a directed graph) containing S and the root. Thus, we suggest to consider an arborescence of minimum cost and introduce the so-called phylogenetic net diversity.
Definition 5 (Phylogenetic net diversity). Let N be a rooted phylogenetic network on some taxon set X. For a subset S ⊆ X of taxa we define the phylogenetic net diversity PND(S) of S as the sum of branch lengths in a minimum cost arborescence containing S and the root.
Note that determining the minimum cost arborescence containing a subset S ⊆ X of taxa and the root is formally an instance of the so-called directed Steiner tree problem or Steiner arborescence problem, which, in general, is an NP-hard problem (Karp (1972)).

Example 3. Consider Figure 3, which depicts the rooted phylogenetic network N on X = {A, B, C, D} and the two arborescences A_1 and A_2 containing S = {A, B} and the root. A_1 has weight 1 + 1 + 2 = 4, while A_2 has weight 2 + 1 + 1 + 1 = 5. Thus, A_1 is the minimum cost arborescence containing S = {A, B} and the root and we retrieve the phylogenetic net diversity of S = {A, B} as PND({A, B}) = 4.
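Since the underlying problem is NP-hard, an exact computation is in general only feasible by exhaustive search or by specialised Steiner-tree solvers. The sketch below (our own, exponential in the number of edges) simply tries all edge subsets and is therefore only meant for toy networks of the size of Figure 3.

```python
from itertools import combinations
import networkx as nx

def pnd(N, root, S):
    """Phylogenetic net diversity (Definition 5): weight of a minimum cost
    arborescence in N rooted at the network root and containing S."""
    E = list(N.edges)
    best = float('inf')
    for k in range(1, len(E) + 1):
        for sub in combinations(E, k):
            A = N.edge_subgraph(sub)
            if root not in A or not set(S) <= set(A):
                continue
            # an arborescence whose unique source is the network root
            if nx.is_arborescence(A) and A.in_degree(root) == 0:
                best = min(best, sum(N[u][v]['length'] for u, v in sub))
    return best
```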
Instead of using spanning arborescences to define the phylogenetic diversity of a subset S ⊆ X of taxa of a phylogenetic network N on X, we can also consider the subgraph N S ⊆ N containing the root of N and S and define the phylogenetic diversity of S as the sum of branch lengths in N S .
Definition 6 (Phylogenetic subnet diversity). Let N be a rooted phylogenetic network on some taxon set X. For a subset S ⊆ X of taxa consider the subgraph N_S of N containing the root of N and the taxa in S (i.e., N_S is the subgraph of N containing all nodes and edges that lie on at least one path from the root of N to any of the leaves in S). Then we define the phylogenetic subnet diversity PSD(S) of S as the sum of branch lengths in N_S.
Example 4. Consider the rooted phylogenetic network N on X = {A, B, C, D} depicted in Figure 3 and set S = {A, B}. Then the subgraph N_S of N (highlighted with bold lines) has length 1 + 1 + 1 + 1 + 1 = 5 and thus, PSD({A, B}) = 5.
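Unlike PND, the subgraph N_S, and hence PSD(S), can be computed in polynomial time, because membership of an edge in N_S has a simple local characterisation, as the following sketch (same conventions as above) illustrates:

```python
import networkx as nx

def psd(N, root, S):
    """Phylogenetic subnet diversity (Definition 6): an edge (u, v) lies in
    N_S exactly when u is reachable from the root and some taxon of S is
    reachable from v, so PSD(S) is a single pass over the edges."""
    from_root = nx.descendants(N, root) | {root}
    S = set(S)
    total = 0.0
    for u, v in N.edges:
        if u in from_root and (nx.descendants(N, v) | {v}) & S:
            total += N[u][v]['length']
    return total
```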
Embedded phylogenetic diversity
If species are subject to hybridization or horizontal gene transfer, their genome contains parts of the genomes of both of their ancestors. However, evolution at the nucleotide level rather than the genome level is still treelike, because a single nucleotide can always be traced back to one parent. Therefore, we suggest to consider the set of trees embedded in a network as an alternative approach towards the generalization of phylogenetic diversity from trees to networks.
Definition 7 (Embedded phylogenetic diversity). Let N be a rooted phylogenetic network on some taxon set X and let T(N) be the (multi)set of all rooted phylogenetic X-trees displayed by N. Then we use PD^*_{T(N)}(S) to denote the embedded phylogenetic diversity of a subset S ⊆ X of taxa, where * is one of the functions min, max, Σ and ∅ (average), and define

PD^min_{T(N)}(S) := min_{T ∈ T(N)} PD_T(S),
PD^max_{T(N)}(S) := max_{T ∈ T(N)} PD_T(S),
PD^Σ_{T(N)}(S) := Σ_{T ∈ T(N)} PD_T(S),
PD^∅_{T(N)}(S) := (1/|T(N)|) Σ_{T ∈ T(N)} PD_T(S),

where |T(N)| is the number of phylogenetic X-trees displayed by N.
Note that * can be replaced by other functions on the phylogenetic diversity of the trees in T(N ), but we will only consider the minimum, the maximum, the sum and the average value of phylogenetic diversity in the set of embedded trees as defined above. Also note that we will only consider phylogenetic X-trees as elements of T(N ) and discard all other trees that may occur when decomposing the network into a set of trees (cf. Figure 1).
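Given the two previous sketches, the four variants of Definition 7 reduce to elementary statistics over the displayed trees:

```python
def embedded_pd(N, root, X, S):
    """Embedded phylogenetic diversity (Definition 7), reusing
    displayed_trees() and pd() from the earlier sketches."""
    values = [pd(T, root, S) for T in displayed_trees(N, X)]
    return {'min': min(values), 'max': max(values),
            'sum': sum(values), 'avg': sum(values) / len(values)}
```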
Relationship between the phylogenetic net diversity and the embedded phylogenetic diversity
Comparing the phylogenetic net diversity PND and the minimum embedded phylogenetic diversity PD^min_{T(N)} for a subset S ⊆ X of taxa, we see that they use a similar principle. While PND(S) is defined as the weight of a minimum cost arborescence spanning S and the root in a network N, PD^min_{T(N)} is defined as the weight of a minimum spanning tree/minimum cost arborescence spanning S and the root in the set T(N) of phylogenetic X-trees displayed by N. Thus, the two measures are related, but in general they are not identical. Consider, for example, the rooted phylogenetic network N depicted in Figure 1 and set S = {A, B, C, D}. Then we have PD^min_{T(N)}(S) = 9, while PND(S) = 8. However, we have the following relationship between PND and PD^min_{T(N)}:

Proposition 1. Let N be a binary rooted phylogenetic network on a taxon set X with k reticulation nodes and let T(N) be the set of phylogenetic X-trees displayed by N.

1. We have

PND(S) ≤ PD^min_{T(N)}(S)    (7)

for all subsets S ⊆ X of taxa.

2. If |T(N)| = 2^k, i.e. if all combinations of removing one reticulation edge for each reticulation node and suppressing nodes of both indegree 1 and outdegree 1 result in a phylogenetic X-tree, we have

PND(S) = PD^min_{T(N)}(S).

Remark. Note that |T(N)| = 2^k for example holds for so-called normal networks (cf. van Iersel et al. (2010)).
Proof of Proposition 1. Let N be a binary rooted phylogenetic network with root ρ, taxon set X and k reticulation nodes. Let T(N ) be the set of embedded trees and let R(N ) = {r | r is a reticulation node of N } be the set of reticulation nodes of N .
1. We show PD^min_{T(N)}(S) ≥ PND(S). For every T ∈ T(N) the phylogenetic diversity of a subset S ⊆ X of taxa is defined as the sum of branch lengths in the smallest arborescence spanning the taxa in S and the root. Clearly, the weight of any such arborescence cannot be smaller than the weight of a minimum cost arborescence spanning S and the root in N (all T ∈ T(N) are "subgraphs" of N, thus, any smallest arborescence spanning S and the root in a displayed tree T ∈ T(N) can also be found in N). In particular, we have PD^min_{T(N)}(S) = min_{T ∈ T(N)} PD_T(S) ≥ PND(S).

2. Now, suppose that |T(N)| = 2^k. We want to show that PND(S) = PD^min_{T(N)}(S). As we have PND(S) ≤ PD^min_{T(N)}(S) (Equation (7)), it suffices to show PND(S) ≥ PD^min_{T(N)}(S). Let A_S be the minimum cost arborescence spanning S and the root in N. By definition of an arborescence there is exactly one directed path from the root ρ to any other vertex v ∈ V(A_S). This implies that A_S contains at most one reticulation edge for each reticulation node r ∈ R(N), but never both reticulation edges directed into r ∈ R(N). If we now suppress nodes of both indegree 1 and outdegree 1 in A_S and add the weights of the edges which are merged into one edge by doing so, we retrieve a directed acyclic graph A'_S, which contains the taxa in S and whose weight equals the weight of A_S. By the construction of A'_S, however, A'_S must be a sub-arborescence of some embedded tree T_{A'_S} ∈ T(N), where the set of embedded trees is obtained by deleting one of the reticulation edges for each reticulation node and suppressing the resulting nodes of indegree 1 and outdegree 1, and every combination of doing so results in a phylogenetic X-tree (because we have assumed |T(N)| = 2^k). Thus, by definition of PD for trees, the weight of A'_S equals PD_{T_{A'_S}}(S) and as T_{A'_S} is embedded in N we have PND(S) = weight(A'_S) = PD_{T_{A'_S}}(S) ≥ PD^min_{T(N)}(S). Combining the above, we have PND(S) = PD^min_{T(N)}(S) as claimed.
LSA associated phylogenetic diversity
As it can be difficult to determine the set of phylogenetic X-trees displayed by a network N on X, we now consider the LSA tree associated with a network. The LSA tree can be seen as a way to summarize the treelike content of a phylogenetic network, on which all its embedded trees agree, without explicitly having to consider these trees.
Definition 8 (LSA associated phylogenetic diversity). Let N be a rooted phylogenetic network on some taxon set X. Let S ⊆ X be a subset of taxa. Then we define the LSA associated phylogenetic diversity PD^LSA(S) as

PD^LSA(S) := PD_{T_LSA(N)}(S),

where PD_{T_LSA(N)}(S) is the phylogenetic diversity of S in the LSA tree T_LSA(N) associated with N.
Example 6. Consider the rooted phylogenetic network N and its associated LSA tree T_LSA(N) depicted in Figure 2. Exemplarily, we set S = {A, B} and retrieve the LSA associated phylogenetic diversity of S as PD^LSA(S) = 2 + 2 + 1 = 5.
We have introduced a variety of ways to define the phylogenetic diversity of a subset S ⊆ X of taxa in a network. However, the information about the phylogenetic diversity of a subset S ⊆ X of taxa in itself is not very useful for taxon prioritization decisions. Thus, we now turn our attention towards the generalization of phylogenetic diversity indices from trees to networks.
Generalization of phylogenetic diversity indices
After proposing different ways of generalizing the concept of phylogenetic diversity from trees to networks, we will now turn our attention to the Fair Proportion Index and the Shapley Value, two prioritization indices used in biodiversity conservation. Even though the Fair Proportion Index and the Shapley Value are equivalent for rooted phylogenetic trees (Fuchs and Jin (2015)), they differ significantly in their definition and computation. While the Fair Proportion Index is directly based on a given rooted phylogenetic tree (cf. Definition 4), the definition of the Shapley Value is based on the phylogenetic diversity of subsets of taxa, and thus only indirectly on a given phylogenetic tree (cf. Definition 3). To be precise, the calculation of the Shapley Value involves two steps:

1. Calculation of the phylogenetic diversity for all subsets of taxa based on a given phylogenetic tree.
2. Calculation of the Shapley Value for all taxa based on the phylogenetic diversity calculated in step 1.
This implies that we have two possibilities when extending the Shapley Value from trees to networks: We can either use any generalized definition of phylogenetic diversity (e.g. the phylogenetic net diversity, the embedded phylogenetic diversity or the LSA associated phylogenetic diversity) introduced above and calculate the Shapley Value based on this measure, or we can reduce the network to its treelike content (e.g. via the set of embedded trees or the LSA tree) and calculate the Shapley Value based on these trees. We will, however, start with the reduction of a network to its treelike content, which is also used to generalize the Fair Proportion Index to networks.
Embedded Shapley Value and Fair Proportion Index
Similar to the embedded phylogenetic diversity, we will now use the set T(N) of phylogenetic X-trees displayed by a network N on X in order to define the so-called embedded Shapley Value and the embedded Fair Proportion Index.

Definition 9 (Embedded Shapley Value, embedded Fair Proportion Index). Let N be a rooted phylogenetic network on some taxon set X and let a ∈ X be a taxon. Then we use DI^*_{T(N)}(a) with DI ∈ {SV, FP} and * ∈ {min, max, Σ, ∅} to denote the embedded Shapley Value or the embedded Fair Proportion Index, defined analogously to Definition 7, e.g.

DI^min_{T(N)}(a) := min_{T ∈ T(N)} DI_T(a)  and  DI^∅_{T(N)}(a) := (1/|T(N)|) Σ_{T ∈ T(N)} DI_T(a),

where |T(N)| is the number of phylogenetic X-trees displayed by N.
Note that as the Shapley Value and the Fair Proportion Index are equivalent on rooted phylogenetic trees (Fuchs and Jin (2015)), the embedded values coincide as well, i.e. SV^min_{T(N)}(a) = FP^min_{T(N)}(a) for all a ∈ X, etc.
Example 7. Consider the rooted phylogenetic network N on X = {A, B, C, D} and its embedded trees T_1, T_2 and T_3 depicted in Figure 1 and fix taxon A ∈ X. Then we have FP_{T_1}(A) = 7/3, FP_{T_2}(A) = 5/2 and FP_{T_3}(A) = 11/6. Thus, we retrieve the different versions of the embedded Fair Proportion Index of A as FP^min_{T(N)}(A) = 11/6, FP^max_{T(N)}(A) = 5/2, FP^Σ_{T(N)}(A) = 20/3 and FP^∅_{T(N)}(A) = 20/9.
LSA associated Shapley Value and Fair Proportion Index
An alternative way of reducing a phylogenetic network to its treelike content is the LSA tree. Thus, we will now introduce the LSA associated Shapley Value and the LSA associated Fair Proportion Index.
Definition 10 (LSA associated Shapley Value, LSA associated Fair Proportion Index). Let N be a rooted phylogenetic network on some taxon set X. Let a ∈ X be a taxon in X. Then we use DI^LSA(a) with DI ∈ {SV, FP} to denote the LSA associated Shapley Value or LSA associated Fair Proportion Index and define

DI^LSA(a) := DI_{T_LSA(N)}(a),

where DI_{T_LSA(N)}(a) is the respective diversity index (i.e. the Shapley Value or the Fair Proportion Index) in the LSA tree T_LSA(N) associated with N.
Obviously, SV^LSA(a) = FP^LSA(a) for all a ∈ X, because the two values coincide for rooted phylogenetic trees, thus they coincide in particular for the LSA tree.
Example 8. Consider the rooted phylogenetic network N and its associated LSA tree T_LSA(N) depicted in Figure 2 and fix taxon A ∈ X. Then the LSA associated Fair Proportion Index of A is FP^LSA(A) = 1/2 + 2/1 = 5/2.
Generalized Shapley Value
As the definition of the Shapley Value is only indirectly based on a given phylogenetic X-tree and just requires a measure of phylogenetic diversity for all subsets S ⊆ X of taxa (cf. Definition 3), we now introduce an alternative way of calculating the Shapley Value for the taxa of a phylogenetic network N . We suggest to calculate the Shapley Value according to its definition and use any measure of generalized phylogenetic diversity (e.g. the phylogenetic net diversity, the embedded phylogenetic diversity or the LSA associated phylogenetic diversity) as an input. We call the resulting value the generalized original Shapley Value.
Definition 11 (Generalized Shapley Value). Let N be a rooted phylogenetic network on some taxon set X and let T(N) be the (multi)set of all rooted phylogenetic X-trees displayed by N. Let a ∈ X be a taxon in X and let PD(S) denote any generalized measure of phylogenetic diversity of a subset S ⊆ X of taxa. Then the generalized Shapley Value of a is defined as

SV_PD(a) := Σ_{S ⊆ X : a ∈ S} ((|S| − 1)! (n − |S|)! / n!) · (PD(S) − PD(S \ {a})),

where n = |X| and S denotes a subset of species containing taxon a and the sum runs over all such subsets possible. For instance, such a computation for a taxon in a network on four taxa (so that n! = 24) takes the form

(1/24) · (1·6·3 + 1·2·(1 + 2 + 3) + 2·1·(2 + 1 + 3) + 6·1·1) = 48/24 = 2.
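In code, Definition 11 is the Shapley formula with the tree-based PD swapped for an arbitrary set function, e.g. pnd or psd from the sketches above (again our own sketch, not NetDiversity, and exponential in the number of taxa):

```python
from itertools import combinations
from math import factorial

def generalized_shapley(X, a, PD):
    """Generalized Shapley Value of taxon a (Definition 11); PD maps a set
    of taxa to a generalized diversity value, with PD(empty set) = 0."""
    n, sv = len(X), 0.0
    others = [x for x in X if x != a]
    for k in range(n):
        for rest in combinations(others, k):
            S = frozenset(rest) | {a}
            w = factorial(len(S) - 1) * factorial(n - len(S)) / factorial(n)
            sv += w * (PD(S) - PD(S - {a}))
    return sv
```

Because PD is typically expensive (e.g. pnd), memoizing it over the 2^n subsets is advisable even for moderately sized networks.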
Relationship between the different versions of the Shapley Value for phylogenetic networks
We now briefly compare the generalized Shapley Value and the embedded Shapley Value of a phylogenetic network N on X.
The first observation to make is that, in general,

(i) SV^*_{T(N)}(a) ≠ SV_{PD^*_{T(N)}}(a)

for a ∈ X. Consider for example the rooted phylogenetic network N on X = {A, B, C, D} depicted in Figure 1 and fix taxon A. Then the generalized Shapley Value SV_{PD^min_{T(N)}}(A) and the embedded Shapley Value SV^min_{T(N)}(A) take different values, i.e.

(ii) SV^min_{T(N)}(A) ≠ SV_{PD^min_{T(N)}}(A).

If we compare the LSA associated Shapley Value SV^LSA and the generalized Shapley Value SV_{PD^LSA} that uses the LSA associated phylogenetic diversity as input, we see that all calculations are based upon the LSA tree associated with a network N on X, thus for all a ∈ X

(iii) SV^LSA(a) = SV_{PD^LSA}(a).
Software and Data
In order to calculate the different generalized measures of phylogenetic diversity and generalized diversity indices introduced above, we developed a software tool called NetDiversity, which is available from www.mareikefischer.de/Software/NetDiversity.zip. The tool is written in the programming language Perl and uses modules from BioPerl (Stajich (2002)), in particular the Bio::PhyloNetwork package (Cardona et al. (2008a)). The program takes networks represented in the so-called extended Newick format (Cardona et al. (2008b)) as an input. Depending on the options chosen, the program either outputs any measure of generalized phylogenetic diversity for all subsets of taxa or any generalized diversity index for all taxa of the network. We apply NetDiversity to a phylogenetic network of swordtails and platyfishes (Xiphophorus: Poeciliidae) (cf. Solís-Lemus and Ané (2016)). This is one of the few published hybridization networks, even though hybridization is suspected to have occurred in a variety of other organisms as well. The Xiphophorus hybridization network inferred in Solís-Lemus and Ané (2016) contains 24 species and 2 reticulation nodes (cf. Figure 4). Exemplarily, we use NetDiversity to calculate the different versions of the Fair Proportion Index for the Xiphophorus species. Note that there are 2^24 = 16,777,216 possible subsets of taxa for a network on 24 species, which is why we refrain from calculating any measure of generalized phylogenetic diversity for all subsets of Xiphophorus or the generalized Shapley Value here. Table 1 summarizes the results. For the Xiphophorus network, the rankings obtained by the embedded Fair Proportion Indices and the LSA associated Fair Proportion Index are very similar. There are, however, two striking differences concerning the species X. xiphidium and X. nezahuacoyotl. While X. xiphidium is ranked low by FP^min_{T(N)}, it is placed among the top 10 species by all other indices. The other difference between the indices concerns X. nezahuacoyotl, a hybrid species. X. nezahuacoyotl is ranked first by FP^LSA, while it is ranked 12th, 12th and 15th by the other indices.
Thus, in case of the Xiphophorus network, the different versions of the generalized Fair Proportion Index yield similar results, but there are striking differences. In particular the question of whether hybrid species are of high or low importance for overall biodiversity remains to be considered from a biological perspective.

Discussion and Outlook
In this paper, we have introduced different approaches towards the generalization of phylogenetic diversity and phylogenetic diversity indices from trees to networks. Our approaches provide an extension to existing prioritization tools in conservation biology and allow for the consideration of phylogenetic networks in prioritization decisions. This is of importance if the evolutionary history of a set of species is known to be non-treelike, and thus cannot be represented by a phylogenetic tree. We have applied our methods to a phylogenetic network representing the evolutionary relationships among swordtails and platyfishes (Xiphophorus: Poeciliidae), whose evolution is characterized by widespread hybridization. We have seen that different biodiversity indices may induce striking differences in the ranking order of taxa for conservation. Therefore, we remark that further research concerning the biological plausibility of our approaches is necessary before they can be put into practice. This may be achieved when more phylogenetic networks for different groups of organisms become available and can be analyzed under both a biological and a mathematical perspective. Decisions in biodiversity conservation and taxon prioritization always require thorough examination and should include as much information as possible. Therefore we are currently working on the incorporation of inheritance probabilities into our approaches. For a reticulation node, e.g. a hybrid species, inheritance probabilities reflect the probability or relative frequency with which the hybrid species inherits its genetic material from each of its parents and thus provide additional information on the evolutionary history of species that can be taken into account in prioritization decisions.
Supporting Information
S1 Text. Supporting information file that contains the Xiphophorus hybridization network (Solís-Lemus and Ané (2016)), its LSA tree and its embedded trees. | 2017-08-28T14:59:21.000Z | 2017-06-16T00:00:00.000 | {
"year": 2017,
"sha1": "c783af1ff874fe34885889f6dfe899ce89921922",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1706.05279",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c783af1ff874fe34885889f6dfe899ce89921922",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Mathematics",
"Medicine"
]
} |
232037518 | pes2o/s2orc | v3-fos-license | Material properties particularly suited to be measured with helium scattering: selected examples from 2D materials, van der Waals heterostructures, glassy materials, catalytic substrates, topological insulators and superconducting radio frequency materials
Helium Atom Scattering (HAS) and Helium Spin-Echo scattering (HeSE), together helium scattering, are well-established but non-commercial surface science techniques. They are characterised by the beam inertness and very low beam energy (< 0.1 eV), which allows essentially all materials and adsorbates, including fragile and/or insulating materials and light adsorbates such as hydrogen, to be investigated on the atomic scale. At present there exist an estimated fewer than 15 helium and helium spin-echo scattering instruments in total, spread across the world. This means that until now the techniques have not been readily available for a broad scientific community. Efforts are ongoing to change this by establishing a central helium scattering facility, possibly in connection with a neutron or synchrotron facility. In this context it is important to clarify what information can be obtained from helium scattering that cannot be obtained with other surface science techniques. Here we present a non-exclusive overview of a range of material properties particularly suited to be measured with helium scattering: (i) high precision, direct measurements of bending rigidity and substrate coupling strength of a range of 2D materials and van der Waals heterostructures as a function of temperature, (ii) direct measurements of the electron-phonon coupling constant λ exclusively in the low energy range (< 0.1 eV, tuneable) for 2D materials and van der Waals heterostructures, (iii) direct measurements of the surface boson peak in glassy materials, (iv) aspects of polymer chain surface dynamics under nano-confinement, (v) certain aspects of nanoscale surface topography, (vi) central properties of surface dynamics and surface diffusion of adsorbates (HeSE) and (vii) two specific science case examples, topological insulators and superconducting radio frequency materials, illustrating how combined HAS and HeSE are necessary to understand the properties of quantum materials. The paper finishes with (viii) examples of molecular surface scattering experiments and other atom surface scattering experiments which can be performed using HAS and HeSE instruments.
1 Introduction
New materials require adequate tools in order to characterise and understand their fundamental properties. No single technique provides all the answers. It is usually necessary to use several different probes in combination. Each technique exploits the unique features of the interaction between the probe and the material under investigation. A key feature of all the methods in use today is the ability to provide information with high spatial resolution since the design of new materials rests on characterisation on the atomic scale. Furthermore, the enormous developments in 2D materials, van der Waals (vdW) heterostructures and nano-structured surfaces in general, have enhanced the need for surface/few atomic layers sensitive techniques.
Many probes are available to characterise materials and they offer a rich palette of opportunity because their properties and their interaction with the material differ so profoundly. In general the best quantitative information on the smallest length scale with ordered structures is obtained from scattering experiments, while microscopy is preferred on longer length scales and with heterogeneous structures. In this paper we concentrate exclusively on scattering experiments done using beam-probes: photons, electrons, ions, neutrons, and neutral molecules or atoms. Scattering experiments separate into experiments probing ''static'' structure, such as diffraction and experiments probing dynamical processes such as diffusion or vibrations (i.e., phonons). The energy of the scattering particles together with the nature of the interaction potential define the information that can be obtained from an experiment. Major (interrelated) parameters of a scattering probe are (i) wavelength, (ii) time-resolution, (iii) energy and (iv) penetration depth. The wavelength determines the spatial resolution. The time-resolution is of central importance to the study of dynamical processes (i.e., phonons and diffusion). It is usually limited by beam intensity and detector response, or by the range of energy-transfer that is accessible. The energy determines both the wavelength and thus the spatial resolution, as well as the excitations that can be observed and also the damage that individual quanta can create through inelastic scattering. It also has some influence on the penetration depth. Furthermore the energy spread of the incident beam limits the time-resolution. This limitation can be overcome by the spin-echo principle, see Section 3.6. The final parameter, the penetration depth, is particularly important for the investigation of surfaces and ''few atomic layers'' materials. If the scattered signal contains a too large contribution from the bulk the information from the surface/first few atomic layers may be entirely swamped. The penetration depth is determined by the interaction potential between the scattering probe and the sample in combination with the probe energy: electrons, X-rays and neutrons all scatter off the electronic cloud of the atomic cores and atomic nuclei in the sample, and always have a certain penetration into the bulk. Specific methods have been developed to enhance the surface sensitivity of the techniques: low energy electron diffraction, 1 grazing incidence wide angle X-ray scattering, 2 grazing incidence small-angle X-ray scattering 3 and grazing incidence small-angle neutron scattering, 4 but a certain penetration always remains for these probes.
The only scattering probes that do not penetrate at all into the bulk are neutral molecular and atomic beams including neutral helium, created by supersonic expansion. Unlike electrons, X-rays and neutrons which all interact with the core electronic cloud and atomic nuclei in the sample, as described above, the neutral molecules and atoms scatter off the outermost electron density distribution at the sample surface. This is illustrated for helium in Fig. 1. The surface sensitivity arises from a combination of low energies and the Pauli exclusion principle, which gives an interaction dominated by the valence electrons of the sample. The classical turning point for helium is a few Ångstroms above the surface. 5 A key feature of the He-electron collision is its softness: the energy of a 0.1 nm wavelength helium atom is only 20 meV, see Section 2, so no sample damage is induced. Helium scattering can probe essentially all materials and adsorbates, including fragile and/or insulating materials and light adsorbates such as hydrogen. The technique has been presented as the surface analogy to neutron scattering from bulk materials. As we shall see it is still possible to probe some properties related to the first few atomic layers.
Despite a significant body of literature, including both hard and soft surfaces and metal, semiconductor and insulating materials, helium atom scattering (HAS) and helium spin-echo (HeSE) cannot be described as mainstream techniques, due to the fact that they are not readily accessible for the broad scientific community at present. The purpose of this paper is to describe, through a series of examples, the unique benefits offered by helium scattering with an emphasis on encouraging and extending its range of application. For overviews of work on HAS and HeSE see ref. [6][7][8] and the very recent book by Benedek and Toennies dedicated to surface phonon dispersion measurements. 9 This book also serves as an excellent introduction to the topic of helium atom scattering. See also the classical work on Atomic and Molecular Beam Methods edited by Scoles, in particular the chapter by Miller on free jet sources. 10

Fig. 1 Graphical representation of the different processes for the scattering of He atoms on a crystal surface. Note how the helium atom scatters off the electron density distribution, indicated as red lines, without any penetration into the bulk. Selective adsorption refers to the trapping of a helium atom in the helium surface interaction potential, see ref. 9 for further discussion. Here λ_i and λ_f denote the wavelengths of the incident and scattered helium atoms, respectively. Inelastic scattering leads to a wavelength change.
2 The experimental setup
Of all scattering techniques, helium scattering arguably uses the simplest source: a helium pressure bottle. The first helium scattering experiment was performed by Otto Stern and co-workers in 1930. 11 The observation of diffraction peaks from LiF and NaCl not only confirmed the de Broglie matter wave hypothesis for atoms but also provided the basis for a new material characterisation technique. However, the low pressure, effusive source used in the initial experiments, with a broad velocity distribution and low intensity, was not ideal for scattering experiments. A breakthrough came in 1951 when Kantrowitz and Grey proposed to use gas at high pressure. 12 In these novel sources, the pressure is so high that the atoms collide in the exit aperture of the source (the nozzle) as the beam expands into vacuum. The centre part of the beam is selected by a so-called skimmer and the resulting supersonic beam has a source pressure dependent velocity distribution narrower than the equilibrium Maxwell distribution of the gas in the source. The velocity of the beam can be changed by cooling or heating the nozzle. A liquid nitrogen cooled beam has a wavelength around 0.1 nm, corresponding to a velocity of around 1000 m s^−1 and an energy of around 20 meV. 10 The energy is usually kept at less than 0.1 eV to stay in the quantum mechanical scattering regime, see Section 3.8.3. The energy resolution, as determined by the velocity spread of the beam, varies with pressure and temperature, but is typically around 0.3 meV for a liquid nitrogen cooled beam. 13,14

Essentially, two different types of experiments can be performed in HAS: elastic and inelastic scattering. The different scattering processes are illustrated in Fig. 1. Fig. 2 shows a diagram of a standard HAS instrument. See Section 3.6 for a description of the HeSE extension. Elastic scattering measures the angular dependence of helium atoms that scatter without energy loss or gain. Elastic scattering experiments can be used to obtain information about the surface topography (corrugation) of crystalline surfaces and amorphous surfaces that are corrugated on the nanoscale. For crystalline materials the lattice parameter and corrugation amplitude can be extracted from the diffracted intensities, whereby the term corrugation amplitude refers to the electronic charge corrugation at the surface. For amorphous surfaces, the characteristic distance can be determined by the radius of the amorphous (vitreous) ring. For further discussion of the measurement of nanoscale surface topography using HAS, see Section 3.5.
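As a rough quantitative illustration of these beam parameters, the standard textbook relation for an ideal supersonic expansion of a monatomic gas, E ≈ (5/2) k_B T_nozzle, reproduces the numbers quoted above. The short Python sketch below uses this relation; the chosen nozzle temperature of 93 K is illustrative and all names are our own.

```python
import math

KB = 1.380649e-23      # Boltzmann constant, J K^-1
H = 6.62607015e-34     # Planck constant, J s
M_HE = 6.6464731e-27   # mass of 4He, kg

def beam_properties(t_nozzle):
    """Return (energy in meV, velocity in m/s, wavelength in nm) for an
    ideal supersonic expansion at nozzle temperature t_nozzle (K)."""
    E = 2.5 * KB * t_nozzle           # terminal beam energy, J
    v = math.sqrt(2 * E / M_HE)       # beam velocity, m s^-1
    lam = H / (M_HE * v)              # de Broglie wavelength, m
    return E / 1.602176634e-22, v, lam * 1e9

print(beam_properties(93))  # roughly 20 meV, ~1000 m/s, ~0.1 nm
```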
Inelastic helium scattering is illustrated in Fig. 3. Here the energy loss or gain through the surface scattering process is measured using time-of-flight (TOF). By varying the incident angle of the beam (and hence the in-plane wavevector component K), phonon dispersion curves can be measured. An example for graphene on Cu(111) can be found in Fig. 4. Moreover, low energy vibrations of adsorbed molecules such as frustrated translational and rotational modes can be measured below the energy range that is accessible with optical methods. 15 In general the phonon energy that can be probed in a scattering experiment is determined by the energy of the incident probe. HAS, with its incident energy of less than 0.1 eV, is the only technique that can probe surface phonons exclusively in the low energy regime. The upper limit probed can be tuned down by cooling of the nozzle as discussed above. The very low energy of the helium beam (4 orders of magnitude less than an electron at a similar wavelength) combined with the inertness also makes helium very attractive for probing insulating, fragile structures as well as 2D materials or materials where the interactions within a few atomic layers are of particular interest, such as van der Waals heterostructures and topological insulators.

Fig. 3 Inelastic helium scattering experiment. An incident helium pulse is visible at the left-hand side of the sample, with the initial velocity distribution indicated as a single blue peak. After scattering off the surface the helium beam has excited and annihilated phonons at the surface, leading to an energy loss and an energy gain respectively, visible as two additional peaks before and after the elastically scattered fraction in the middle.
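The conversion from a measured time of flight to an energy transfer, and from there to the parallel momentum transfer that locates a phonon event on a dispersion curve, can be sketched as follows. A fixed incident energy and a planar 45°/45° scattering geometry are assumed for the example numbers; all names are our own.

```python
import numpy as np

M_HE = 6.6464731e-27    # mass of 4He, kg
HBAR = 1.0545718e-34    # J s
MEV = 1.602176634e-22   # 1 meV in J

def energy_transfer(t_flight, L, E_i):
    """Energy transfer dE = E_f - E_i from the time of flight t_flight (s)
    over the sample-detector distance L (m)."""
    E_f = 0.5 * M_HE * (L / t_flight) ** 2
    return E_f - E_i

def parallel_momentum_transfer(E_i, dE, theta_i, theta_f):
    """In-plane momentum transfer dK = k_f sin(theta_f) - k_i sin(theta_i),
    with angles measured from the surface normal. Recording (dK, dE) pairs
    while scanning the incident angle traces out dispersion curves such as
    those in Fig. 4."""
    k_i = np.sqrt(2 * M_HE * E_i) / HBAR
    k_f = k_i * np.sqrt(1 + dE / E_i)     # energy conservation
    return k_f * np.sin(theta_f) - k_i * np.sin(theta_i)

E_i = 20 * MEV                                            # nominal 20 meV beam
dE = energy_transfer(t_flight=1.05e-3, L=1.0, E_i=E_i)    # phonon creation
dK = parallel_momentum_transfer(E_i, dE, np.radians(45), np.radians(45))
```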
3 Selected material properties particularly suited to be measured with helium scattering
We now turn to a series of examples of studies of material properties where the use of HAS is instrumental.
3.1 Bending rigidity and substrate coupling strength of 2D materials
In this section we show how HAS is the most versatile method for measuring the bending rigidity of 2D materials and the only method which can potentially measure the temperature dependence of the bending rigidity for a range of 2D materials, something which to the best of our knowledge has not been done so far. Furthermore we show that HAS is a unique method for precise measurements of the substrate coupling strength of 2D materials.
The mechanical properties of 2D materials are crucial for a number of applications, from biological membranes to flexible electronics. To design flexible electronic components that do not fracture when bent, it is important to know how flexible the different material layers are, relative to each other. This is expressed by the bending rigidity, κ, a measure of material resistance to deformation. It is particularly important to know how the bending rigidity varies with temperature, κ(T), to design flexible electronics components with a big enough temperature working range for specific applications, typically −40 °C to 85 °C for commercial electronics. 16 In classical mechanics κ can be derived for an amorphous membrane structure of thickness h using Young's modulus Y and Poisson's ratio σ as 17

κ = Y h^3 / (12(1 − σ^2));    (1)

the SI unit of κ is Pa m^3 = J, usually expressed in eV for nanomaterials. Note that in general for crystalline materials the elastic properties need to be expressed as a tensor rather than simple numbers. However, for hexagonal structures, the behaviour is similar to amorphous materials. 17 Most 2D materials are hexagonal. A relatively simple method for measuring Y and σ for 2D materials is to use an atomic force microscope to poke the surface with a well-defined force and measure the response (nanoindentation). 18 It should then be possible to determine κ from the formula above. However, this implies knowing h, which is difficult to determine for 2D materials, and it implies that the 2D materials behave classically, which they usually do not. To the best of our knowledge, the only 2D materials where the bending rigidity has been measured directly using methods other than HAS are graphene, bilayer graphene 19 and 2-5 layer MoS2. 20 All measurements were done at room temperature and involved advanced nano-engineering: the 2D materials were spanned over gaps in a drum-like structure with mechanical stressing and thus limited certainty as to the unperturbed values of rigidity. The measurements all had very large uncertainties. A further experimental value for the bending rigidity of graphene often cited is inferred from Raman spectroscopy measurements on bulk graphite. 21 In 2013 Amorim and Guinea presented an analytical expression for extracting the bending rigidity of a free-standing thin membrane (i.e., graphene, Gr) from a phonon dispersion curve for the perpendicular acoustic (ZA) phonon mode, obtained from a membrane weakly bound to a substrate: 22

ω_ZA^coupled(ΔK) = sqrt(ω_0^2 + (κ/ρ_2D) ΔK^4),    (2)

where ω_ZA^coupled is the angular phonon frequency, ΔK is the parallel wave vector, ρ_2D is the two-dimensional mass density and ħω_0 the binding energy with the substrate, with ω_0 given as

ω_0 = sqrt(g/ρ_2D),    (3)

where g is the coupling strength between the thin membrane and the substrate. Eqn (2) should also contain, under the square root, a term quadratic in ΔK that arises from the linear term in ΔK in the dispersion relation for the ZA mode of a free-standing thin film obeying fixed or periodic boundary conditions. 23 However, this term is negligible compared to the term in ω_0^2 and is usually omitted. In 2015 Al Taleb et al. applied eqn (2) as a new method for measuring the bending rigidity of 2D materials by means of HAS. 24 Since He beams used in HAS are typically 1-5 mm in diameter, the method provides information over a large sample area.
We illustrate the way in which both the bond strength and the bending rigidity are determined with HAS for the case of Gr/Cu(111). 24 Fig. 4 shows the acoustic phonon dispersion curves measured with HAS along the ΓM direction for two different Gr/Cu(111) samples. Phonon dispersion curves for free-standing Gr calculated from first principles 25 are also shown as dashed lines.
The ZA phonon mode is clearly visible. This is the mode that corresponds to the dispersion curve in eqn (2). The transverse acoustic (TA) mode is forbidden for planar scattering in the ΓM direction, whereas the low cross section for excitation of the longitudinal acoustic (LA) mode makes its detection quite difficult. First principles phonon calculations of a Gr/Cu interface predict a few meV shift of the ZA mode near the Γ point, 26 which is a direct measure of the Gr-Cu coupling strength according to eqn (3). A similar shift (of a different energy) was recently predicted for graphene on another weakly bound substrate, SiC. 27 This shift is clearly seen at ħω_0 ≈ 6 meV in Fig. 4. An overtone of this mode is also observed at ħω ≈ 12 meV. An unshifted dispersion curve is also present, which resembles the ZA mode of free-standing Gr. This is very likely due to the Rayleigh wave of the Cu(111) substrate, since at this wavevector the penetration depth of He atoms is large enough to sample it. 28 Fitting the ZA mode using eqn (2), it is possible to determine both g and κ. The best-fit (red curve in Fig. 4) leads to g = (5.7 ± 0.4) × 10^19 N m^−3 and κ = (1.30 ± 0.15) eV. The derived g is 2-3 times smaller than that reported for Gr/SiO2 interfaces, which is very reasonable. 29 The derived κ value is consistent with DFT calculations that predict values of κ in the range 1.20-1.61 eV. 30 HAS was recently used to obtain also the bending rigidity and coupling strength of a 2D silica bilayer weakly bound on Ru. 31 Furthermore the bending rigidity and coupling strength of graphene on sapphire 32 have been measured. The latter experiment illustrates how the defect density affects the bending rigidity of the graphene.
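The fit itself is a standard nonlinear least-squares problem. The sketch below fits eqn (2) with scipy to placeholder (ΔK, energy) points that were generated to be consistent with the Gr/Cu(111) best-fit values quoted above; it is our own illustration, not the analysis code of ref. 24.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO_2D = 7.6e-7          # 2D mass density of graphene, kg m^-2
HBAR = 1.0545718e-34     # J s
MEV = 1.602176634e-22    # 1 meV in J
EV = 1.602176634e-19     # 1 eV in J

def za_mode(dK, kappa_eV, g_1e19):
    """Energy (meV) of the coupled ZA mode, eqn (2), with omega_0^2 = g/rho;
    scaled fit parameters (eV and 10^19 N m^-3) keep the problem
    well-conditioned."""
    omega_sq = g_1e19 * 1e19 / RHO_2D + (kappa_eV * EV / RHO_2D) * dK ** 4
    return HBAR * np.sqrt(omega_sq) / MEV

dK = np.array([2.0, 4.0, 6.0, 8.0, 10.0]) * 1e9   # m^-1 (placeholders)
E = np.array([5.9, 7.9, 13.7, 22.8, 34.9])        # meV  (placeholders)

(kappa_eV, g_1e19), cov = curve_fit(za_mode, dK, E, p0=(1.0, 5.0))
print(f"kappa = {kappa_eV:.2f} eV, g = {g_1e19:.1f}e19 N m^-3")
# prints values close to kappa = 1.30 eV and g = 5.7e19 N m^-3
```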
The fact that measurements done on a 2D material weakly bound to a substrate can be used to extract the value of κ for the free-standing 2D material is a big experimental advantage which should make it possible to measure κ(T) for the free-standing material simply by varying the temperature of the substrate. So far no such temperature dependent measurements have been published. As mentioned at the beginning of this section it is particularly important to know how the bending rigidity changes with temperature to design flexible electronics components with a big enough temperature working range for specific applications. The theoretical values for the temperature dependence of the bending rigidity of various 2D materials are heavily contested in the literature. For graphene, several publications claim it will decrease with temperature whereas others predict that it will increase, see for example ref. 33 and 34. For bilayer graphene, there are also conflicting results suggesting both increase and decrease with temperature and deviations of more than two orders of magnitude, as summarised in ref. 19, see also ref. 35.
Another important point is to understand the behaviour of κ as a function of material thickness. How thick does a 2D material have to be to behave classically and follow eqn (1)? The answer is likely to differ for different material classes. First experiments on bilayer silica (SiO2) mentioned above 31 suggest that it already behaves classically, which is not the case for bilayer graphene.
Finally it should be mentioned that phonon dispersion curves are extremely sensitive to interatomic forces of adsorbed layers, including the interaction between adlayer and substrate atoms, also for non-weakly bound systems. A range of HAS measurements of graphene on metal surfaces provide a good example of how small changes in the substrate coupling strength modify the corresponding phonon dispersion curves. 36 Softening of optical modes and signatures of the substrate's Rayleigh wave are observed for strong graphene-substrate interactions, 37 while acoustic phonon modes resemble those of free-standing graphene for weakly interacting systems. 32 Moreover, phonon dispersion curves provide an excellent scenario to test the performance of current state-of-the-art calculations.
3.2 The electron-phonon coupling constant λ in the low energy range
In this section we show how HAS is ideally suited to measure the electron-phonon (e-ph) coupling constant λ (also known as the mass correction factor of superconductivity) exclusively in the low energy range (< 0.1 eV, tuneable) for 2D materials and van der Waals heterostructures. The energy range can be tuned to a desired maximum by changing the energy and/or the incident angle of the helium beam. As will be explained in more detail below, measurements of λ in the low energy range are of particular importance for understanding superconductivity in 2D materials.
The e-ph coupling constant λ came into importance in 1957 when Bardeen, Cooper and Schrieffer developed the first comprehensive theory of superconductivity. 38 They gave what is now known as a crude expression for the superconducting transition temperature in terms of λ, which was later developed into a more accurate expression by McMillan. 39 In bulk materials λ plays a role in all phenomena in which phonons interact with electrons. For bulk materials λ can be determined from heat capacity measurements, the linewidths of spectral lines emitted from bulk samples, heat transfer between electrons and phonons under nonequilibrium conditions, laser pump-probe measurements and many other experiments. 40 Since the early HAS measurements by Toennies and co-workers on metal surfaces, 41,42 the e-ph interaction was shown to have remarkable effects, unveiled by the discovery of a soft longitudinal resonance, now recognized as a ubiquitous feature of all metal surfaces. This discovery led to a radical change in the theory of inelastic HAS from a conducting surface: from the two-body collision model to the e-ph interaction model described below.
In recent years several 2D materials have been shown to be superconducting. A particularly prominent example is the demonstration by Jarillo-Herrero and co-workers in 2018 that two graphene sheets placed on top of each other on hexagonal boron nitride and twisted 1.1° relative to each other (magic-angle graphene) display superconductivity. 43 In 2019 it was shown that trilayer graphene (ABC type) on hexagonal boron nitride also shows signs of superconductivity. 44 Another class of 2D materials that displays superconductivity is the transition metal chalcogenides: among others, monolayer MoS2, 45 monolayer and bilayer WS2 46,47 and monolayer NbSe2. 48 However, the nature of superconductivity in several of these new 2D superconductors, in particular the relative contributions from e-ph coupling and electron correlation, is not at all understood. For example, there is an intense debate about the value of λ for magic-angle graphene. Some simulations indicate electron correlation is dominant and hence λ should be small (≪ 1), 43,49 whereas other studies suggest that the e-ph coupling is dominant for the superconductivity and λ could be as large as 1.0 [50][51][52][53] or even as large as 1.5. 54 Further, if the e-ph coupling is dominant, it is not clear if it is the higher or lower energy phonons that mediate the superconductivity. This lack of understanding of λ makes it difficult to decide on the best experimental path for designing new 2D materials that display superconductivity at higher temperatures. One problem has been that, while there are several ways to measure λ for bulk materials as discussed above, up to very recently there was no straightforward method for measuring λ directly for the low energy phonon regime. 203
A further challenge with ARPES for 2D material examination is that in some cases the substrate bands mask the 2D material bands. Furthermore, the interaction between a 2D material and the supporting substrate modifies the outermost electron density distribution of the 2D material, which will particularly affect the low energy e-ph coupling 23 (see also the final paragraph of Section 3.1). This illustrates how complex the superconducting challenge is: for 2D materials, λ is not necessarily a material constant but may depend on the interaction with the substrate underneath. This is supported by a paper on superconductivity in MoS2 published in 2020, where the superconductivity appears to be ''induced'' by the Pb substrate, 45 and another paper from the same year, which shows that the superconducting properties of magic-angle graphene improve significantly when the magic-angle graphene is placed on a monolayer of WSe2 instead of boron nitride. 57 Furthermore, a recent theoretical paper shows that for monolayer graphene, the main phonon mode involved in e-ph coupling in the π-band for moderate doping is one of the lower energy acoustic modes. 58 It appears that an informed design of new 2D materials, with the ultimate aim of achieving room temperature superconductivity, will require systematic measurements of λ in the low energy phonon regime for a broad range of 2D material systems.
3.2.1 Measuring λ with HAS. The potential energy function governing the interaction between a He atom and a surface during a collision is known to consist of a long-range attractive van der Waals contribution combined with a short-range repulsive part. The repulsive part, which actually reflects the He atoms, is due to the Pauli repulsion arising when the electron wave functions of the He atom begin to overlap with the outermost edge of the surface electron density. This repulsive part has been shown to be proportional to the rapidly decaying surface electron density outside the surface. 59 Thus the He atoms never come close to the atomic cores in the surface, as discussed also in Section 1; instead they sense the presence of those cores indirectly through the corrugations induced in the electron density.
It is also at the repulsive part of the potential that the He atoms sense the vibrations of the surface, i.e., the phonons. Since the 1980s it has been known that HAS is uniquely sensitive to phonon modes in the surface region, such as the Rayleigh mode or modes due to adsorbate layers. However, the He atoms do not directly sense the vibrational motions of the atomic cores; instead they measure the phonons of the electron density that are induced by the cores. In other words, inelastic He atom scattering excites phonon modes of the cores via the e-ph interaction. This is shown schematically in Fig. 5.
This process was theoretically quantified in 2011, when it was shown that the He atom scattering intensity associated with excitation of a surface phonon, having parallel momentum ħΔK and mode number n, is directly proportional to the corresponding mode component of the e-ph coupling constant, λ_ΔK,n. 28,60 The e-ph coupling constant λ is given by the average over the mode components, λ = Σ_{ΔK,n} λ_{ΔK,n}/N, where N is the total number of modes. 61 The He atom scattering inelastic intensity I_ΔK,n for a specific phonon mode is, schematically (kinematic prefactors involving the incident He atom energy E_i and the final energy E_f are omitted), 28

I_ΔK,n ∝ |T_{k_f,k_i}|² n_BE(ω,T) exp{−2W(k_f,k_i,T)} λ_{ΔK,n},

where T_{k_f,k_i} is the transition matrix element determined from the interaction potential, n_BE is the Bose-Einstein function, ħω is the phonon energy, and exp{−2W(k_f,k_i,T)} is the Debye-Waller factor.
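As a minimal numerical illustration of these definitions (a sketch with invented numbers, not the full expression of ref. 28; the transition matrix element and kinematic prefactors are left out):

```python
import numpy as np

kB = 8.617333e-5  # Boltzmann constant in eV/K

def bose_einstein(hbar_omega_eV, T):
    # Bose-Einstein occupation of a phonon of energy hbar*omega at temperature T
    return 1.0 / (np.exp(hbar_omega_eV / (kB * T)) - 1.0)

# Illustrative mode-resolved coupling constants lambda_{DK,n} on a coarse
# (momentum transfer x branch) grid; in a real analysis these would come
# from mode-resolved inelastic HAS intensities.
rng = np.random.default_rng(0)
lam_modes = rng.uniform(0.0, 0.2, size=(50, 3))  # 50 DK values, 3 branches

# lambda is the average over all N mode components.
lam = lam_modes.mean()
print(f"lambda = {lam:.3f} (average over {lam_modes.size} modes)")

# Relative single-phonon creation intensity for one mode (energy 10 meV),
# keeping only the factors named in the text: Bose-Einstein occupation,
# Debye-Waller attenuation and the mode component of lambda.
hbar_omega, T, two_W = 0.010, 120.0, 1.5  # eV, K, dimensionless
I_rel = (bose_einstein(hbar_omega, T) + 1.0) * np.exp(-two_W) * lam_modes[0, 0]
print(f"relative intensity ~ {I_rel:.3e}")
```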
The Debye-Waller factor multiplies all quantum mechanical intensities, which include diffraction peaks, single-phonon peaks, diffuse elastic intensity due to defects and adsorbates, etc. It describes the attenuation of all quantum features due to the phonons that are excited in the collision. Its argument 2W(k_f,k_i,T) is proportional to the mean square phonon displacement; hence for temperatures above the zero-point-motion region it is approximately proportional to the temperature T.
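Because 2W grows linearly with T at high temperature, any quantum feature gives a straight line in a plot of ln I versus T; as shown in the next paragraph, the slope of that line can be converted into λ. A minimal fitting sketch on synthetic data (the material-dependent conversion prefactor of eqn (5) is a hypothetical placeholder here, not the actual constant):

```python
import numpy as np

# Synthetic specular HAS intensities: ln I(T) is linear in T in the
# high-temperature limit because 2W(T) is proportional to lambda * T.
T = np.linspace(100, 400, 16)   # surface temperature, K
true_slope = -4.0e-3            # 1/K, invented for illustration
rng = np.random.default_rng(1)
lnI = true_slope * T + 0.02 * rng.standard_normal(T.size)

slope = np.polyfit(T, lnI, 1)[0]  # d ln I / dT = -d(2W)/dT

def slope_to_lambda(slope, prefactor):
    # Convert -d(2W)/dT into lambda; `prefactor` stands in for the
    # combination of N(E_F), m, m_e*, E_iz, phi and k_B entering eqn (5).
    return -slope / prefactor

lam = slope_to_lambda(slope, prefactor=8.0e-3)  # hypothetical prefactor, 1/K
print(f"fitted slope = {slope:.2e} 1/K  ->  lambda ~ {lam:.2f}")
```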
Since 2W(k_f,k_i,T) depends on an average over all phonon modes, it is intuitively reasonable to expect that it could also be expressed as a function of the e-ph coupling constant, since λ is likewise an average over all modes. Recently it has been demonstrated that the Debye-Waller exponent is proportional to λ, and for the special case of the specular diffraction peak it can be written in a simple closed form, eqn (5) of ref. 62, in terms of N(E_F), the electron density of states at the Fermi surface, the He atomic mass m, the effective electron mass m*_e, the incident He atom energy E_iz due to motion normal to the surface, the work function φ, and the Boltzmann constant k_B. Eqn (5) shows that the temperature dependence of the Debye-Waller exponent, which is easily measured, can be used to extract values of λ. For simple metals the effective mass m*_e is known, and a reasonable approximation to the density of states is that of a free electron gas with the appropriate valence number. Using eqn (5) with the free electron gas density of states, Table 1 shows, in the next-to-last column, the values of λ = λ_HAS that are obtained for all simple metals for which the temperature dependence of the Debye-Waller factor has been measured. The values obtained from HAS are remarkably similar to the values of λ from other sources shown in the last column, which are almost all measured for the bulk metal crystals. 62 HAS measurements have recently been used to obtain values for λ in the low energy range also for degenerate semiconductors (PtTe2, PdTe2) 63,64 and the transition metal chalcogenide MoS2, 65 see also the specific science case on topological materials, Section 3.7.1.

3.2.2 Specific science case: λ for magic-angle graphene. We finish the discussion of λ by addressing the issue of magic-angle graphene in more detail. As discussed at the beginning of Section 3.2, the value of λ for magic-angle graphene is a topic of intense debate in the literature. ARPES measurements on magic-angle graphene (twisted bilayer graphene) were published in 2020; 66 however, no value for λ was obtained. The flat bands which are thought to be responsible for the superconductivity 43 (together with the complex back folding of the Brillouin zone) make it extremely challenging to extract the e-ph ''kink'' using ARPES. Furthermore, to analyse the e-ph ''kink'' (i.e., the renormalisation of the electron band due to the interaction) in an ARPES dataset, it must be possible to describe the unrenormalised band accurately. 67 For monolayer graphene, the π-band is famously linear close to the Fermi level, and therefore this is relatively straightforward. For twisted bilayer graphene, there is a complex back folding and the π-band becomes replicated and gapped. 66 A further problem is that e-ph calculations of the renormalisation are only feasible for especially simple unit cells (such as monolayer graphene), 27 but the twist in magic-angle graphene leads to a moiré pattern which increases the size of the unit cell by orders of magnitude. For HAS such matters are not a problem, and it is thus clear that HAS is particularly suited to measure λ for magic-angle graphene.
The surface boson peak
In this section we show how HAS is the only method which can be used to directly measure the boson peak on a surface for glassy materials. This implies that HAS is also the only method that can be used to measure directly the boson peak on 2D materials. The boson peak as a 2D phenomenon has been predicted, 68 and recently observed in a model system of a highly jammed two-dimensional granular material, 69 but not yet experimentally measured in a 2D material.
The Debye model predicts that the vibrational density of states (VDOS) of a material is proportional to the frequency squared in the low energy range. However, in many materials the spectrum departs from this law, and thus, when the VDOS is normalized by the frequency squared, a peak (or rather a hump) occurs, i.e., an excess in the phonon density of states with a corresponding excess in heat capacity. This peak/hump is known as the boson peak. It has been observed in the bulk of numerous materials using optical, 70 neutron 71 and thermal 72 techniques. The boson peak has long been considered a feature of disordered materials such as glasses, where it is typically observed at energies in the THz range (1 THz ≈ 4 meV), but recently it has also been observed in single crystals. 73 A theoretical explanation for this was provided last year. 74 Recent results on polymer glasses show that the boson peak frequency is proportional to √G, where G is the macroscopic shear modulus. 72 Given the importance of the heat capacity for a wide range of material applications, it is clearly very important to understand and potentially tailor the magnitude and/or position of the boson peak also for surfaces and 2D materials. As mentioned above, theoretical predictions suggest that the boson peak should be present in 2D materials; 68 however, none of the standard methods used to measure boson peaks can be applied to surfaces and 2D materials because they penetrate too far into the material. The only method that can be used to probe the boson peak on surfaces in the THz (meV) range relevant for glassy materials is the strictly surface sensitive HAS.
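A minimal sketch of how the boson peak is located in practice, using an invented model VDOS (a Debye ω² background plus a Gaussian excess of modes around 4 meV; all parameters are illustrative):

```python
import numpy as np

omega = np.linspace(0.1, 10.0, 500)                  # phonon energy, meV
g_debye = omega**2 / (omega**2).max()                # Debye background
g_excess = 0.3 * np.exp(-((omega - 4.0) / 1.0)**2)   # glassy excess modes
g = g_debye + g_excess

# The boson peak is the hump in the reduced VDOS g(omega)/omega^2.
g_reduced = g / omega**2
mask = omega > 1.0                                   # avoid the low-energy end
omega_bp = omega[mask][np.argmax(g_reduced[mask])]
print(f"boson peak at ~{omega_bp:.1f} meV")          # slightly below 4 meV
                                                     # due to 1/omega^2 weighting
```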
A few years back, the first and so far only measurements of the boson peak on a surface were performed using HAS on vitreous silica, where the peak was found at an energy of around 1 THz (4 meV). As mentioned above, the boson peak is typically observed at energies in the THz (meV) range, exactly the energy range that can be probed by HAS. The first publication showed that the surface boson peak lies in the predicted energy range. 75 A second publication showed that the surface boson peak on vitreous silica displays a strong temperature dependence, blueshifting with increasing temperature, 76 see also ref. 77 and 78.
Polymer chain surface dynamics under nano-confinement
In this section we argue that HAS is a useful complementary method for investigating the dynamical properties of polymer thin films. Polymers represent a very important class of glassy materials; they are usually ''soft'' and insulating, which means that they can be challenging to investigate with other techniques, in particular with regard to their surface properties. HAS has proven a very useful probe for studying the vibrational dynamics of polymer surfaces, revealing how the surface dynamics change due to nano-confinement as the film thickness approaches the radius of gyration of the polymer chains, 79-81 and how the surface vibrational dynamics change when going from the amorphous to the crystalline phase. 82 HAS measurements provide a precise window into polymer surface dynamics, complementing other spectroscopic or X-ray scattering methods while revealing a clear picture of surface dynamics isolated from bulk signatures. 82
Nanoscale surface topography
In this section we show how HAS can in some cases provide important information about surface topography that cannot be obtained with other techniques.
As discussed in Sections 1 and 3.2.1, HAS is unique in that the atoms scatter off the outermost electron density distribution of the surface. For this reason a close-packed crystalline metal surface, with its delocalized outer valence band, appears flat, with negligible diffraction peaks, in a HAS experiment. 6 For more corrugated periodic structures, including adsorbate structures, the HAS diffraction peaks can provide very accurate information about the characteristic lateral repeat distance. This information can be extracted using the standard reciprocal lattice formalism. In cases where the diffraction pattern can potentially be explained by contributions from domains, a combination with a direct imaging technique is necessary to determine the surface structure. One of the largest surface reconstructions ever observed on a bulk substrate, a 5.55 ± 0.07 nm reconstruction on annealed α-quartz, was recently identified in a combined HAS and atomic force microscopy study. 83 HAS can also be used to extract information about the vertical step height by looking at the conditions for positive and negative interference effects in the perpendicular k-vector. This can also be used to monitor thin film growth modes 8,84-86 and real time relaxation effects, by monitoring changes in the helium signal after the deposition has been completed, 87 see also the specific science case in Section 3.7.2. As mentioned in Section 1, HAS is particularly sensitive to light adsorbates, including hydrogen, which has been used in a large number of fundamental structure and dynamics studies.

Table 1 (caption): The e-ph coupling constant λ_HAS as derived from the temperature dependence of the HAS elastic diffraction intensity for all simple metals that have been measured is shown in the next-to-last column. These values are compared with values of λ from other sources, mainly bulk measurements, in the last column. For reference information on the experimental data and other input parameters, see ref. 62.
In principle HAS can also provide very precise information about the surface topography of periodic structures. The surface corrugation is reflected in the relative intensities of the HAS diffraction peaks (the form factor). However, because inelastic scattering also plays a role, knowledge about the interaction potential between the helium atom and the surface is required to extract the surface corrugation. Such a potential can be obtained from first-principles (ab initio) calculations, or from simple interaction and geometrical models with fitting parameters. The most accurate and convenient dynamical theory is the close-coupling formalism, which is exact once numerical convergence is reached. In this formalism the different diffraction channels are coupled to one another, and the number of channels depends strongly on the surface corrugation. Single- and multi-phonon events need to be calculated in order to obtain the attenuated diffraction intensities. 89 This method has been used, among others, to determine the surface corrugation of semi-metals. 90 Recently, an extension of this theory which takes into account the e-ph coupling has also been proposed. 91 It should also be noted that the position, shape and width of selective adsorption resonances, see Fig. 1, provide a powerful route for an experimental determination of the attractive part of the atom-surface interaction potential 92 (see also Section 3.6.2). An article describing in detail the methods for obtaining atom-surface interaction potentials from HAS experiments can be found as part of this special issue. 92 It is appropriate to compare the capabilities of HAS to the standard tools for measuring nanoscale topography: scanning tunnelling microscopy (STM) and atomic force microscopy (AFM). Firstly, it should be noted that both of these techniques are more versatile than HAS in the sense that they provide real space images and thus do not require the investigated samples to contain periodic features.
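To make the reciprocal-space analysis concrete, the sketch below converts a hypothetical first-order diffraction peak position into a lateral repeat distance using the standard kinematics ΔK = k_i(sin θ_f − sin θ_i); the beam energy and angles are invented for illustration:

```python
import numpy as np

hbar = 1.054571817e-34  # J s
m_He = 6.6464731e-27    # kg, mass of 4He
meV = 1.602176634e-22   # J per meV

def k_helium(E_meV):
    # Wavevector (1/angstrom) of a He atom with kinetic energy E (meV)
    return np.sqrt(2.0 * m_He * E_meV * meV) / hbar * 1e-10

# Hypothetical measurement: with a 20 meV beam at 45 deg incidence, a
# first-order diffraction peak is observed at this final angle.
E_beam, theta_i = 20.0, np.radians(45.0)
theta_f = np.radians(51.3)                     # invented peak position
ki = k_helium(E_beam)                          # ~6.2 1/angstrom
dK = ki * (np.sin(theta_f) - np.sin(theta_i))  # parallel momentum transfer
a = 2.0 * np.pi / dK                           # lateral repeat distance
print(f"k_i = {ki:.2f} 1/A, dK = {dK:.3f} 1/A, a = {a:.1f} A")
```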
STM probes a combination of the surface topography and the local density of states (LDOS). Which LDOS are sampled depends on the bias voltage. 93 In many cases this is a great strength of STM, because the flexibility in bias voltage can provide additional, important information, but in some cases it can be valuable to separate out the contribution from the surface topography; a couple of examples are provided in Sections 3.5.1 and 3.5.2 below. It should also be noted that STM requires the substrate to be conducting.
AFM probes the interaction potential between a tip and the surface. It is a very powerful technique which works regardless of the substrate conductivity. Topographic information can be obtained with any type of AFM imaging mode, but atomic resolution generally requires dynamic AFM. 94 In particular, non-contact AFM has succeeded in atomic resolution imaging; 95 however, as in STM, the contrast may be convoluted with other effects. For example, it has been demonstrated that the type of atoms that form the tip apex determines the contrast, which has led to, e.g., hydrogen adsorbed on an oxide being imaged inversely, as holes in the surface. 96

3.5.1 Specific science case: the structure of 2D silica. 2D silica (bilayer silica) is a novel, transferable 2D material, which has garnered interest as a model glass, for supporting catalytic systems and as a promising 2D insulator layer. It can be made both as crystalline and as vitreous films. For a recent review see ref. 97. A density functional theory (DFT) model of 2D silica suggests that the topmost layer consists of a network of oxygen atoms. This could not be confirmed using STM studies alone, since depending on the bias voltage either the oxygen atoms or the silicon atoms appeared on top. However, from the STM studies the characteristic O-O distance of (0.26 ± 0.02) nm could be obtained. This could then be compared to HAS rocking scans, which display a clear vitreous ring with a characteristic length of (0.25 ± 0.01) nm. Since HAS probes the outermost electron density distribution, the combination of STM and HAS could thus be used to confirm the DFT model for the structure of 2D silica. 31

3.5.2 Specific science case: ripple corrugation of Gr/Ru(0001). The ripple corrugation of graphene on ruthenium has been studied intensively both theoretically and experimentally using STM, surface X-ray diffraction (XRD), low-energy electron diffraction and theoretical calculations. XRD measurements display a (25 × 25) periodicity, 98 which differs from the (12 × 12) periodicity measured by STM. 99 This discrepancy could be resolved as a distortion of the first Ru layer under the graphene, which is picked up by XRD. A combination of ultrahigh-resolution STM images and HAS diffraction data could eventually show that the graphene lattice is not only rippled, it is also rotated 5° relative to the Ru substrate. 100 Furthermore, the corrugation of the ripples was investigated. The apparent amplitude of the ripple corrugation in STM decreases from 0.11 nm to 0.05 nm when the tunnelling bias goes from −0.8 to 0.8 V. 101 The corrugation amplitude measured by HAS is 0.015 nm. DFT including van der Waals (vdW) interactions could later reproduce the change in ripple corrugation with tunnelling bias observed by STM, but not the corrugation amplitude measured by HAS. 102

3.5.3 Specific science cases: H-positions, proton order and water layers. Due to the large cross section of HAS for isolated adsorbates (including hydrogen, as described in Section 3.5), the positions and structure of hydrogen atoms and adsorbed water layers can be readily determined. 103-105 This includes the hydrogenation of a graphene surface, 106 while H-positions are hard to determine with other methods (e.g., hydrogen is a weak scatterer for electrons), which also present a severe risk of damaging the H-layer. 107
In a study of highly proton-ordered water structures on oxygen pre-covered Ru(0001), it could be shown that the atomic oxygen and the oxygen from the water form a (2 × 2) surface reconstruction, which, however, is broken by the hydrogen to give a (2 × 4) surface reconstruction: while LEED measured a (2 × 2) superstructure, HAS measured a (2 × 4) superstructure. 108
HeSE: a unique tool for studying surface dynamics
Helium spin-echo (HeSE) is a recent variation of the HAS technique 109,110 which manipulates the helium wavepackets using the nuclear spin of 3He atoms to enable dynamical measurements of a completely different kind. Essentially, each helium wavepacket is split into two spin components, which are separated by a time t_SE using a magnetic field, before they scatter in turn from the surface being studied. The two scattered components are then recombined, and by averaging over the beam a surface correlation measurement is obtained as a function of the time t_SE, see Fig. 6.
A schematic of the experimental setup can be found in Fig. 7. Measurements typically have the general form shown in Fig. 8, where phonons and other vibrations show up as oscillations, and aperiodic changes, such as diffusion, show up as an overall decay.
The result is a very powerful surface correlation measurement in reciprocal space. The technique is sensitive on timescales between less than a picosecond and nanoseconds, and on lengthscales between Ångströms and many tens of nanometres. A very wide range of important physical processes occur within this measurement window (a more detailed comparison of experimental techniques is given in ref. 110), and in particular there are simply no other techniques that can probe equilibrium processes at surfaces in this regime. The nearest comparable technique, neutron spin-echo, is only weakly surface sensitive, so it is limited to certain very specific systems. 112 HeSE has therefore become the tool of choice for studying many surface processes and has already revealed a range of unique and otherwise unavailable physical insights. Moreover, due to the low energy of the probing particle beam, delicate adsorbates such as water can be studied without disrupting the motion 113 or dissociating the molecule (see also Section 3.5.3).
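A minimal sketch of how such a correlation measurement is analysed in the simplest case, assuming a single-exponential loss of correlation (synthetic polarisation data; all parameters invented):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic HeSE polarisation: an exponentially decaying intermediate
# scattering function, as produced by aperiodic adsorbate motion such as
# diffusion, on top of a static offset from immobile scatterers.
t = np.linspace(0.1, 600.0, 40)  # spin-echo time, ps
rng = np.random.default_rng(2)
P = 0.7 * np.exp(-t / 150.0) + 0.2 + 0.01 * rng.standard_normal(t.size)

def isf(t, A, alpha, C):
    # Single-exponential ISF model: A * exp(-alpha * t) + C
    return A * np.exp(-alpha * t) + C

(A, alpha, C), _ = curve_fit(isf, t, P, p0=(0.5, 1e-2, 0.1))
print(f"dephasing rate alpha = {alpha:.4f} 1/ps")  # ~1/150 here
```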
3.6.1 Mobility of atoms and molecules - rates and mechanisms. One of the core applications of HeSE is measuring the rate and mechanism of motion of atoms and molecules on surfaces. Such measurements are deceptively difficult: although many techniques attempt to measure surface diffusion, 110 few can do so reliably, and no other technique can examine the detailed mechanisms of motion. Simple theoretical models of surface motion often assume activated hopping, which is a gross simplification of reality. HeSE measurements enable both rates and mechanisms to be examined in detail. 110 Activation energies can be obtained extremely accurately, for example to within 2 meV, 114 and by using long length-scale measurements both tracer and collective diffusion coefficients can be obtained. While microscopy may provide information in the low temperature regime, only HeSE can follow the diffusive process at high and industrially relevant temperatures, i.e., studies on microscopic length scales and on pico- to nanosecond timescales while the system is in true thermal equilibrium. By obtaining correlation measurements at a range of scattering momentum transfers, the entire mechanism of motion can be determined with great precision. It is possible to clearly distinguish jumping from gliding (for example, for ring molecules on graphite 115 ), as well as more complex motion such as flapping (in the case of thiophene 116 ), reorientation (pentacene moves on ''rails'' 117 ), rotational jumps 118 and quantum tunnelling. 119 No other experimental technique has access to such a broad range of surface dynamical phenomena with such precision.
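As a sketch of how jumping is distinguished from gliding, the standard Chudley-Elliott jump-diffusion model (a textbook model, not specific to the studies cited above) gives a dephasing rate that is periodic in the momentum transfer ΔK, in contrast to the ΔK² law of ideal Brownian motion; the parameters below are illustrative only:

```python
import numpy as np

# Chudley-Elliott dephasing rate for hopping between adjacent sites a
# distance l apart: alpha(DK) = (1/tau) * (1 - cos(DK * l)).
l, tau = 2.55, 50.0              # jump length (angstrom), residence time (ps)
DK = np.linspace(0.0, 3.0, 200)  # momentum transfer (1/angstrom)
alpha_jump = (1.0 - np.cos(DK * l)) / tau
alpha_brownian = (l**2 / (2.0 * tau)) * DK**2  # same small-DK limit

# alpha_jump returns to zero whenever DK * l = 2*pi*n, i.e., at the
# diffraction condition, a periodicity that signals discrete hopping,
# whereas continuous motion keeps growing roughly as DK^2.
print(f"first zero of alpha_jump at DK = {2 * np.pi / l:.2f} 1/angstrom")
```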
3.6.2 Potential energy surfaces, interaction potentials and benchmarks for theory. HeSE data have been widely interpreted within the Langevin dynamics model, which enables potential energy surfaces representing the ''frozen'' adsorbate-substrate interaction to be determined very accurately. 110 The ability to generate such potentials experimentally offers a unique opportunity to test first-principles models for the same quantities. For example, comparing such potentials for weakly physisorbed species has offered a way of examining the quality of different dispersion correction schemes for DFT approaches. 120 Interaction potentials between adsorbed species also have a dramatic influence on surface processes, causing correlations in motion and ultimately driving adsorption structures, self-organisation and islanding. HeSE enables these interactions to be studied directly, and measurements have revealed dramatic deviations from widely accepted theory. For example, CO adsorption on Pt and Cu surfaces had been understood in terms of strong pairwise interactions, whereas HeSE revealed that such interactions are not present and that a mean field change must instead be taking place. 121 Without this essential piece of information coming from HeSE, the true behaviour of these systems was impossible to establish.

Fig. 6 (caption): Two wavepackets scatter from the surface with a time difference t_SE, allowing the motion of molecules on the surface to be interrogated through the loss in correlation, measured through the polarisation of the beam. The top inset shows a typical measurement, with the linewidth caused by a small Doppler broadening upon scattering from moving adsorbates and thus corresponding to the timescale of the molecular movement. 111 Since the process is based on self-interference of each 3He atom, the polarisation loss depends only on the change in energy and not on the beam energy itself. 109

Fig. 7 (caption): Schematic showing the principal parts of the Cambridge spin-echo scattering apparatus. An unpolarised beam of 3He is generated from a supersonic beam source at the top left in a fixed direction. The beam is then passed through a polariser, and the aligned nuclear spins are rotated by the incoming solenoid (precession coil) before scattering from the sample surface. The scattered beam passes back through the identical but reversed field in the outgoing solenoid before being spin-analysed and counted in the detector.
3.6.3 Atomic scale friction and rate theory. The dynamics of adsorbed atoms and molecules are fundamentally controlled by the rates of energy transfer between the adsorbate and the substrate, and between different parts of the adsorbate. Through surface correlation measurements, HeSE offers a unique way to measure rates of energy transfer, and thus the strength of energetic coupling. 113,122,123 The method has been used to measure atomic scale frictional coupling constants, 124 to explain the absolute rate of motion in complex systems, 125 and to test quantum rate theories. 119

3.6.4 Ultra low energy vibrational properties. As well as providing correlation measurements, HeSE data can be Fourier transformed to provide ''traditional'' energy-resolved spectroscopic measurements with extremely high energy resolution (down to the neV range). The technique is well suited to measuring very low energy vibrational modes, such as the acoustic phonons responsible for thermal conductivity in two-dimensional materials, or the modes present in high-mass or weakly interacting overlayers (weak spring constants). In particular, the technique can measure the width of such modes accurately, 126 offering a way of measuring the lifetime of vibrational states, and thus the quality and long range order present in thin films, which is otherwise a considerable challenge. In fact, the ''wavelength transfer matrix'' approach 127,128 enables the complete mapping between incident and scattered states to be determined.
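A minimal sketch of this Fourier-transform route from a time-domain HeSE signal to an energy-resolved spectrum (synthetic damped oscillation; all parameters invented):

```python
import numpy as np

# A measurement I(t_SE) containing one low-energy vibrational mode: a
# damped oscillation whose frequency gives the mode energy and whose
# damping gives the linewidth, i.e., the vibrational lifetime.
dt, N = 0.5, 1024                  # sampling step (ps) and number of points
t = np.arange(N) * dt
f0 = 0.24                          # mode frequency, THz (1/ps)
signal = np.cos(2.0 * np.pi * f0 * t) * np.exp(-t / 80.0)

spec = np.abs(np.fft.rfft(signal))
freq = np.fft.rfftfreq(N, d=dt)    # THz
E = 4.135667696 * freq             # meV (Planck constant h = 4.1357 meV ps)
print(f"peak at ~{E[np.argmax(spec)]:.2f} meV")  # ~1 meV for f0 = 0.24 THz
```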
3.6.5 Specific science case: catalytic surfaces. In this section we make the case that HeSE is a crucial tool for heterogeneous catalysis research, because it is the only experimental technique that can measure surface diffusion with both atomic precision and picosecond time resolution, as described in Section 3.6. Heterogeneous catalysis is an essential process for the world's economy and its sustainable growth. The catalyst industry is estimated to generate an annual turnover of about 15 × 10^9 US dollars 129 and employs about 6.3 × 10^6 people worldwide. It has the potential to dramatically reduce energy consumption in the chemical industry and the production of greenhouse gases, thus having an important impact on sustainability, with the huge socio-economic benefits that such changes will bring. While significant progress has been made in recent decades in understanding heterogeneous catalysis, many elementary steps remain unresolved. Very important steps include the diffusion of the chemically reacting adsorbates on the catalytic substrate in order to find the reaction site, 130 followed by the reorientation of molecules for reaction, as well as the nature of the forces between species that control these steps when multiple adsorbates are present. The challenge has been that only within the last decade has a technique been available which can measure these steps with the required picosecond time resolution: HeSE.
Experiments on the diffusion of adsorbates on catalytic substrates using HeSE, as described earlier and in ref. 110 and 131, have opened the possibility of assessing adsorbate mobility with high spatial resolution in all directions of the substrate, as well as molecular reorientation. 116 These experiments allow us to gather information on the topography of the catalytic surfaces, the interactions with the substrate atoms, and the motion of the adsorbed particles that participate in the catalytic reaction. For example, the discovery of the uncorrelated motion of CO molecules on Pt(111), where strong pairwise interactions were previously thought to dominate, 121 is a particularly clear example of the need for such data.
However, only a small number of relevant substrates have been investigated with HeSE so far, and these measurements often raise important questions about the fundamental behaviour of adsorbates. For instance, the barrier to diffusion of CO on a copper surface is predicted to be three times higher along the ⟨110⟩ direction than along the ⟨100⟩ direction, 132 whereas the spin-echo experiments of ref. 131 seem to indicate that the barriers should be similar. Understanding the interplay between adsorbate interaction potentials and adsorbate-substrate energy exchange is likely to be fundamental to resolving such questions.
More recent work also makes the need for further HeSE measurements obvious. Recent first-principles calculations 133 have, for the very first time, shown that quantum effects are important even above room temperature. The theory developed in that work allowed the calculation of diffusion rates for H and H2 on Pd(111), see Fig. 9, yielding significant differences between the two. These results are in quantitative agreement with similar experimental results for diffusion on Pt(111), but can themselves only be verified with new HeSE data.
The combination of further HeSE experiments on relevant surfaces with further theoretical developments will enable us to discover and unravel the fundamental steps in catalytic processes. The result will provide crucial knowledge that can facilitate the intelligent design of new catalysts.
Specific science cases for combined HAS/HeSE investigations
In this section we present two important classes of materials where HAS and HeSE are instrumental in understanding their properties.
3.7.1 Topological materials. In this section we show the importance of HAS and HeSE for obtaining a full understanding of the structural and surface dynamical properties of topological insulators, both providing crucial information that cannot be obtained with other techniques. The examples presented provide information about the surface phonon dispersion and the electron-phonon coupling, see Section 3.2. Furthermore, insight can be obtained into the driving mechanisms behind phase transitions, in particular in charge density wave systems. 135 Topological insulators (TIs) fall under the term quantum materials, whose electronic properties are determined by many interacting degrees of freedom, such as lattice vibrations, electron orbitals and spin, strictly connected by the laws of quantum mechanics. TIs belong to the class of Dirac materials, where the unifying framework is an electronic surface state with a linear energy-momentum relationship, a so-called Dirac cone (left inset in Fig. 10). 136 In typical Dirac materials such as graphene and TIs, low-energy fermionic excitations behave as massless Dirac particles. Moreover, TIs such as Bi2Se3 or Bi2Te3 exhibit an insulating gap in the bulk while the surface is electrically conducting, 137 and the electronic surface state exhibits no spin degeneracy due to the large spin-orbit splitting (see, e.g., ref. 138 and 139). Implications include surface dominated electronic transport and spin-polarised charge transport with intrinsically reduced backscattering. 138 In other words, spin-dependent surface currents should experience no resistance. Such materials hold great promise for future use in quantum technology, such as quantum information transfer and storage.
Three-dimensional TIs are commonly composed of layered hexagonal structures that are bound by weak vdW forces (see Fig. 10). Aside from the interest in spintronic devices based on TIs, surface dominated transport is a major route towards applications, for instance in quantum sensing, by exploiting electronic changes upon adsorption 140-142 for the realisation of miniature sensors capable of monitoring single atoms or molecules. The electronic properties of TIs have been shown to be tunable by the adsorption of atomic or molecular species that can serve as n- or p-type doping agents. 140,141 However, at finite temperatures, the ideal zero-Kelvin behaviour of TIs is perturbed by e-ph coupling and energy dissipation into the bulk, giving rise to energy losses. 143-145 Already at this point it is clear that information about e-ph coupling and collective charge excitations at surfaces, thin film quantum wells and nano-structures is of paramount importance for the understanding and application-relevant properties of topological materials. Hence we need experimental techniques that help to gain a deeper fundamental understanding of (a) the electronic structure and electronic transport, (b) the phonon dispersion and the e-ph coupling constant λ (see Section 3.2.1), as well as (c) the effect of adsorption on the modification of the properties of topological materials.
ARPES addresses the electronic band structure as discussed in Section 3.2.1, while STM, AFM and electron microscopy provide a real space analysis of the surface structure. HAS, on the other hand, is based on the interaction of an atomic matter wave with a periodic crystal surface, where the lattice constants and the wavelength are of the same order of magnitude. Compared to electron or X-ray diffraction, atoms of the same wavelength deliver much less energy to the surface and, due to the scattering mechanism, do not penetrate into the sample but are scattered purely by the electronic charge distribution at the sample surface, as discussed in the introduction. Moreover, HAS has the advantage that TI samples are not exposed to any intense ultraviolet illumination, which has been reported to trigger energetic shifts of the electronic bands. 146 Having said this, the relation to microscopy and microanalysis differs from the classical real space analysis mentioned above in that the results are obtained in a reciprocal space picture. 9,147 In this way, periodicity is measured with the highest precision, see also Section 3.5, which allows, for example, an exact determination of surface phase transitions. Hofmann et al. 148 recently showed a transition to a dimerisation-like reconstruction in the one-dimensional atomic chains on the Bi(114) surface at low temperatures. While STM images give a nice spatial picture and suggest the idea of a Peierls-like distortion, complementary HAS measurements clearly show a change of periodicity through the appearance of additional diffraction peaks at low temperature, halfway between the peaks of the ''normal'' phase.
Due to the heavy elements in typical TIs such as Bi2Se3, the energies of (acoustic) surface lattice vibrations are typically in the low meV region, and thus measurements of the surface phonon dispersion require high energy resolution as well as surface sensitive probes. HAS also provides access to the e-ph coupling strength at surfaces 9,149 (see Section 3.2.1), a quantity that determines energy losses in surface electronic transport. While scattering from defects and other lattice imperfections can possibly be controlled by the quality and careful growth of crystals and films, phonons will be excited in even the most perfect crystals. Consequently, e-ph coupling should be the dominant scattering mechanism for surface electronic states at finite temperatures. TIs such as Bi2Te3 are also classic thermoelectric materials 150,151 with a large Seebeck coefficient and, as such, have long been used in thermoelectric refrigeration. Since the thermoelectric performance is closely related to the phonon dispersion and to details of the electronic structure, information on the phonon dispersion and the e-ph coupling is essential to fully understand their thermoelectric properties. 151-153 Hence, HAS provides a sensitive probe to determine the surface phonon dispersion and the energy dissipation processes, in terms of the e-ph coupling constant λ, on the surfaces of these materials. The experimental data obtained for various TIs 92,145,154-159 promise to evolve into a more general picture of the surface dynamics and the atom-surface interaction of these peculiar surfaces. The e-ph coupling, as determined for several topological insulators belonging to the class of bismuth chalcogenides, suggests a dominant contribution to λ of the surface quantum well states over the Dirac electrons. 145 Investigations of the archetypal TI Bi2Te3(111) 155 show a prominent surface acoustic mode that may have important implications in layered and nanoscale devices. Moreover, thanks to the high resolution experimental data, it was shown in comparison with ab initio calculations that the inclusion of vdW interactions is necessary for an exact theoretical description of application-relevant issues such as the thermal conductivity of layered structures in general. 155 The influence of e-ph coupling also shows up through the softening of phonon modes at specific values of momentum transfer, a phenomenon known as a Kohn anomaly. 144,160,161 Whether Kohn anomalies are possible at a phonon momentum that connects opposite sides of a topological Dirac cone is still an open question, as this would require a phonon-induced transition involving a spin-flip. 144,160,162,163 The latter may become possible by creating or annihilating a phonon which carries one quantum of angular momentum. While Kohn anomalies have been reported in the lower part of the phonon spectrum of TIs, 155,160,162 recent studies have shown that the major contribution to e-ph coupling in these materials comes from polar optical modes. 145,155,158 The ability of HAS to determine the surface-averaged e-ph coupling constant λ directly from the thermal attenuation of HAS spectra (see Section 3.2) has the advantage that a wide range of experimental conditions can be used for the evaluation, compared to the limited range where e-ph effects are visible in ARPES. 145 Furthermore, λ can be measured exclusively for the low energy range, as discussed in Section 3.2.
Since HAS and HeSE excite phonons via phonon-induced surface charge density oscillations, another consequence is that these probes may in principle also excite low-energy collective electronic excitations such as surface phasons, 147 surface acoustic plasmons in the THz and sub-THz domains, 9,158 charge density waves 147,148 and electron-hole excitations. 9,147 The observation of collective electronic excitations such as phasons and surface acoustic plasmons in interesting 2D conducting materials like topological insulators and graphene 147,164 makes HAS a tool for the investigation of THz plasmonics, with great relevance for sensors and other nano-technologies.
Furthermore, HAS can detect subsurface phonons as deep as the range of the e-ph interaction, allowing investigations of the phonon dispersion curves and their e-ph interaction not only at surfaces but also in ultra-thin films. 28,149,165 The possibility of observing the dispersion and knowing the e-ph coupling of waves localised at the interface of supported ultra-thin films, 86 subsurface layers 166 or optical branches in Bi2Te3 and Bi2Se3 155 opens the prospect of developing an interface or sub-surface phononics, thus avoiding contamination problems which would affect surface acoustic wave devices in the THz domain.
Finally, as described in Section 3.6, HeSE is capable of delivering detailed information on the whole energy landscape of adsorbate-surface systems by observing the way an adsorbate moves around in the potential at the surface, resolving diffusion processes on timescales from ns to sub-ps, 117 which is beyond the scope of other techniques. This information is particularly valuable for possible sensing applications of TIs as well as for the assembly of molecular qubits 167,168 on technologically relevant surfaces. For example, in a study of the diffusion of water on the TI Bi2Te3(111), 113 the mechanisms underlying the molecular motion of water were identified and, by comparison with first-principles calculations, aspects of its adsorption geometry were determined, as well as the energy landscape for the motion. A qualitative assessment of the rates of energy transfer between the water molecules and the TI on which they move was also made. The latter is discussed in terms of the nanoscale friction affecting the motion, for which a TI is particularly interesting since certain friction mechanisms are disallowed by the topological character of the substrate.
3.7.2 Superconducting radio frequency materials. Superconducting particle accelerators and free electron lasers (FELs) currently depend on the performance of niobium superconducting radio frequency (SRF) cavities. Next-generation accelerators will depend on the development of higher performance alloys such as Nb3Sn that will have better quality factors under extreme accelerating fields than Nb. Unlike for Nb, cavities cannot be formed out of Nb3Sn directly; current fabrication methods include Sn deposition on Nb cavities. This deposition ultimately results in a thin film of Nb3Sn, where the microscopic structural characteristics of the thin film contribute directly to the cavity performance in high fields. A thorough understanding of the nanoscale growth of these films will aid significantly in the advancement of accelerator science. HAS is uniquely positioned to assist in formulating the needed growth procedures, as it can directly assess surface-localized crystalline structures without undesired scattering signals arising from the selvedge and underlying bulk regions. Moreover, such information can be obtained at surface temperatures spanning an extraordinarily wide range, from cryogenic conditions up to refractory metal processing temperatures approaching 2000 K, giving a remarkably clear picture of the interface at the elevated temperatures used during intermetallic deposition. Such studies can be performed non-destructively and without the possibility of perturbative effects due to charged particle bombardment. Another key feature of HAS is its extreme sensitivity to surface adsorbates, see also Section 3.5. This sensitivity enables investigations into surface nucleation and growth at the elevated temperatures needed for cavity processing - key information needed for the refinement of growth chemistries for forthcoming SRF materials. HAS studies of surface phonon dispersion relations, e-ph coupling, alloy formation and evolving surface roughness complement the structural information gathered from in situ He diffraction. A recent study has shown, for example, the need to consider the presence of ordered oxygen layers on Nb under conditions important for alloy growth, prompting experimental and theoretical efforts to move beyond the simple view of purely metallic phases during alloy formation. 169 Moreover, inelastic HAS can be used to map out the surface phonon structure of these interfaces, which, when combined with modern DFT dynamics simulations, can assist in developing accurate bonding models of these technologically important multicomponent interfaces. One can also envision diffusion studies in which quasielastic HAS measurements provide further insights into intermetallic atom mobility at the surface. It is clear that the ensemble of the aforementioned HAS studies can inform, in several unique ways, current and future SRF materials research, directly contributing to ongoing worldwide development efforts for next-generation accelerators and FELs.
Applying helium scattering instruments for molecular-surface scattering and other atom-surface scattering experiments
In this section we discuss molecular-surface and atom-surface scattering experiments which can be carried out using a HAS apparatus. Essentially all that is required is to change the gas bottle, modify the detector settings and possibly, for scattering in the classical regime, heat the nozzle.
3.8.1 Molecular surface scattering experiments. The interaction of gas phase molecules with solid surfaces is a key property of materials in a wide range of research fields and applications; examples include industrial heterogeneous catalysis, discussed also in the previous section, atmospheric chemistry, thin film deposition, nanotechnology fabrication and many others. 170 To complete our understanding of the molecule-surface properties of materials, it is important to study directly the collision of gas phase molecules with the surface. Unlike helium, when a molecule hits a surface it has a certain probability to react upon impact. These interactions lie at the heart of surface based chemistry, and valuable information about the molecule-surface chemistry can be extracted by monitoring the fraction of the molecular beam which sticks to the surface, 171 as well as by studying the angular distribution, time-of-flight and quantum-state populations of the beam which continues towards the detector. 170 A different scenario is direct quantum scattering of the molecules. In this case a diffraction pattern can be measured, providing information about the structure of the surface, the dynamics of the collision and the molecule-surface potentials. 172 These diffraction experiments are directly analogous to helium scattering and can be performed with the same type of apparatus. Small molecule scattering can be used to extract precise information on molecule-surface interaction potentials by measuring elastic diffraction, rotationally inelastic scattering, and bound state selective adsorption resonant scattering. Experiments with H2, D2 and HD (including rotationally state-selected beams of para-H2 and ortho-D2), when combined with quantum scattering calculations, can lead to an accurate determination of molecule-surface physisorption potentials, including spatially anisotropic terms. 173-176 Another illustrative example is a diffraction study of hydrogen from a platinum surface, where an experimental molecular diffraction pattern was compared with state-of-the-art potentials to scrutinize the highly useful Born-Oppenheimer approximation in hydrogen-metal interactions. 177 Molecular diffraction experiments have also been performed on various isotopes of hydrogen as well as on other light molecules like methane. 178-180 A review of molecular hydrogen scattering experiments from metal surfaces can be found in ref. 181. Similar to an atom, the collision of a molecule with a surface is sensitive to both the long and short range interaction potential. Unlike an atom, the rotational motion of a molecule changes the interaction with the surface and needs to be taken into account in order to understand molecule-surface collisions. Significant efforts are being made to include the effect of molecular rotations in theoretical studies and to obtain reliable multi-dimensional interaction potential surfaces. To support these efforts, it is vital to measure experimentally the effect molecular rotations have on molecule-surface collisions. One way of studying this is by measuring inelastic rotational scattering events, where the excitation and de-excitation of rotational energy quanta can be seen as distinct diffraction peaks, 179,180 measured by photo-exciting the scattered beam, 182 or assessed by comparing the scattering of ortho-para spin isomers. 183
Unlike the rotational quantum state, J, the rotational orientation of a molecule, characterised by the rotational projection quantum number, m_J, has been generally inaccessible to experiments. The few studies which have been performed were mainly restricted to photo-excited and paramagnetic molecules, for which sophisticated experimental techniques have been developed. 184,185 Recently, a new type of magnetic manipulation experiment has been developed, where a modified helium-spin-echo apparatus is used to control and measure the rotational orientation of a ground state molecule. 186 Two types of magnetic manipulation underlie this technique. One involves using a magnetic hexapole lens to focus certain quantum populations and defocus others, 187,188 and the second involves passing the beam through a homogeneous electromagnet where coherent control of the molecular quantum states is achieved and the rotational projection quantum state of the molecules which reach the surface can be both altered and determined. 186 A unique aspect of these rotationally controlled molecular scattering experiments is that, unlike regular diffraction measurements, they can be used to determine the full scattering matrix empirically, i.e., to measure all the quantum state-to-state probabilities and phase changes which characterise the gas-surface collision. A first measurement of a scattering matrix was demonstrated recently for hydrogen molecules scattering from a LiF surface. 189 The results of this study demonstrated that this simple salt surface acts as both a rotational orientation polarizer and a rotational orientation analyser for hydrogen molecules, and the extracted scattering matrix elements provide what is arguably the most sensitive experimental benchmark for the further development of theoretical molecule-surface interaction potentials.
3.8.2 Atom scattering specific science case: isotope enrichment and purification. It was recently demonstrated, using Ne, that precision gas-surface diffractive scattering can be used as a new method of isotope enrichment and purification. 190 Isotope separation first came into focus during the Manhattan Project, but is now used in a range of applications including isotopic labeling in the life sciences and radioisotopes in medicine. Isotope enrichment is also a topic in microelectronics, where research is ongoing on highly enriched 28Si wafers, which have been shown to have increased thermal conductivity 191 and improved electron transport characteristics 192 compared to standard silicon wafers.
3.8.3 Atom-surface scattering in the classical regime. Typical He atom-surface scattering experiments are carried out in the quantum mechanical regime. In the scattered spectra, the observed quantum features such as diffraction, single-phonon peaks, diffuse elastic peaks, bound state resonances, etc., provide detailed information about the atom-surface interaction, as described through numerous examples in this paper. However, experiments can also be done in the classical regime. This regime usually involves some combination of higher incident energies, larger mass projectiles and higher temperatures, and is marked by the fact that classical mechanical theory can be used to describe the scattered spectra. 193-198 The projectiles most often used are the larger mass rare gases such as Ne, Ar and Kr, but He atoms can also exhibit classical scattering at high incident energies of around 100 meV or more. 199 The higher energies can be achieved by heating the nozzle (see Section 2). In classical collisions, the Debye-Waller factor suppresses all quantum features, leaving scattered spectra with much broader peak features that are typically governed by the excitation of large numbers of phonons.
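The crossover can be estimated with the standard rough high-temperature expression 2W ≈ 24 m E_iz T/(M k_B Θ_D²) (the Beeby well-depth correction is omitted here); the sketch below, with illustrative copper-like parameters, shows why a heavy, fast projectile scatters classically while a thermal He beam retains its quantum features:

```python
import numpy as np

kB = 1.380649e-23       # J/K
u = 1.66053907e-27      # kg, atomic mass unit
meV = 1.602176634e-22   # J per meV

def two_W(m_amu, E_iz_meV, T, M_amu, theta_D):
    # Rough high-temperature Debye-Waller exponent for atom-surface
    # scattering: 2W ~ 24 m E_iz T / (M kB theta_D^2), well depth neglected.
    return (24.0 * m_amu * u * E_iz_meV * meV * T
            / (M_amu * u * kB * theta_D**2))

# He (4 amu, 20 meV) vs Ar (40 amu, 100 meV) on a copper-like surface
# (M = 63.5 amu, Debye temperature 343 K) at room temperature:
for name, m, E in [("He, 20 meV", 4.0, 20.0), ("Ar, 100 meV", 40.0, 100.0)]:
    w = two_W(m, E, T=300.0, M_amu=63.5, theta_D=343.0)
    print(f"{name}: 2W = {w:.1f}, quantum features x exp(-2W) = {np.exp(-w):.1e}")
```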
Even though the spectra do not exhibit the detailed features seen in the quantum regime, a significant amount of information about the surface can still be extracted through analysis of the experimental data using classical mechanics. This includes, among others, determination of the root mean square surface corrugation amplitude, 200 isotope sensitivity in the surface composition 201 and surface segregation in alloy mixtures of liquid metals. 202
Conclusion and outlook
In this paper we show how helium scattering, through its unique combination of low energy, charge neutrality, inertness and strict surface sensitivity, can complement other scattering techniques based on photons (X-rays), electrons, ions and neutrons. We present selected examples of material properties uniquely suited to be measured using either helium atom scattering (HAS) or helium spin-echo scattering (HeSE). We also present examples of molecular-surface scattering experiments and atom-surface scattering experiments in the classical regime, which can be performed using HAS and HeSE instruments. We emphasize that the examples of material properties provided in this paper are by no means a complete list; furthermore, several topics such as seeded beams and neutral helium microscopy have been left out due to space limitations. We also do not discuss instrumental development, which of course is ongoing.
The overarching purpose of this paper is to show that helium scattering is a powerful and unique technique, with a very large number of interesting and important experiments waiting to be done. However, if these experiments are to be carried out, helium scattering must be made readily available to the materials research community. This can be realised by creating a helium scattering facility, co-located and co-administered with a synchrotron or neutron facility (or both). This would have the additional great advantage of enabling the use of two complementary scattering techniques in parallel on the same sample.
Conflicts of interest
There are no conflicts to declare.
"year": 2021,
"sha1": "d0c70823811ca20d5213eb88403be9392741ed11",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/cp/d0cp05833e",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "189d48d6e96d8731f7ac5ff5aaccb42a70fae408",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Topological exploration of artificial neuronal network dynamics
One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike train distances systematically improves the performance of our method.
Introduction
A major objective in neuroscience is to understand how populations of interconnected neurons perform computations and process information. It is believed that the dynamics of a neuronal network are indicative of the computations it can perform. Its dynamics are affected by how the neurons are physically connected and by the activity history of the neurons. Understanding this spatiotemporal organization of network dynamics is essential for developing a comprehensive view of brain information processing mechanisms, the functional connectome. Two neurons can be considered "functionally connected" if their dynamics are similar or if one appears highly likely to spike causally after the other. The same notion of functional connectivity can be considered also on a macroscopic level, where one can study the causal relationships between brain regions. The notion can also be formalized for similarly structured systems from outside of neuroscience. Techniques like the one we present in this paper thus have broad applicability. Furthermore, it is well known that certain neuronal systems can play multiple roles, characterized by different patterns of activity. For example, neurons in the thalamus have tonic or phasic behavior depending on the afferent signals and neuromodulators they receive or on different phases of the sleep cycle [38]. Another example is the hippocampus, which plays a role both in memory and in navigation. Researchers have also observed distinct rhythms in EEG recordings in awake and in sleeping rats [5].
An understanding of network dynamics is also of medical importance, as many neurological disorders are characterized by abnormal global or local activity. During epileptic seizures, for instance, EEG recordings show an increase in the amplitude of neural oscillations [13]. In Alzheimer's disease, one observes a shift in the power spectrum toward lower frequencies and a decrease in the coherence of fast rhythms [20].
Partly because of the clinical importance of neural dynamics, various methods have already been developed to automatically detect abnormal regimes, for example those related to epileptic seizures [2,36,21]. The best ones rely on artificial neural networks. Here we propose a novel approach using techniques from topological data analysis, a part of applied mathematics.
Traditionally, neuroscientists have analyzed functional networks using pairwise neuron statistics and graph theory. Such methods often neglect certain global structures that may be present in the dynamics. The analysis of network dynamics using alternative methods from topological data analysis has recently enjoyed success [34,9,11,15,8,35]. These methods provide information about connectedness, adjacency, and global structure such as holes (of various dimension) in a dataset¹. In particular, persistent homology detects holes or cavities and quantifies how robust these are with respect to a threshold variable related to the dynamics of the system.
Several interesting properties of neuronal network structure and function have been revealed through these recent developments. For example, persistent homology has been applied to detect and characterize changes in the functional connectome in disorders such as autism and attention deficit disorder [25], in the Parkinson mouse model [19], and after injection of a hallucinogenic substance [32]. It has also been employed to describe brain function during different tasks, such as multimodal and unimodal speech recognition [22]. Moreover, the homology of a digital reconstruction of the rat cortical microcircuit revealed that the brain substructure is non-random, that it is substantially different from various other null-models, and that its activity tends to concentrate in areas with greater local organization [33]. In the same article, homology was shown to distinguish the dynamics arising from different stimuli injected into the digital reconstruction. It is also interesting to note that the mammalian brain seems to encode topological information, as there are strong indications that the place cells [30] in the hippocampus build a topological, rather than geometric, representation of the animal's surroundings [10]. The latter article also shows how such a map can be encoded by a spiking network of neurons.
To this day, the few articles in which persistent homology has been applied to in vivo [15,34] or synthetic [35] spike data have all used spike train (Pearson) correlation as the distance between neurons. The use of correlations requires one to make specific assumptions about neural coding that may not be reasonable or relevant in all research areas. There exists a wide variety of spike train distances and similarity metrics, and the restrictiveness of their assumptions and the kind of information they encode can vary significantly. As we demonstrate in this paper, the appropriate notion of spike train distance to use depends on context, and it can also be beneficial to combine several of them.
In this paper, we simulate activity in an artificial network of neurons to generate spiking data and build weighted graphs based on various spike train similarities. These graphs are then transformed into topological spaces, which are analyzed using persistent homology. Finally, we extract simple features from topological invariants and use them to train a classifier for predicting the global network dynamics. These topological features are fundamentally different from those that might arise from graph-theoretic or other classical methods, as they take into account relations within triplets and not just pairs of neurons. Our results show that it is possible to perfectly predict network regimes from a few features extracted from the persistent homology. The trained classifier also predicts with high accuracy the dynamics of networks other than that on which it was trained. Finally, our results illustrate the importance of employing several spike train similarities, as the best performance was achieved using a combination of them.
Results
In this section, we summarize our work, namely the simulation of network dynamics, the processing of spike trains, the topological analysis and feature selection, and the classification method, before presenting our results. A more detailed explanation of the method can be found in section 4.
Simulation of a downscaled Brunel network
We consider simulated activity in the Brunel network [4], a simple and well-studied in silico model network of sparsely connected excitatory and inhibitory neurons. For computational reasons, we use a downscaled version of the network as described below.
The Brunel network
The Brunel network consists of two homogeneous sub-populations of excitatory (here indexed by E) and inhibitory (indexed by I) neurons modelled by a current-based leaky integrate-and-fire (LIF) model.
Each of the N neurons has a membrane potential V_m(t) whose dynamics are described by a differential equation. Once the membrane potential reaches a threshold value V_θ, the neuron sends a spike through its synapses to its post-synaptic neurons, and its potential resets to a value V_r. The synapses are δ-current synapses, i.e., after a delay D, each pre-synaptic spike induces a positive (respectively, negative) jump in the membrane potential of the post-synaptic neurons if the pre-synaptic neuron is excitatory (respectively, inhibitory). The excitatory sub-population is four times larger than the inhibitory one (N_E = 4N_I) in accordance with cortical estimations [28], but their synapses are relatively weaker. Formally, if the excitatory synapses induce an increase of membrane potential of J, the inhibitory ones will induce a decrease of −gJ for some g > 1. Every neuron receives K inputs coming from a fixed proportion P of the neurons in each sub-population, i.e., K = P(N_E + N_I). Furthermore, each neuron receives C_E = P N_E external excitatory inputs from an independent Poisson population (of size C_E N) with fixed rate ν_ext. The relative synaptic efficiency (g) and the external population rate (ν_ext) are the free parameters with respect to which we study network dynamics, once we have fixed the other model parameters, in particular J, P, and D. We adopt the convention of Brunel's original article [4] and express ν_ext as a multiple of the minimal external rate necessary to trigger spiking in the neurons without recurrent connections, denoted ν_θ.
Because computing persistent homology is expensive for large and dense spaces, which tend to arise from large and dense networks, the number N of neurons was reduced from 12,500 in [4] to 2,500. Such a downscaling of the network while N/K is kept constant will result in an increase in the correlation between the neurons [37], more salient oscillations in the network dynamics [18], and potentially a loss in the diversity of network dynamics. To prevent these undesirable effects, a correction to the synaptic strength J was applied, and the external population was modified according to [18]. Specifically, the synaptic strength J was adjusted to keep JK constant, and the rate of the external population ν_ext was increased. An external inhibitory population with appropriate rate was also introduced to preserve the mean and variance of the external inputs. The external rate correction is relevant only when neurons are expected to show irregular firing, i.e., in the regimes where inhibition dominates (g > 4).
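To make the setup concrete, here is a minimal Brian2 sketch of a downscaled Brunel-type network of this kind. All parameter values are illustrative placeholders (the paper's actual values are in Tables 2 and 3, not reproduced here), and the external drive is collapsed into a single equivalent Poisson input.

from brian2 import (NeuronGroup, Synapses, PoissonInput, SpikeMonitor,
                    mV, ms, second, Hz, run)

N_E, N_I = 2000, 500            # downscaled sizes, N_E = 4*N_I
P = 0.1                         # connection probability (assumed)
g = 5.0                         # relative inhibitory strength
J = 0.4 * mV                    # post-downscaling synaptic jump (assumed)
D = 1.5 * ms                    # synaptic delay (assumed)
tau = 20 * ms                   # membrane time constant (assumed)
V_theta, V_r = 20 * mV, 10 * mV # threshold and reset (assumed)
nu_ext = 15000 * Hz             # aggregate external rate (assumed)

eqs = 'dv/dt = -v / tau : volt (unless refractory)'
neurons = NeuronGroup(N_E + N_I, eqs, threshold='v > V_theta',
                      reset='v = V_r', refractory=2 * ms, method='exact')
exc, inh = neurons[:N_E], neurons[N_E:]

syn_e = Synapses(exc, neurons, on_pre='v += J', delay=D)       # +J jump
syn_e.connect(p=P)
syn_i = Synapses(inh, neurons, on_pre='v += -g * J', delay=D)  # -gJ jump
syn_i.connect(p=P)

# All external Poisson drive collapsed into one equivalent input;
# valid as a sketch because superposed Poisson processes are Poisson.
ext = PoissonInput(neurons, 'v', N=1, rate=nu_ext, weight=J)

spikes = SpikeMonitor(neurons)  # records the spike times used below
run(20 * second)                # 20 s of biological time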
We generated three versions of the Brunel network to validate our method across different networks.
A complete description of the final model and parameter sets, following the formalism in the field [29], is presented in Tables 2 and 3.
Simulations performed
Each network was simulated for 20 seconds of biological time with 28 different values of the pairs of free parameters g and ν_ext/ν_θ. These pairs form a rectangular grid in the parameter space, with g taking values from 2 to 8 and ν_ext/ν_θ taking values from 1 to 4. Since the network is connected according to a random model, each simulation was repeated 10 times with different network instantiations, resulting in a total of 280 simulations for each network version. We recorded the spiking times of all neurons, as well as the overall population firing rate, for all the simulations.
Four distinct activity regimes, shown in Figure 1, were identified by inspecting the simulations:
• SR: a regime characterized by synchronized neurons behaving as high-frequency oscillators, clustered in a few groups, similar to the synchronous regular regime (Figure 1A),
• SI: a regime characterized by a slow oscillatory global pattern and synchronous irregular firing of individual neurons (Figure 1B),
• AI: a regime characterized by asynchronous irregular firing of individual neurons (Figure 1C),
• Alt: a regime characterized by neurons alternating between periods of silence and periods of rapid firing (Figure 1D).
Note that the Alt regime is not present in the full-size network. This is not an issue for us, however, since our goal is to discriminate between different regimes, not to understand the Brunel network per se.
For each of the three networks, we visually identified the network regime for every pair of parameters (g, ν_ext/ν_θ). The result is shown in Figure 2. The simulations in which none of the neurons fired were removed from the analysis (40 simulations for versions 2 and 3). Note that the first network (version 1) does not exhibit the Alt regime, while versions 2 and 3 do not exhibit the AI regime. This issue is addressed in section 2.4.
Spike train similarities
We used three different measures of spike train similarity to compare the recorded neuron activity in the networks.
One is the widely used Pearson correlation. It is often employed in analyzing spiking data because it has been shown to encode particular information that is not present in the firing rate alone. For example, in the auditory cortex of the marmoset, Pearson correlation encodes the purity of sounds [12]. It can also be used to infer connectivity or extract information about network function [6]. However, it is tied to the correlation population coding hypothesis [31], and thus may not be relevant to the problem at hand. We therefore also employed two complementary measures: SPIKE-synchronicity [23] and SPIKE-distance [24]. Both are exploratory measures relying on an adaptive time window to detect cofired spikes and involve a pairwise similarity measure of spike trains. Conceptually, the size of the window depends on the local firing rate of the two neurons under consideration. If one of the neurons has a high local firing rate, then the time window will be short, while if both neurons have low local firing rates, the time window will be longer. SPIKE-synchronicity is the fraction of cofired spikes according to this adaptive window, while SPIKE-distance is the average over time of a dissimilarity profile between spike trains. See equations (7)-(13) in [24] for details.
We computed Pearson correlations by time-binning the spike trains with a 2 ms time window and a binning error correction, as described in section 4. The SPIKE measures were computed using the Python package PySpike [27].
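A sketch of this computation is given below, using the PySpike package named above together with NumPy for the binned Pearson correlation; the helper function is ours, and the binning error correction of section 4 is omitted for brevity.

import numpy as np
import pyspike as spk

def similarity_matrices(spike_times, t_end, bin_ms=2.0):
    """spike_times: list of arrays of spike times (ms) per neuron."""
    trains = [spk.SpikeTrain(st, edges=(0.0, t_end)) for st in spike_times]
    spike_dist = spk.spike_distance_matrix(trains)   # SPIKE-distance
    spike_sync = spk.spike_sync_matrix(trains)       # SPIKE-synchronicity

    # Pearson correlation of 2 ms binned spike counts.
    edges = np.arange(0.0, t_end + bin_ms, bin_ms)
    binned = np.array([np.histogram(st, bins=edges)[0] for st in spike_times])
    corr = np.corrcoef(binned)

    # Apply x -> 1 - x to the similarities so that identical trains
    # get 0, matching the filtration convention used in the analysis.
    return 1.0 - corr, 1.0 - spike_sync, spike_dist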
Persistent homology
In topology, a branch of mathematics, one works with very general objects called spaces. Spaces have a notion of "nearness", but in general lack more geometric structure such as distances or angles, as well as familiar algebraic structure. The unaccustomed reader may still gain intuition about what is meant by a space by thinking of geometric objects. One should still keep in mind that the spaces we consider may be defined entirely intrinsically, without any reference to some ambient Euclidean space.
Algebraic topology, then, concerns itself with describing a space X in terms of algebraic invariants that capture global properties of the space. Examples include the Betti numbers, which can be thought of as the numbers of components and of n-dimensional unfilled cavities in the space. As these notions can be defined intrinsically, i.e., without any reference to how the space is embedded in Euclidean space, they are useful in analyzing spaces arising from abstract data where no such embedding can be constructed in a principled way. We consider here only the zeroth Betti number b_0(X), which is the number of connected components, and the first Betti number b_1(X), which is the number of one-dimensional unfilled loops. In the special case of graphs (which is not the case we consider), these are precisely the number of connected components and the number of cycles, respectively.
The spaces we study here will be built from spiking data, and we are interested in how these algebraic invariants change as a function of a spike train similarity threshold. We therefore build a filtration, a multi-scale sequence of spaces depending on a threshold, and compute persistent homology, a multi-scale invariant that captures Betti numbers (which are then often also referred to as Betti curves to reflect their scale-/threshold-dependent nature). For details, see section 4. More background information can be found in the survey [14] and its bibliography.
As is common in topological data analysis, we follow the convention that edges with low weights are to be considered "most important" and enter first in the filtration (see section 4 for the definition). The correlation and SPIKE-synchronicity values were transformed through the function x → 1 − x so that they range from 0 to 1, with 0 being the value assigned to a pair of identical spike trains (i.e., we work with dissimilarity measures). We compute persistent homology in dimensions 0 and 1.
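This step can be sketched as follows. The paper does not name its persistent homology software, so ripser is used here purely for illustration; its Vietoris-Rips filtration of a dissimilarity matrix coincides with the flag-complex construction described in section 4, where a simplex enters at the maximum weight of its edges.

import numpy as np
from ripser import ripser

def betti_curves(dissimilarity, thresholds, maxdim=1):
    """Betti curves of the flag-complex filtration of a dissimilarity matrix."""
    dgms = ripser(dissimilarity, maxdim=maxdim,
                  distance_matrix=True)['dgms']
    curves = []
    for dgm in dgms:  # one persistence diagram per dimension 0, 1, ...
        births, deaths = dgm[:, 0], dgm[:, 1]
        # The Betti number at threshold t is the number of
        # (birth, death) intervals that are alive at t.
        curve = np.array([np.sum((births <= t) & (t < deaths))
                          for t in thresholds])
        curves.append(curve)
    return curves  # [Betti-0 curve, Betti-1 curve]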
Classifying network dynamics
We used the output of persistent homology of the space built from pairwise spike train similarities as an input feature for machine learning in order to discern information about the global network dynamics.
From every filtration, four simple features were extracted:
• from the Betti-0 curve, the area under the curve and the filtration value at which it starts to decrease,
• from the Betti-1 curve, the global maximum and the area under the curve.
As a result, a total of 12 features were extracted from each simulation (four features per filtration, three filtrations from the three spike train similarity measures). For some simulations in the SR regime, all the pairwise similarities attained the maximal value, resulting in a space with no topological features and a constantly zero Betti-1 curve. The filtration value at which the curve starts to decrease was defined to be 0 in this case. Before doing any classification, potentially good features were selected by plotting all the features against each other for the samples coming from network version 1. Six features were selected by visual inspection because they were deemed to produce non-overlapping clusters. These features are the area under the Betti-0 curve for the three similarity measures, the area under the Betti-1 curve for correlation and SPIKE-synchronicity, and the maximum of the Betti-1 curve for the SPIKE-distance. These features were among the ones with the highest mutual information in the three networks, although their ranking varied between the networks. Section 4.4, in particular Figure 7, provides further details of the feature selection process.
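A sketch of this per-filtration feature extraction, with the Betti curves sampled on a common grid of thresholds (function and variable names are ours):

import numpy as np

def extract_features(b0, b1, thresholds):
    """Four features from the Betti-0 curve b0 and Betti-1 curve b1."""
    auc0 = np.trapz(b0, thresholds)          # area under Betti-0 curve
    # Filtration value at which the Betti-0 curve starts to decrease;
    # defined to be 0 when the curve never decreases (degenerate case).
    drops = np.nonzero(np.diff(b0) < 0)[0]
    t_drop = thresholds[drops[0] + 1] if drops.size else 0.0
    max1 = b1.max() if b1.size else 0.0      # global maximum of Betti-1
    auc1 = np.trapz(b1, thresholds)          # area under Betti-1 curve
    return np.array([auc0, t_drop, max1, auc1])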
Four different training sets were used for classification, three of which were composed of randomly selected samples (90%) coming from a specific network version, while the last set was the union of the three other sets. For each training set, an L2-regularized support vector machine (SVM) classifier was trained to identify the different regimes. The classifier was composed of four sub-classifiers, each of which had to distinguish one particular regime from the others. The final decision was computed in a one-vs-rest manner. The regularizing hyperparameter was selected with a 10-fold cross-validation.
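One possible scikit-learn rendering of this classification stage is sketched below; the paper's exact tooling and hyperparameter grid are not stated, so both are assumptions, with the RBF kernel taken from section 4.

from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

def train_regime_classifier(X_train, y_train):
    # Standardize features, then fit one RBF SVM per regime
    # combined with a one-vs-rest decision.
    pipe = make_pipeline(StandardScaler(),
                         OneVsRestClassifier(SVC(kernel='rbf')))
    # Regularization parameter chosen by 10-fold cross-validation;
    # the candidate grid below is an assumption.
    grid = GridSearchCV(pipe,
                        {'onevsrestclassifier__estimator__C':
                         [0.1, 1, 10, 100]},
                        cv=10, scoring='accuracy')
    grid.fit(X_train, y_train)
    return grid.best_estimator_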
When assessing the performance of the classifier for network version k using multi-class accuracy, we validated it on three test sets: one composed of the 10% of the samples from version k that had not been used for training, and two containing all the valid samples from one of the other network versions. A sample was considered valid if it was labeled with a regime that was present in the training sets. For example, when the classifier trained on version 1 was tested on version 2, the samples labeled Alt were ignored, since no version 1 networks exhibit the Alt behavior. The performance accuracy and the numbers of valid samples are reported in Table 1. The trained classifiers all achieved perfect accuracy (100%) on the network version they were trained on, indicating that the topological features extracted are sufficient to perfectly discriminate the regimes of the training network. Moreover, they also generalized well to other versions, with 94.26% accuracy on average, suggesting that the topological features extracted are consistent across the three network versions.
Additionally, we combined all the samples from the three networks and used a 90%-10% training-testing sample repartition, attaining perfect classification ("All versions" row in Table 1) of the four regimes. This provides the complementary information that the Alt and AI regimes are also distinguishable from one another, since none of the network versions can exhibit both regimes.
Table 1. Classification accuracy for each pair of training and testing sets. The training sets comprised 90% of the samples coming from a specific network (version 1, 2, or 3), or 90% of all samples (all versions). The testing sets contained the remaining samples from the network version that were not used for training, i.e., the remaining 10%. The number of samples in every testing set is reported in parentheses.
Finally, we checked whether the persistent homology-derived features provide complementary information when based on different similarity measures, and thus whether it can be advantageous to use several of them together for classification. The same classification experiments were repeated using either all the computed features or only the features coming from one of the similarity measures. The classifier accuracies were compared with those from the previous computations (Figure 4). Although the accuracies obtained using features coming from a single similarity measure were satisfactory (on average 79.10%, 92.35%, and 80.73% accuracy for correlation, SPIKE-synchronicity, and SPIKE-distance, respectively), better performance was consistently attained by a combination of measures. Moreover, selection of potentially good features yields the best results (94.94% and 96.63% average accuracy with the "all" set and "select" set, respectively).
Discussion
In this paper we analyzed the dynamics of spiking networks of neurons using features derived from persistent homology. We generated three versions of a simple artificial network of LIF neurons (a downscaled version of the Brunel network) by modifying connectivity density, synaptic delay, and synaptic strength. Activity in the networks was then simulated with 28 pairs of the free parameters (external population rate and relative synaptic efficiency values). Across all the simulations, four regimes of activity were observed based on the pattern of the global population firing rate and the individual neuron spiking times.
For each simulation, we computed three pairwise spike train similarity measures: Pearson correlation, SPIKE-synchronicity, and SPIKE-distance. We computed the persistent homology of the flag complex of the weighted graph coming from each similarity measure and extracted simple features from the zeroth and first Betti curves. The interesting features were selected by visual inspection of the sample distribution. Finally, an SVM classifier was trained to identify the dynamics regimes of the simulations.
Our experiments showed that it is possible to perfectly predict the dynamic regime in simulations coming from the network the classifier was trained on, and to predict it with a high degree of accuracy for other networks, as long as some samples of the regimes in question were available during training.
We also illustrated the importance of using and combining several similarity measures. Indeed, SPIKE-synchronicity carries more information, and does so more consistently across the network versions, than the other two measures, but the best accuracies were consistently obtained when an ensemble of features selected by visual inspection was used. Moreover, if one were to automatically select the features based on a score, we showed that the mutual information between features and the regime label is a good indicator to consider.
We tested our method in the context of a simple network. It would be interesting to test it also with more complex networks, with neurons and synapses modeled in greater detail. Topological features can also be extracted from other types of neural data, such as the population firing rate or neuron voltage traces. We consider the examination of how the topological methods perform in classifying such data as interesting future work.
Here we have illustrated just one concrete use of topological data analysis (TDA) in the study of network dynamics, but the class of methods should be applicable to a wide variety of systems from within and without neuroscience. To the best of our knowledge, there have been no previous attempts at applying TDA to automatic detection of regimes in spiking neural networks, since they are usually identified analytically [4,18] and can often be discriminated visually. However, a topological approach to this task may be interesting in recordings of real data, such as EEG or fMRI. One might, for example, investigate the feasibility of solving a more subtle task, such as automatic detection of movement intention or seizure detection in epileptic patients.
Although great progress has been made in neuroscience since the first recording of neuronal activity in 1928 [1], a unified model of the brain across its different scales is still lacking, and many hard challenges have barely been attempted. Recent work has highlighted how TDA could help shed new light on both brain structure and dynamics, and such work represents promising advances towards a more comprehensive understanding of the brain. The method we have outlined in this paper takes a novel view of one challenge, the automated classification of neuronal dynamics, by considering features that are topological in nature. We believe that including such features will be of great help in the understanding of both structural and dynamical aspects of neuronal networks and other similarly structured systems.
Methods
We give here the full details of our computations, and expand on the topological constructions involved in the analysis.
Network simulations
A complete specification of the three simulated networks following formalism and notation commonly used in the field [29] can be found in Tables 2 and 3.
All the networks were simulated with 28 pairs of parameter values for the relative strength between inhibition and excitation g (integer values from 2 to 8) and the external population rate ν_ext/ν_θ (integer values from 1 to 4). The systems were simulated 10 times for each parameter pair, for a total of 280 simulations per network. The simulations were performed with the Brian2 simulator [16], with a time step of 0.01 ms and a total biological time of 20 seconds. Because of the downscaling of the network, the synaptic transmission J was increased compared to that used in [4] in order to keep C_E J constant, and an external inhibitory population was introduced [18] when the spiking of neurons was expected to be irregular [4]. This external population was modeled by a Poisson process with rate chosen as in [18], where σ_i is the variance of the input in the original network and σ_loc is the variance due to local input from recurrent connections in the downscaled network. The parameters for the original network are labelled by an asterisk and differ from their counterparts in the downscaled network by a scaling factor α such that C_E = C*_E/α and J = αJ*; combining these relations yields the corrected external rate. Here ν_0 is the stationary frequency of the original population and can be approximated as in [4].
Topological framework
In algebraic topology, a well-established field of mathematics, one studies topological spaces by turning them into well-behaved algebraic invariants and deducing properties of the spaces from those of the algebraic objects. We shall not define any of these concepts precisely here, but will instead give relevant examples of both. See for example [17] for an introductory textbook that includes all the details with full precision. A space will in our context mean a kind of object that is built from certain geometric pieces by specific rules that reflect the data of the dynamics (or structure) of systems of neurons.
(Figure caption fragment) Right: The boundary of (a, b, c) consists of the three 1-simplices and three 0-simplices that one would expect geometrically, giving geometric meaning to a purely combinatorially defined concept.
Concretely, a weighted graph is first built whose vertices are the neurons and whose edge weights are the pairwise dissimilarities of the corresponding spike trains. A simplicial complex K is then formed by adding in every possible 2-simplex. As a space, this is not very interesting, as there is simply a filled triangle between every triple of neurons (one says that the space is contractible). The crucial part is that each 2-simplex is given a weight equal to the maximum of the weights given to its boundary edges, i.e., w(i, j, k) = max{w(i, j), w(i, k), w(j, k)}.
We then consider a filtration of K, enabling us to study a sequence of thresholded versions of K. At the start, the filtration consists only of the vertices of K. Then, as the threshold increases, 1- and 2-simplices from K appear if their weight is below the threshold, so as to include into the filtration pieces stemming from ever more dissimilar spike trains. See Figure 6 for an illustration. The construction above is applicable to simplices of dimension higher than 2, so even though we stop at dimension 2 in our analysis, the following description employs generic dimensions p in order to simplify notation and give the bigger picture.
A basic algebraic invariant that we track as the dissimilarity threshold increases is the Betti numbers, giving rise to Betti curves for the filtration as a whole. The Betti numbers of a simplicial complex can be defined formally as the dimensions of the homology vector spaces of the complex, as we now sketch. Define C_p(K) to be the collection of formal binary sums of the p-simplices in K. This makes C_p(K) a vector space over the binary numbers, and allows us to view the boundary as an algebraic operation encoded in a linear map ∂_p : C_p(K) → C_{p−1}(K) given by ∂_p([v_0, …, v_p]) = Σ_{i=0}^{p} [v_0, …, v_{i−1}, v_{i+1}, …, v_p], the sum of the faces obtained by deleting one vertex at a time. One checks that the application of two consecutive boundary maps always gives zero, i.e., that ∂_{p−1}(∂_p(σ)) = 0 for every p-simplex σ. This algebraic statement reflects the geometric fact that the boundary of a boundary is empty. A general collection of p-simplices (a sum in C_p(K)) that has zero boundary is called a p-cycle. Figure 6 shows several examples.
It turns out that p-cycles that are not the boundary of collections of (p + 1)-simplices correspond geometrically to holes (p > 0) or connected components (p = 0) in the simplicial complex. Persistent homology, a widely employed construction in topological data analysis, tracks such holes/components as they appear and disappear across a filtration. The record of the "life" and "death" of such topological features provides valuable information about the filtration and thus about the underlying data. We do not use all of the data recorded in persistent homology, but instead just keep track of the number of holes in each dimension as a function of the filtration². These integer-valued functions, called Betti curves, are the features we use for machine learning.
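As a tiny worked example of these definitions, the following sketch computes the Betti numbers of the hollow triangle (three vertices a, b, c and the three edges, with no filled 2-simplex) from mod-2 boundary matrices, using the standard identity b_p = dim C_p − rank ∂_p − rank ∂_(p+1).

import numpy as np

def rank_mod2(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2   # eliminate column c
        rank += 1
    return rank

# Boundary map d1: columns are edges (a,b), (a,c), (b,c);
# rows are vertices a, b, c; entries are mod-2 incidences.
d1 = np.array([[1, 1, 0],
               [1, 0, 1],
               [0, 1, 1]])
b0 = 3 - rank_mod2(d1)        # b_0 = dim C_0 - rank(d0=0) - rank(d1)
b1 = 3 - rank_mod2(d1) - 0    # no 2-simplices, so rank(d2) = 0
print(b0, b1)                  # 1 connected component, 1 unfilled loop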
An example of a Betti curve is given in Figure 6. We emphasize that the features captured by persistent homology may be much more global in nature than in this small example.
Machine learning
Before doing any machine learning, the selected features were standardized. If a feature was not computable because there were no corresponding Betti curves, its value was set to 0.
Four training sets were formed: three each containing 90% of the samples from one specific network version, and a fourth containing 90% of all the samples, stratified so that its distribution of the samples was representative of all the samples. One classifier per training set was trained and tested against four test sets: one for each network version, using the valid samples not in the training set, and a fourth one containing all the valid samples not used during training.
Support vector machine methods [7] using a radial basis function kernel with L2-regularization were applied to classify the samples into the four different regimes. The multi-class classification was achieved by training four sub-classifiers with a one-vs-rest decision function. The regularization parameter was found by accuracy optimization via 10-fold cross-validation.
The performance of the classifiers was assessed using an accuracy score.
Mutual Information
In section 2.4, we mentioned that the mutual information between the features we selected by visual inspection and the regime labels was relatively high, suggesting that one could use the mutual information score to automatically select features when visual inspection would be time consuming or would violate a need for automation or independence from human input. The mutual information between each feature and the labels for the three datasets is presented in Figure 7, where one can observe that some features, such as the area under the Betti-0 curve for correlation and SPIKE-distance, have a consistent mutual information score across the three datasets. This suggests that they are important features that allow the classifier to correctly classify samples from other datasets. Moreover, the area under the curve (AUC) features tend to have a higher score than the peak amplitude of the Betti curve. This is perhaps natural since the former includes information from all of the filtration, while the latter includes only a single aspect of it.
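A minimal sketch of such automated feature scoring, assuming scikit-learn's mutual information estimator (the paper does not state which estimator it used):

import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features(X, y, names, top_k=6):
    """Return the top_k features by mutual information with the labels."""
    mi = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(mi)[::-1]
    return [(names[i], float(mi[i])) for i in order[:top_k]]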
Figure 2. Diagrams of the different regimes for each version of the Brunel network ((A), (B) and (C) corresponding to versions 1, 2, and 3 of the network, respectively) in the parameter space (ν_ext/ν_θ, g). The white areas indicate where no neurons fired and no data is available.
Figure 3. Example of three of the features extracted from a filtration: the filtration threshold at which the Betti-0 curve (left) starts to decrease, the global maximum (red) of the Betti-1 curve, and the area under each curve (grey area).
Figure 4. Testing accuracy of the classifier trained on samples from version 1 (A), version 2 (B), version 3 (C) and all the samples (D). In each panel, the classification accuracy for the test samples from each network and all the samples together is reported for five sets of features. "Select" designates the features we visually selected. "Corr", "sync" and "dist" designate the features extracted using the correlation, SPIKE-synchronicity, and SPIKE-distance, respectively. "All" designates the set of all the features.
Figure 6. Left: A weighted graph G on four vertices/neurons. Assume that 0 < α < β < γ. Top: The filtered simplicial complex K built from G. The 0-simplices are drawn in a different way at threshold 0 to make them more visible. Bottom: The Betti curves of K in dimensions 0 and 1.
Figure 7. Mutual information between each feature and the label for the training sets obtained from the simulations of the different network versions. Features that were selected by visual inspection are represented with a hashed bar. The peak and AUC labels designate the peak amplitude and the area under the Betti curve features, respectively.
Table 2. Description of the neuronal network following the formalism of [29], part 1/2. External input: fixed rate ν, C_E + C_I generators per neuron, each generator projects to one neuron; if excitation dominates (g ≤ 4): E_ext rate = ν_ext, I_ext rate = 0; if inhibition dominates (g > 4): E_ext rate = ν_ext + ν_bal, I_ext rate = ν_bal/g. | 2018-10-03T14:11:24.000Z | 2018-09-23T00:00:00.000 | {
"year": 2019,
"sha1": "e74fabb4ddd04f2d6fdeb742a386b25c9f259278",
"oa_license": "CCBY",
"oa_url": "https://www.mitpressjournals.org/doi/pdf/10.1162/netn_a_00080",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "b7ae898aa098d6e384498693613b67000e45b9dc",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Biology",
"Mathematics",
"Medicine"
]
} |
229272992 | pes2o/s2orc | v3-fos-license | Clinical pharmacology of fluconazole in infants and children
Fluconazole is a potent antifungal agent: a selective triazole inhibitor of the fungal enzymes involved in ergosterol synthesis. (By contrast, flucytosine, after metabolic conversion to 5-fluorouracil by the enzyme uracil phosphoribosyl transferase, is either incorporated into RNA or metabolized to 5-fluoro-2'-deoxyuridine-5'-monophosphate, a potent inhibitor of thymidylate synthase, ultimately inhibiting DNA synthesis.) Fluconazole is active against several Candida species, Blastomyces dermatitidis, Histoplasma capsulatum, and Coccidioides species, Paracoccidioides brasiliensis, and ringworm fungi (dermatophytes). Fluconazole is also active against Aspergillus species, Scedosporium apiospermum (Pseudallescheria boydii), Fusarium, and Sporothrix schenckii, but these fungi are intermediate in susceptibility. Fluconazole penetration into cerebrospinal fluid is good and it has successfully treated fungal meningitis. Fluconazole is used both for prophylaxis and for treatment of fungal infection and colonization in infants and children. In infants, the fluconazole treatment dosing-regimen consists of a loading dose (25 mg/kg) followed by a daily maintenance dose of 12 mg/kg, whose dosing intervals decrease according to postnatal age. In children, the recommended dose is 12 mg/kg daily without a loading dose. Following oral dosing, this antifungal is well absorbed, mainly by the small intestine, its bioavailability is about 100%, and it is almost completely eliminated by the kidney. Half-life is longer in preterm and term infants (40 to 60 hours) than in children (< 20 hours) because renal function is lower in infants, and it increases with infant maturation. Fluconazole has been found to be effective and safe in infants and children; however, it produces several birth-defects when administered at high doses (400 mg daily) to pregnant women. This drug is metabolically cleared by CYP3A enzymes, and interacts with different drugs which are metabolized by CYP3A, enhancing or inhibiting their effects, pharmacokinetics or metabolism. Several Candida species may become resistant to fluconazole; the resistance rate is species-dependent, and azole consumption increases the fluconazole-resistance rate, raising infection risks. The aim of this study is to review published data on fluconazole dosing, effects, distribution, prophylaxis, treatment, drug-interactions, meningitis, trials, metabolism, pharmacokinetics, and drug-resistance in infants and children. *Correspondence to: Gian Maria Pacifici, Associate Professor of Pharmacology, via Sant'Andrea 32, 56127 Pisa, Italy; E-mail: pacifici44@tiscali.it
Introduction
Fluconazole is commonly used both to prevent and to treat invasive Candida albicans infection. This antifungal is a potent, selective, triazole inhibitor of the fungal enzymes involved in ergosterol synthesis. Fluconazole is reasonably effective against most Candida species, other than Candida glabrata. It is also of value in treatment of Cryptococcal infection. It is water soluble, well absorbed by mouth even in infancy, and largely excreted unchanged in the urine. Penetration into the cerebrospinal fluid is good. Fluconazole is increasingly used in treatment of systemically invasive Candida albicans infection. Studies suggest that it is less toxic than, and at least as effective as, amphotericin B. Liver dysfunction and skin eruptions have only been seen in immunodeficient patients. Over 90% of an oral fluconazole dose is absorbed. It is widely distributed through the body, and then it is excreted by the kidney, about 80% as unchanged compound and 11% as metabolites. The half-life in preterm infants with 29 weeks postmenstrual age is just over 70 hours. In term infants it is 40 to 60 hours, but changes within the first two weeks before settling at 20 to 25 hours throughout infancy and childhood, and at 30 hours in adults. There is no good reason to give amphotericin B as well as high-dose fluconazole, but there is evidence that effective treatment of all Candida species, with MIC of ≤ 8 µg/ml, requires a higher dose than many reference texts currently quote. In-vitro modelling also suggests that high-dose treatment makes the emergence of resistant strains less likely. Oral fluconazole is widely used to treat superficial (topical) infection in adults and is now starting to be used for this purpose in infants. Prophylactic use has been widely studied in the last ten years, but some prefer to use nystatin, which is not systemically absorbed, to minimize the risk of fluconazole-resistant strains proliferating. There is also some debate whether universal prophylaxis is warranted in units where rates of invasive candidiasis are low: whilst there is good evidence that this strategy is useful in those units that have moderate-to-high rates of invasive candidiasis, the evidence is less compelling when rates are low. While high-dose systemic exposure, such as 400 mg daily in the first trimester of pregnancy, can produce a constellation of serious foetal abnormalities, there are, as yet, no reports of teratogenicity with a single 150 mg dose in the first trimester, or with topical or oral use later in pregnancy. For Candida infection of the breast, the mother should take a 100 to 300 mg loading dose and then 100 to 200 mg daily for at least ten days. Treat the infant as well and take steps to minimize the risk of re-infection [1].
All susceptible fungi are capable of deaminating flucytosine to 5-fluorouracil, a potent antimetabolite that is used in cancer chemotherapy. Fluorouracil is metabolized first to 5-fluorouracil-ribose monophosphate by the enzyme uracil phosphoribosyl transferase. 5-Fluorouracil-ribose monophosphate is then either incorporated into RNA (via synthesis of 5-fluorouridine triphosphate) or metabolized to 5-fluoro-2'-deoxyuridine-5'-monophosphate, a potent inhibitor of thymidylate synthase, ultimately inhibiting DNA synthesis. The selective action of flucytosine is due to the lack of cytosine deaminase in mammalian cells, which prevents metabolism to fluorouracil. Fluconazole is active against several Candida species, Blastomyces dermatitidis, Histoplasma capsulatum, and Coccidioides species, Paracoccidioides brasiliensis, and ringworm fungi (dermatophytes). Fluconazole is also active against Aspergillus species, Scedosporium apiospermum (Pseudallescheria boydii), Fusarium, and Sporothrix schenckii, but these fungi are intermediate in susceptibility. Candida glabrata exhibits reduced susceptibility to azoles, whereas Candida krusei and the agents of mucormycosis are resistant. Posaconazole and isavuconazole have a modestly improved spectrum of activity in-vitro against the agents of mucormycosis. Fluconazole is the drug of choice for treatment of Coccidioidal meningitis because of good penetration into cerebrospinal fluid and much less morbidity than intrathecal amphotericin B. Fluconazole is almost completely absorbed from the gastrointestinal tract, plasma concentration is essentially the same whether the drug is administered orally or intravenously, and its bioavailability is unaltered by food or gastric acidity. This antifungal diffuses readily into body fluids, including breast-milk, sputum and saliva. Concentration in the cerebrospinal fluid can reach 50 to 90% of the simultaneous values in plasma [2]. Molecular structure of fluconazole (molecular weight = 306 grams/mole)
Literature search
The literature search was performed electronically using the PubMed database as the search engine; the cut-off point was April 2020. The following key words were used: "fluconazole infants effects", "fluconazole children effects", "fluconazole infants metabolism", "fluconazole children metabolism", "fluconazole infants pharmacokinetics", "fluconazole children pharmacokinetics", "fluconazole infants resistance", and "fluconazole children resistance". In addition, the books Neonatal Formulary [1] and The Pharmacological Basis of Therapeutics [2] were consulted. The manuscript is written according to the "Instructions for Authors".
Fluconazole administration dosing schedules in infants and children
Fluconazole prophylaxis in very low birth-weight infants. Infants aged < 2 weeks: give 6 mg/kg fluconazole on day 1, then a further 6 mg/kg every third day. Infants aged two to four weeks: give 6 mg/kg fluconazole on day 1, then a further 6 mg/kg every second day [1].
Fluconazole treatment of invasive candidiasis in infants.
Fluconazole loading dose (25 mg/kg) shortens the time to achieving therapeutic concentrations. Give 12 mg/kg fluconazole once daily, every day, to infants with a postmenstrual age < 30 weeks. For postmenstrual age > 30 weeks, give 20 mg/kg fluconazole once daily, every day. Double the dosage interval after the first two doses if there is renal failure [1].
Fluconazole treatment of children. Give 12 mg/kg once-daily, maximum 600 mg daily, without a loading dose [2].
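Purely as an illustration, the regimens quoted above can be transcribed into code as follows; this is a restatement of the text, not clinical guidance, and the function names are ours.

def infant_treatment_dose_mg_per_kg(postmenstrual_age_weeks,
                                    first_dose=False):
    # Transcription of the infant treatment regimen quoted above.
    if first_dose:
        return 25.0  # loading dose in infants
    # Daily maintenance dose by postmenstrual age, as stated in the text.
    return 12.0 if postmenstrual_age_weeks < 30 else 20.0

def child_daily_dose_mg(weight_kg):
    # Children: 12 mg/kg once daily, capped at 600 mg, no loading dose.
    return min(12.0 * weight_kg, 600.0)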
Fluconazole absorption following oral administration to children
Cristofoletti et al. [3] studied the absorption of fluconazole in children. Blood flow in the small intestine ranges from 5 to 40 L/h and is represented by an age-dependent nonlinear function, reaching its asymptote at 15 years. Luminal bile salt concentrations under the fasting state are described by non-monotonic age-dependent functions, graphically represented by an inverted U-shape, with an ascending phase from 1 to 10 years, with a maximum duodenal concentration of 9 mM, and then a descending phase until adult values are reached. Gastric pH in children was described in an age-dependent way, being neutral in infants and reaching adult values by 2 years of age. Even so, the lower absorptive surface and smaller volumes of intestinal fluids in children do not affect fluconazole absorption, and the absorption-rate constant is 1.8 h⁻¹.
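To make the quoted constants concrete, here is a minimal one-compartment oral-absorption sketch. The absorption-rate constant of 1.8 h⁻¹ is taken from the study above and the near-complete bioavailability from the abstract; the elimination half-life (25 hours, the childhood value quoted in the introduction) and the volume of distribution are illustrative assumptions.

import numpy as np

def concentration_mg_per_L(t_h, dose_mg, V_L=40.0, ka=1.8,
                           t_half_h=25.0, F=1.0):
    # Bateman equation for first-order absorption and elimination:
    # C(t) = (F*Dose/V) * ka/(ka-ke) * (exp(-ke*t) - exp(-ka*t))
    ke = np.log(2) / t_half_h
    return (F * dose_mg / V_L) * ka / (ka - ke) * (
        np.exp(-ke * t_h) - np.exp(-ka * t_h))

t = np.linspace(0, 72, 145)                 # three days, half-hour steps
c = concentration_mg_per_L(t, dose_mg=100)  # single 100 mg oral dose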
Schäfer-Korting [4] described the absorption of azole compounds and observed that the absorption rate of these drugs is impaired by raised gastric pH, which is observed in some patients with acquired immunodeficiency syndrome. It is also impaired by frequent vomiting, which commonly occurs in patients with neutropenia. The concomitant administration of cyclosporine reduces fluconazole concentration by 50 to 55%. Nahata and Brady [5] investigated fluconazole pharmacokinetics in fasting children, aged 5 to 13 years, with HIV infection. This antibiotic was administered orally at a dose of either 2 or 8 mg/kg. Fluconazole achieved serum concentrations in these children similar to those of adults, indicating a nearly complete degree of absorption.
Gupta et al. [6] observed that fluconazole has excellent absorption and good persistence in tissues, which suggests it may be useful in superficial fungal infection.
Fluconazole efficacy and safety in infants and children
Castagnola et al. [7] described fluconazole efficacy and safety in infants. Although clinical use has been wide for over 15 years, there have been few studies about its safety and efficacy in young infants. These authors observed that fluconazole was efficacious and safe in infants admitted to a neonatal intensive care unit.
Murata et al. [8] observed that fluconazole is efficacious and safe in infants and children; an adverse drug reaction was observed in only one of 27 patients, who developed a liver disorder, giving an adverse-effect rate of 3.7%. No serious or unanticipated adverse drug reactions were observed, fluconazole was found to be efficacious and safe, and no remarkable findings emerged that would require additional precautions.
Lee et al. [9] observed that in the highest-risk group, consisting of extremely low birth-weight infants (< 1,000 gram), invasive fungal infection increases the mortality rate and impairs neurodevelopment, despite antifungal therapy. With respect to fluconazole, other concerns are short-term drug toxicities and long-term neurodevelopmental consequences of fluconazole treatment when it is used in a developing organism.
Gürpinar et al. [10] studied fluconazole efficacy and safety in 24 infants, aged 2 days to 10 months, having fungal infection, who were treated with fluconazole by intravenous infusion at a dose of 6 mg/kg daily (range, 2 to 16), for a mean treatment duration of 25 days (range, 5 to 72). Fluconazole was found to be effective: a positive clinical response was achieved in 23 of 24 infants (95.8%); eradication of the fungal organism was also achieved in 23 of 24 infants (95.8%). Adverse-effects occurred in only two infants (8.3%), but therapy was not discontinued in either infant. These results confirm fluconazole efficacy and safety in the treatment of infants suffering from severe fungal infection.
Fasano et al. [11] prospectively evaluated fluconazole effects in 40 infants, aged 2 days to 3 months, in whom conventional antifungal therapy was ineffective or contraindicated. Fluconazole was administered at a mean dose of 5.3 mg/kg daily (range, 1 to 16), and the mean duration of therapy was 26 days (range, 2 to 80). Efficacy was demonstrated in 31 of 32 infants (96.9%) with proven fungal infection, and eradication of fungal organisms was achieved in 30 of 31 infants (96.8%). Adverse-effects occurred in only two infants (6.2%), but therapy was not discontinued in either infant.
Schwarze et al. [12] reviewed optimal fluconazole dosing-regimens in order to establish efficacy and safety. These authors included 726 children, aged < 1 year, drawn from 78 studies. Fluconazole dose ranged from 2 to 50 mg/kg daily, and the maximum treatment duration was 162 days. Fluconazole was well tolerated and was efficacious against systemic candidosis.
Novella and Holden [13] determined the fluconazole safety profile in 562 children, aged 0 to 17 years; most children received multiple oral fluconazole doses (range, 1 to 12 mg/kg), and only a few received a single dose. The most common adverse-effects were gastrointestinal (7.7%) or skin (1.2%). Overall, 18 children (3.2%) discontinued treatment due to severe adverse-effects. Although 98.6% of children were taking concomitant medications, no clinical or laboratory interactions were observed. The good tolerability of fluconazole in children mirrors its excellent safety profile.
Fluconazole prophylaxis in preterm infants
Autmizguine et al. [14] performed a multicentre, randomized, placebo-controlled trial of fluconazole prophylaxis, in order to test the susceptibility of Candida isolates to fluconazole. One hundred eight premature infants (< 750 gram birth-weight) were enrolled and received fluconazole, whereas 173 infants received placebo. Cultures were assessed at baseline (study days 0 to 7), period 1 (study days 8 to 28), and period 2 (study days 29 to 49). Fluconazole MICs were determined for all Candida species. Candida colonization was significantly lower in the fluconazole group compared to placebo during period 1 (5 versus 27%, P-value < 0.001) and period 2 (3 versus 27%, P-value < 0.001). Only two infants (1%) were colonized with at least one fluconazole-resistant Candida. Median fluconazole MIC was similar in both treatment groups at baseline and period 1. However, in period 2, median MIC was higher in the fluconazole group compared to placebo (1.00 versus 0.5 µg/ml, P-value=0.01). There was no emergence of resistance observed, and no infant developed invasive candidiasis with a resistant Candida isolate. Fluconazole prophylaxis decreased Candida albicans and 'non-albicans' colonization, thus its use is appropriate in infants.
Ericson et al. [15] identified all randomized, placebo-controlled trials evaluating fluconazole prophylaxis in preterm infants conducted in the US. The occurrence of each endpoint in infants who received prophylaxis with fluconazole versus placebo was compared. Endpoints evaluated were: invasive candidiasis, death, Candida colonization, and fluconazole-resistance among tested isolates. Infants receiving fluconazole prophylaxis had reduced odds of invasive candidiasis, death, and Candida colonization compared to infants given placebo. Thus, fluconazole prophylaxis is effective and safe in reducing invasive candidiasis, and no impact on resistance was observed.
Kaufman et al. [16] conducted a prospective, randomized, double-blind clinical trial over a 30 month period in 100 preterm infants with birth-weight < 1,000 gram, to assess prophylaxis efficacy. Infants were randomly assigned to receive either fluconazole (N=50) or placebo (N=50) for 6 weeks. Fungal colonization was documented in 30 infants in the placebo group (60.0%) and in 11 infants in the fluconazole group (22.0%, P-value=0.02). Fungal infection was observed in blood, urine, or cerebrospinal fluid of 10 infants (20.0%) who received placebo, and of no infant treated with fluconazole (P-value=0.008). Fluconazole prophylaxis prevented fungal colonization and invasive fungal infection in these infants.
Weitkamp et al. [17] assessed prophylaxis efficacy in preterm infants; 44 infants received prophylaxis, whereas 42 infants did not. In the prophylaxis group, no invasive fungal infection was observed, whereas it occurred in nine infants of the no-prophylaxis group (P=0.004). No significant adverse effects were recorded; fluconazole prophylaxis was found to be effective and safe in preventing invasive fungal infections.
Aziz et al. [18] undertook a retrospective study to document the efficacy and adverse-effects of routine fluconazole prophylaxis. Extremely-low-birth-weight infants were divided into two groups: a control group (N=99) and a fluconazole group (N=163). Invasive fungal infection occurred in 7.1% of the control group versus 1.8% of the fluconazole group (P-value=0.045). Logistic regression analysis revealed that fluconazole prophylaxis was associated with a lower risk of invasive fungal infection.
Fluconazole treatment in infants and children
Gürpinar et al. [19] described fluconazole treatment in 24 infants, aged 2 days to 10 months, with documented fungal infection. Fluconazole was administered at a mean dose of 6 mg/kg (range, 2 to 16) daily, and the mean treatment duration was 25 days (range, 5 to 72). A positive clinical response was achieved in 23 of 24 infants (95.8%), and adverse-effects were observed in only two infants (8.3%). These results confirm fluconazole efficacy and safety in the treatment of infants with severe fungal infections.
Charlier et al. [20] observed that Candida species are responsible for most mucosal and invasive candidiasis. These authors reviewed the main available data on the position of fluconazole in prophylaxis or curative treatment of invasive infections caused by Candida species. Case reports and uncontrolled studies documented fluconazole efficacy in curing osteoarthritis, endocarditis, meningitis, and peritonitis caused by Candida species in adults. Fluconazole is still the first-line treatment option for several cases of invasive candidiasis; however, its prophylactic use should be limited to selected high-risk patients to limit the risk of emergence of azole-resistant strains.
Kakourou and Uksal [21] described practice guidelines for treatment of Tinea capitis; topical treatment is only used as adjuvant therapy to systemic antifungals. Fluconazole was found to be efficacious and had potential adverse-effects similar to those of griseofulvin in children with Tinea capitis infection caused by Trichophyton species.
Fluconazole may be more expensive, and griseofulvin is still the treatment of choice for infections caused by Microsporum species; however, griseofulvin is nowadays not available in certain European countries.
Iosifidis et al. [22] reviewed the main indications for antifungal drug administration in paediatrics, and stated that fluconazole remains the most frequent antifungal prophylactic agent given to high-risk infants and children. However, the emergence of fluconazole-resistance, particularly in non-albicans Candida species, should be considered during preventive or empiric therapy.
Ben-Ami [23] observed that invasive candidiasis occurs frequently in hospitalized patients and is associated with a high mortality rate. Fluconazole is the drug of choice in the management of invasive candidiasis; however, one must take into account multiple host-, pathogen-, and drug-related factors, including the site of infection, host immune status, severity of sepsis, resistance, tolerance, biofilm formation, and the pharmacokinetics/pharmacodynamics of this antifungal agent.
Michelerio et al. [24] retrospectively studied three cases of cutaneous leishmaniasis in paediatric patients, aged 3 to 6 years, treated with fluconazole. Efficacy, tolerability, safety profile and the cosmetic results of fluconazole were examined after administration of a dose of 6 mg/kg for 6 weeks. Children had complete resolution of their lesions with minimal scarring, and no adverse-effects were reported. Fluconazole represents a valid, safe and easily manageable option for leishmaniasis in paediatric patients.
Watt et al. [25] determined fluconazole dosing in children on extracorporeal membrane oxygenation, and developed a physiologically based pharmacokinetic model in adults and critically ill children. Simulations using the final extracorporeal membrane oxygenation physiologically based pharmacokinetic model reasonably characterized observed pharmacokinetic data in children with extracorporeal membrane oxygenation support, and the model was used to derive dosing across the paediatric age spectrum in patients on extracorporeal membrane oxygenation.
Sesmero et al. [26] assessed fungal chemoprophylaxis safety in 60 infants weighing < 1,500 gram and with postmenstrual age < 28 weeks. Fluconazole was intravenously infused, and a pharmacotherapeutic follow-up was performed for one year. No significant drug interactions or adverse-effects were observed. Fluconazole chemoprophylaxis was found to be excellent regarding effectiveness, safety and tolerability.
Fox et al. [27] evaluated the physical compatibility of various drugs, including fluconazole, with a neonatal total parenteral nutrition solution during simulated Y-site administration. Equal volumes of neonatal total parenteral nutrition solution or sterile water were combined with the drugs or sterile water. Samples were examined via turbidimetric analysis, and visually against light and dark backgrounds, immediately and at five time points after mixing, ranging from 0.25 to 3 hours. Phenobarbital, pentobarbital, and rifampin formed visible precipitation immediately after mixing with the neonatal total parenteral nutrition solution. Fluconazole exhibited no visual or turbidimetric evidence of incompatibility when combined with a neonatal total parenteral nutrition solution for up to three hours in a simulated Y-site injection; thus fluconazole may be co-administered with total parenteral nutrition solution.
Nwaroh et al. [28] stated that sirolimus, an immunosuppressant drug, is indicated post-allogeneic stem cell transplant to reduce the risk of graft-versus-host disease. Sirolimus is metabolized by CYP3A4 and is a substrate of the P-glycoprotein drug efflux-pump, and fluconazole is a known inhibitor of the CYP3A4 enzyme and of P-glycoprotein. Discontinuation of co-administered fluconazole resulted in a decline in sirolimus blood concentrations, leaving patients at risk of graft-versus-host disease. In the three patients studied, fluconazole discontinuation resulted in a marked reduction in sirolimus trough concentrations, requiring a > 200% increase in sirolimus dose to achieve therapeutic concentrations. Fluconazole should not be co-administered with sirolimus.
Liu and Köhler [29] observed that critically ill patients at risk of invasive candidiasis often receive multiple medications, including proton-pump inhibitors. Fluconazole perturbs the vacuolar proton ATPase, and the proton-pump inhibitor omeprazole inhibits Candida albicans growth. A Candida albicans codon-adapted pHluorin was generated to assess cytosolic pH. The fungal cytosol was acidified by omeprazole and re-alkalinized by co-exposure to fluconazole. Off-target effects of any medication on fungal pathogens may occur.
Fluconazole drug-interactions
Fluconazole interacts with many drugs; it enhances or inhibits drug effects, metabolism or pharmacokinetics. Levin et al. [30] reported two cases of life-threatening serotonin toxicity due to a drug interaction between citalopram and fluconazole. Fluconazole inhibits CYP2C19 and citalopram is a substrate of CYP2C19; co-administration of fluconazole and citalopram results in increased citalopram exposure with consequent serotonin toxicity.
Black et al. [31] observed that fluconazole, administered at a dose of 400 mg daily for 6 days to 6 volunteers, significantly reduced warfarin metabolic clearance, as this drug is metabolized by CYP enzymes. In particular, CYP2C9 catalyses 6- and 7-hydroxylation of (S)-warfarin, responsible for the termination of warfarin's anticoagulant effect, and co-administration of fluconazole inhibited approximately 70% of warfarin metabolism. This interaction dramatically increased the magnitude and duration of warfarin's hypoprothrombinaemic effect. (R)-Warfarin clearance was also strongly inhibited by fluconazole. 10-Hydroxylation, a metabolic pathway catalysed exclusively by CYP3A4, was inhibited by 45%, whereas 6-, 7-, and 8-hydroxylations were inhibited by 61, 73, and 88%, respectively. Fluconazole interacts with any drug whose metabolic clearance is dominated by CYP2C9 and CYP3A4, and these findings strongly support the hypothesis that in-vivo drug interactions may be predicted from in-vitro microsomal data.
Finch et al. [33] described elevated carbamazepine serum concentrations during concomitant fluconazole (400 mg daily) administration, including serial concentrations both before and after therapy with this antifungal agent. Carbamazepine metabolism is inhibited by fluconazole, a known inhibitor of the cytochrome P450 enzyme system.
Hilbert et al. [34] evaluated the pharmacokinetic interaction between fluconazole 150 mg twice-daily, administered for one week, and oral ethinyl estradiol and norethindrone in 26 healthy women aged 18 to 36 years. Treatment with fluconazole resulted in a significant increase in AUC 0-24 hours for both ethinyl estradiol (24%) and norethindrone (13%) as compared to placebo. The concomitant administration of 300 mg fluconazole, twice the recommended dose for vaginal candidiasis, resulted in a significant increase in blood concentrations of ethinyl estradiol and norethindrone.
Blum et al. [35] performed a randomized, placebo-controlled, parallel study to assess the effect of fluconazole on phenytoin. Twenty healthy male subjects received 200 mg phenytoin orally daily on days 1 to 3 and 18 to 20, and 250 mg intravenously on days 4 and 21. Fluconazole trough concentration was determined on days 14, 18, and 21. Phenytoin AUC 0-24 hours increased by 75% and trough plasma concentration increased by up to 128% after fluconazole administration. Fluconazole inhibits phenytoin metabolism with a consequent increase in phenytoin serum concentration. Serum levels of this drug should be monitored, and phenytoin dosage adjustment is clinically warranted in patients receiving fluconazole.
Cobb et al. [36] determined the effect of fluconazole on methadone disposition in volunteers who received methadone orally at a dose of 200 mg daily and fluconazole (N=13) or placebo (N=12) for 14 days. There was a 35% increase of both methadone serum concentration and AUC in patients treated with fluconazole (P-value=0.0008 for both parameters). Methadone peak and trough concentrations increased by 27% (P-value=0.0076) and 48% (P-value=0.0007), respectively, compared to those obtained in subjects who received placebo. Fluconazole alters methadone peak and trough concentrations.
Kang et al. [37] assessed the effect of fluconazole on omeprazole pharmacokinetics in 18 healthy volunteers. Control subjects received an omeprazole dose of 20 mg daily, and treated subjects received omeprazole plus 100 mg fluconazole daily for 4 days. Omeprazole is extensively metabolized through 5-hydroxylation and sulfoxidation catalysed by CYP2C19 and CYP3A4, respectively. Fluconazole is a potent competitive inhibitor of CYP2C19, and a weak inhibitor of CYP3A4. In treated subjects, omeprazole AUC 0-∞, elimination half-life, and peak plasma concentration were significantly greater (3.0 versus 0.5 µg.h/ml, 2.59 versus 0.85 hours, and 0.75 versus 0.31 µg/ml, respectively) compared to control subjects. Fluconazole is a potent inhibitor of omeprazole metabolism.
Fluconazole penetration into cerebrospinal fluid of infants and children
Gerhart et al. [38] characterized fluconazole exposure in plasma of 22 infants with a mean postmenstrual age of 28 weeks (range, 24 to 50), and in cerebrospinal fluid of 27 infants with a mean postmenstrual age of 28 weeks (range, 24 to 33). Cerebrospinal fluid concentration ranged from 0.1 to 9.6 µg/ml and was obtained 3.3 to 219 hours from the last dose. Drug penetration into the brain was estimated with an organ-permeability equation in which P, the specific organ permeability, is a function of MW eff, the effective molecular weight, and logP, the lipophilicity.
Cerebrospinal fluid and plasma samples were obtained 1,470 and 1,474 min, respectively, after the last dose of 25 mg/kg. The cerebrospinal fluid to plasma ratio was 0.98, suggesting that fluconazole penetrates the blood-brain barrier easily, and fluconazole target attainment was reached in both plasma and cerebrospinal fluid.
Bafeltowska and Buszman [39] examined fluconazole pharmacokinetics in the cerebrospinal fluid of two children with hydrocephalus. Fluconazole was intravenously infused at average multiple doses of 12.5 mg/kg daily, and was injected into the cerebrospinal fluid at doses of 4, 5, and 7.5 mg/kg once-daily, and 7.5 to 10 mg/kg twice-daily. Fluconazole cerebrospinal fluid concentration was undetectable after intravenous administration. Fluconazole pharmacokinetics were determined in cerebrospinal fluid after intraventricular administration; steady-state peak and trough concentrations were 19.5 ± 4.63 µg/ml and 0.0 to 0.3 µg/ml, respectively. The elimination rate constant and half-life were 0.465 ± 0.210 h-1 and 1.84 ± 0.93 hours, respectively. These results suggest the necessity of fluconazole monitoring in children with hydrocephalus during treatment of shunt infection.
Fluconazole treatment of meningitis in infants and children
Huttova et al. [40] studied fluconazole treatment of meningitis in 40 infants, of whom 28 were very-low-birth-weight infants; all infants had documented Candida albicans or Candida parapsilosis fungemia. Fluconazole was intravenously infused at a dose of 6 mg/kg once-daily for 6 to 48 days; 34 infants received fluconazole as a monotherapy and 6 infants were treated with a combination of fluconazole and amphotericin B. Thirty-two infants (80.0%) were cured; 4 of them relapsed at least 14 days after therapy, but they were ultimately cured without sequelae. Two infants had elevated liver enzymes and 2 others had elevated serum creatinine concentration during fluconazole monotherapy, but no therapy discontinuation was necessary. Fungal meningitis developed as a complication of fungemia in 8 infants. Fluconazole successfully treated meningitis caused by Candida albicans or Candida parapsilosis even in complicated Candida fungemia.
Pérez et al. [41] observed that meningitis follows approximately 0.1% to 0.75% of cases of extra-pulmonary coccidioidomycosis. Fluconazole has good cerebrospinal fluid penetration and a favourable side-effect profile. Eleven children suffering from coccidioidal meningitis were treated with amphotericin B and then switched to oral fluconazole at a dose of 400 mg daily for up to 19 months. Three children required hospitalization, two of them for reasons unrelated to coccidioidal meningitis. No child developed extra-meningeal disease or required discontinuation of fluconazole therapy. Conversion from amphotericin B to fluconazole was associated with a stable disease course of coccidioidal meningitis for up to 19 months.
Fluconazole migration into breast-milk
Little is known about fluconazole migration into breast-milk. Kaplan and Koren [42] obtained breast-milk samples 8 days after delivery, on the 18th day of treatment. Fluconazole maximum concentration in breast-milk was 4.1 µg/ml, measured 2 hours after the dose. The estimated relative infant dose was 17%, and the elimination half-life in breast-milk was 26.9 hours. In another report, fluconazole breast-milk concentration was 2.93, 2.66, 1.76, and 0.98 µg/ml at 2, 5, 24, and 48 hours, respectively, after an oral dose of 150 mg. The estimated relative infant dose was 17%, and the elimination half-life was 30 hours in breast-milk. Both studies are consistent with the view that fluconazole half-life is longer in breast-milk than in serum of healthy volunteers.
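The relative infant dose quoted in these reports can be approximated from the milk concentrations and the maternal dose. The sketch below is a minimal illustration of that calculation, assuming a standard milk intake of 150 ml/kg/day and a 60-kg maternal weight; neither value is given in the reports above, and the function name and example numbers are likewise illustrative.

```python
# Hedged sketch: estimating the relative infant dose (RID) of fluconazole
# from breast-milk concentrations. The milk-intake value of 150 ml/kg/day
# and the 60-kg maternal weight are assumptions, not data from [42].

def relative_infant_dose(milk_conc_ug_ml: float,
                         maternal_dose_mg: float,
                         maternal_weight_kg: float = 60.0,
                         milk_intake_ml_kg_day: float = 150.0) -> float:
    """Return the RID as a percentage of the weight-adjusted maternal dose."""
    infant_dose_mg_kg_day = milk_conc_ug_ml * milk_intake_ml_kg_day / 1000.0
    maternal_dose_mg_kg_day = maternal_dose_mg / maternal_weight_kg
    return 100.0 * infant_dose_mg_kg_day / maternal_dose_mg_kg_day

# Rough average milk concentration of ~2.1 ug/ml after a single 150 mg
# oral dose (mean of the 2-48 h samples summarized above).
print(f"RID ~ {relative_infant_dose(2.1, 150.0):.0f}%")
```

With these assumptions the estimate (about 13%) lands in the same range as the 17% reported in the studies, which is the order-of-magnitude check the calculation is meant to provide.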
Fluconazole administered during pregnancy causes birth-defects
Liu et al. [43] reviewed 8 cohort studies and one case-control study. Oral fluconazole exposure during pregnancy increased the risk of congenital heart defects and itraconazole caused eye defects in foetuses.
Mølgaard-Nielsen et al. [44] studied the association between oral fluconazole exposure during pregnancy and the risk of spontaneous abortion and stillbirth. Among 3,315 women exposed to oral fluconazole from 7 through 22 weeks of gestation, 147 (4.4%) experienced a spontaneous abortion. Among 5,382 women exposed to fluconazole from week 7 to delivery, 21 (0.4%) experienced a stillbirth, compared to 77 of 21,506 unexposed women (0.3%). Using topical azole exposure as the comparison, 137 of 2,823 women (4.8%) exposed to systemic fluconazole versus 118 of 2,823 (4.2%) exposed to topical azoles had a spontaneous abortion; 20 of 4,301 (0.4%) exposed to systemic fluconazole versus 22 of 4,301 (0.5%) exposed to topical azoles had a stillbirth. Use of oral fluconazole in pregnancy was associated with a statistically significant increased-risk of spontaneous abortion compared to the risk among unexposed women and women with topical azole exposure in pregnancy.
Howley et al. [45] observed that, of 43,257 mothers analysed, 44 case mothers of infants with birth-defects and only 6 control mothers had received fluconazole. Six exposed infants had cleft lip with cleft palate, 4 had an atrial septal defect, and each of the following defects had 3 exposed cases: hypospadias, tetralogy of Fallot, d-transposition of the great arteries, and pulmonary valve stenosis. Fluconazole use was associated with cleft lip with cleft palate (odds ratio=5.53, confidence interval=1.68-18.24) and d-transposition of the great arteries (odds ratio=7.56; confidence interval=1.22-35.45). The associations between fluconazole and both cleft lip with cleft palate and d-transposition of the great arteries are consistent with earlier published case reports but not with recent epidemiologic studies.
Pursley et al. [46] enrolled three infants born to women who received fluconazole through or beyond the first trimester of pregnancy. All infants had congenital anomalies; no other drugs were implicated, and only one of the three infants survived. Their anomalies, similar to those observed in animal studies, were largely craniofacial, skeletal (i.e., thin, wavy ribs and ossification defects), and cardiac. One of these infants was previously reported as having Antley-Bixler syndrome; however, given the chronology described herein and the similarity of this infant to the others, these deformities may also represent the potent teratogenic effect of fluconazole.
Bérard et al. [47] assessed low- and high-dose fluconazole exposure during pregnancy and the occurrence of spontaneous abortions, major malformations, and stillbirths. A cohort of 320,869 pregnancies was included in the analysis of spontaneous abortions, 226,599 in the analysis of major congenital malformations, and 7,832 in the analysis of stillbirths. Most women (69.5%) received 150 mg fluconazole (low-dose); the remainder received > 150 mg fluconazole (high-dose). Use of oral fluconazole during early pregnancy was associated with an increased-risk of spontaneous abortion compared to no exposure. Low-dose exposure to fluconazole during the first trimester did not increase the risk of overall major congenital malformations; however, exposure to high-dose fluconazole during the first trimester was associated with an increased-risk of cardiac septal closure abnormalities compared to no exposure. No association was found between exposure to fluconazole during pregnancy and risk of stillbirth. Any maternal exposure to fluconazole during pregnancy may increase the risk of spontaneous abortion, and doses higher than 150 mg increase the risk of cardiac septal closure abnormalities.
Fluconazole trials in infants and children
Wilkerson et al. [48] observed that, in neonatology, well-designed evidence-based practices guide practitioners. Several large randomized controlled trials have been conducted to explore fluconazole prophylaxis in preterm infants. Despite the findings of these studies, practice varies among units. In a recent survey of members of the American Academy of Paediatrics, 34% of clinicians indicated that they have used antifungal prophylaxis, and only 11% indicated that a written protocol was in place in their neonatal intensive care units. Sixty-six percent of paediatricians used fluconazole, 59% oral nystatin, and 21% intravenous amphotericin B. There is a need to elaborate a guideline to optimize and standardize fungal treatment in paediatric patients.
Ku and Smith [49] observed that determining the right dose of drugs is critically important in infants because they have significant differences in physiology affecting drug absorption, distribution, metabolism, and elimination, which makes extrapolating dosing-regimens from adults and older children inappropriate. Specialized analytical techniques, such as the use of dried blood spots, scavenged sampling, population pharmacokinetic analysis, and sparse sampling, have helped investigators better define doses which maximize efficacy and safety. Use of these methods resulted in successful clinical trials and optimized dosing-regimens in this population.
Schware et al. [50] determined optimal therapy in 38 infants; the majority of them were preterm, with a mean birth-weight of 1,120 gram, born at a postmenstrual age of 23 to 38 weeks, and suffering from a systemic Candida mycosis, mostly caused by Candida albicans. Fluconazole treatment (5 to 6 mg/kg daily) was initiated at 5 weeks of life, and the median duration of therapy was 21 days. Clinical cure- or improvement-rates were reported in 31 of 38 infants (81.6%), and a mycological cure-rate was achieved in 25 out of 32 infants (78.1%). Fluconazole was found to be an effective antifungal therapy, and no serious adverse-effects were observed. Two infants (5.2%) with megaureter-megacystis-hydronephrosis syndrome and severe meningoencephalitis showed a mild increase in liver enzymes. Fluconazole therapy is effective for systemic and other forms of candidiasis in these infants.
Manzoni et al. [51] observed that infants in neonatal intensive care units are at high-risk of invasive fungal infection, mostly caused by Candida species. This infection-rate is increasing, leading to high morbidity- and mortality-rates, and causes frequent neurodevelopmental disabilities in survivors. Fluconazole is the best option to decrease Candida infection and to prevent disease burden. This antifungal is a suitable strategy and its efficacy was proven in different studies. Nevertheless, the use of this azole in high-risk preterm infants admitted to neonatal intensive care is not yet standardized.
Kaufman [52] observed that a better understanding of adherence factors, molecular diagnostics, and risk-factors is important in the treatment of fungal infection. The INT1 gene is associated with enhanced colonization and dissemination in humans. Dissemination is probably caused by yeast cell adherence and invasion, whereas tissue injury may be related to filamentous formation. The PCR technique has demonstrated promise in infants with bloodstream infection. At the time of fungal sepsis, < 28 weeks of postmenstrual age, thrombocytopenia, and previous exposure to broad-spectrum antibiotics continue to be risk-factors of infection. Fluconazole empiric therapy is still being defined and investigated, in order to prevent fungal infection and colonization in high-risk very-low-birth-weight infants. Multicentre fluconazole clinical trials are important to confirm drug safety and efficacy, and empiric treatment to test safety and outcomes is urgently needed.
Viscoli et al. [53] administered fluconazole (6 mg/kg daily, either orally or intravenously) to 24 children, all with predisposing conditions such as HIV-infection, cancer, organ or bone marrow transplantation, malnutrition, and obstructive uropathy. Two children with fungemia due to Candida parapsilosis required an increased dosage of 12 mg/kg. Clinical and microbiological successes were achieved in 30 of 34 episodes (88.2%). Drug-related transaminase increase was observed in only two cases (5.9%). Fluconazole represents an effective alternative to amphotericin B in the treatment of candidiasis in children.
Driessen [54] reported their experience with fluconazole in 21 infants who developed Candida septicaemia and were treated with oral fluconazole over one year. Therapy was continued for at least one week after the first negative culture was observed. Clinical and microbiological cure-rates were similar, at 90.5% of infants. No serious renal, haematological, or hepatic complications were detected; mild hepatotoxicity, evidenced by elevated enzymes, was observed in a third of infants. Relapse occurred in only one infant (4.8%) who received inadequate fluconazole doses. Fluconazole is a safe and effective alternative for the management of systemic candidiasis in infants.
Frattarelli et al. [55] observed that fluconazole was the most studied antifungal agent and showed slightly less variability than other antifungal agents in infants. Genetic factors which affect the metabolism of fluconazole may explain some of the observed variability in drug effects. Amphotericin B deoxycholate is primarily nephrotoxic; it also induces electrolyte abnormalities and is, to a lesser degree, cardiotoxic. Fluconazole toxicity is lower than that of amphotericin B. No sufficient data are available to define the pharmacokinetic profiles, optimal dose, therapy duration, or toxicity for these drugs.
Marchisio and Principi [56] evaluated treatment efficacy in oropharyngeal candidiasis, caused by Candida species including Candida albicans, which occurred in 55% of children with a mean age of 5 years suffering from HIV-infection. Fluconazole was given at a mean dose of 3.4 mg/kg daily (range, 2.0 to 5.6) for a mean duration of 12 days (range, 6 to 28). By the end of treatment, 90.0% of children were clinically cured, 6% were improved, and 4% failed to respond. Candida was eradicated in 82% of children. Clinical failure occurred only in children given a daily dose of 3 mg/kg or less. Two and four weeks after therapy, clinical cure was confirmed in 88 and 82% of children, respectively, and infective agents were eradicated in 76% of children. Six children experienced mild adverse-effects (1 skin rash, 5 mild elevations of liver enzymes). Fluconazole was found to be safe and effective in treating oropharyngeal candidiasis in HIV-infected children.
Fluconazole metabolism
Godamudunage et al. [57] described the effects of azole compounds on cytochrome P450 (CYP) enzymes. CYP3A enzymes metabolize up to 50% of human drugs. While CYP3A4 is the major enzyme in adults, CYP3A7 is the major form in infants aged 6 to 12 months. There are some significant differences between CYP3A4 and CYP3A7. Azoles are effective for treatment of common fungal infections in infants, but can also be rather nonselective CYP inhibitors. In addition to their clinical relevance, azoles constitute a useful series of CYP3A active site probes. These authors evaluated the interactions of different azoles with purified, recombinant human CYP3A4 and CYP3A7. Fluconazole and fosfluconazole demonstrated unusual binding characteristics to these CYP enzymes. Differences exist between CYP3A7 and CYP3A4 interactions with fluconazole and fosfluconazole. Such differences may underlie differential metabolism of common drugs at different life stages and inform dosing in infants versus adults.
Niwa et al. [58] compared the effects of five antifungal drugs, fluconazole, itraconazole, micafungin, miconazole, and voriconazole, on CYP2C9-mediated tolbutamide hydroxylation, CYP2C19-mediated S-mephenytoin 4'-hydroxylation, and CYP3A4-mediated nifedipine oxidation activities in human liver microsomes. The IC 50 value against tolbutamide hydroxylation was the lowest for miconazole (2.0 µM), followed by voriconazole (8.4 µM) and fluconazole (30.3 µM). Similarly, the IC 50 value against S-mephenytoin 4'-hydroxylation was the lowest for miconazole (0.33 µM), followed by fluconazole (12.3 µM). These results suggest that miconazole is the strongest inhibitor of CYP2C9 and CYP2C19, followed by voriconazole and fluconazole, whereas micafungin would not cause clinically significant interactions with other drugs that are metabolized by CYP2C9 or CYP2C19. The IC 50 value of voriconazole against nifedipine oxidation was comparable with those of fluconazole and micafungin and higher than those of itraconazole and miconazole. Potentiation of the inhibition of CYP2C9-, CYP2C19-, or CYP3A4-mediated reactions by 5-min preincubation was not observed for any of the antifungal drugs, suggesting that these are not mechanism-based inhibitors.
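In-vitro IC 50 values of this kind are commonly carried into a first-order, static prediction of the in-vivo interaction magnitude. The sketch below is a minimal illustration of that reasoning only, assuming competitive inhibition (Ki ≈ IC 50/2) and an illustrative unbound inhibitor concentration; the fraction-metabolized and concentration values are assumptions, not data from Niwa et al. [58].

```python
# Hedged sketch: static prediction of a drug-drug interaction from an
# in-vitro IC50, using AUC ratio = 1 / (fm/(1 + I/Ki) + (1 - fm)).
# Ki ~ IC50/2 is a common approximation for competitive inhibition.

def auc_ratio(fm: float, inhibitor_conc_um: float, ic50_um: float) -> float:
    """Fold increase in victim-drug AUC; fm = fraction metabolized by the inhibited CYP."""
    ki = ic50_um / 2.0                       # competitive-inhibition approximation
    inhibited_fraction = fm / (1.0 + inhibitor_conc_um / ki)
    return 1.0 / (inhibited_fraction + (1.0 - fm))

# Example: a victim drug 90% cleared by CYP2C19, fluconazole IC50 = 12.3 uM,
# and an assumed unbound fluconazole concentration of 25 uM.
print(f"Predicted AUC ratio: {auc_ratio(0.9, 25.0, 12.3):.1f}")   # ~3.6-fold
```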
Fluconazole pharmacokinetics in infants
Leroux et al. [59] studied fluconazole pharmacokinetics in 18 infants with mean postmenstrual age, postnatal age, and birth-weight of 28 weeks + 2 days, 13.5 days, and 995 gram, respectively; six infants were born prematurely with a postmenstrual age from 23 to 32 weeks. Ten infants (55.5%) were infected by Candida species and one infant had blood and cerebrospinal fluid infected by Candida species. Infants received fluconazole by intravenous infusion; the loading dose was 25 mg/kg, the maintenance daily dose was 12 or 20 mg/kg and varied according to postmenstrual age, and treatment duration was at least 5 days. Some infants were co-treated with amoxicillin (N=5, 27.8%), vancomycin (N=9, 50.0%), inotropic agents (N=6, 33.3%), diuretics (N=2, 11.1%) or caffeine (N=9, 50.0%). The following pharmacokinetic parameters are expressed as the mean and (range): total body clearance was 0.015 L/h/kg (0.008 to 0.039), distribution volume was 0.91 L/kg, elimination half-life was 40.9 hours (16.2 to 78.4), and AUC 0-24 hours was 491 mg.h/L (406 to 572) on the first treatment day. Total body clearance and distribution volume were corrected for birth-weight. At steady-state, all infants reached the target systemic exposure of AUC 0-24 hours ≥ 400 mg.h/L; however, 6 infants (33.3%) did not achieve the AUC 0-24 hours target value of 800 mg.h/L. Monte Carlo simulations showed that the fluconazole target attainment rate increased from 30 to 96% at 24 hours with the use of a 25 mg/kg loading dose. When using the same maintenance dose without a loading dose, target attainment was delayed to 48 hours of treatment. The pharmacokinetic/pharmacodynamic index of AUC/MIC > 50 for Candida species was achieved in most infants with an MIC breakpoint ≤ 8 µg/ml. This issue can be critical in preterm infants who urgently require high and rapidly effective amounts of antifungal drugs owing to the very high severity of systemic candidiasis. These results confirm the necessity of a fluconazole loading dose of 25 mg/kg, followed by a maintenance dose of 12 or 20 mg/kg daily in infants with < 30 weeks and ≥ 30 weeks of postmenstrual age, respectively; these dosing-regimens reduce the time needed to reach the target AUC/MIC, providing an important therapeutic benefit in such vulnerable patients. Fluconazole was well tolerated in all infants, and the loading and daily-doses, higher than those recommended, enable the target AUC to be reached earlier. In another study [60], fluconazole total body clearance was highly variable and ranged from 9 to 27 mL/kg/h. After the loading dose, 5 of 8 infants (62.5%) achieved the therapeutic target AUC 0-24 hours > 400 mg.h/L, and all infants achieved a 24-hour trough concentration > 8 µg/ml. AUC 0-24 hours was 493 mg.h/L (range, 271 to 499); the highest value was observed in an infant with elevated serum creatinine (1.2 mg/dl), and an inverse relationship was observed between fluconazole total body clearance and serum creatinine. Fluconazole is eliminated by glomerular filtration, and this relationship confirms that infants with renal failure eliminate fluconazole less rapidly. Elimination half-life, elimination rate constant and AUC 0-24 hours were 91.4 hours, 0.010 h-1 and 338 mg.h/L, respectively, in one infant supported by extracorporeal membrane oxygenation; these values lie at the extremes of the pharmacokinetic parameters of the whole population.
These findings suggest that extracorporeal membrane oxygenation may be responsible for part, or even the majority, of fluconazole clearance. Two infants suffering from severe anasarca did not achieve the therapeutic target value. None of the infants reached an AUC 0-24 hours > 800 mg.h/L, which is the recommended therapeutic target for immune-compromised adults with Candidaemia. All infants tolerated this dosing-regimen well, but rare serious hepatotoxicity occurred in patients taking fluconazole. However, this toxicity was not related to dosage, total drug exposure, sex, or age; it was reversible, and therapy was not discontinued. A loading dose of 25 mg/kg achieved the desired therapeutic target in most critically ill infants.
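The exposure targets quoted above follow directly from steady-state mass balance: for a given daily dose, AUC 0-24 hours equals the daily dose divided by clearance. The sketch below is a minimal illustration using the mean clearance reported by Leroux et al. [59]; the function name and the dose loop are illustrative.

```python
# Hedged sketch: steady-state AUC(0-24h) = daily dose / clearance, checked
# against the exposure targets discussed for the Leroux et al. [59] cohort.

def steady_state_auc(daily_dose_mg_kg: float, clearance_l_h_kg: float) -> float:
    """AUC(0-24h) in mg.h/L for a weight-normalized daily dose and clearance."""
    return daily_dose_mg_kg / clearance_l_h_kg

cl = 0.015                                   # mean total body clearance, L/h/kg [59]
for dose in (12.0, 20.0):                    # maintenance doses studied, mg/kg/day
    auc = steady_state_auc(dose, cl)
    ok_auc = auc >= 400.0                    # exposure target, mg.h/L
    ok_pd = auc / 8.0 > 50.0                 # AUC/MIC > 50 at the 8 ug/ml breakpoint
    print(f"{dose:4.0f} mg/kg/day -> AUC = {auc:.0f} mg.h/L, "
          f"AUC target met: {ok_auc}, AUC/MIC target met: {ok_pd}")
```

For the mean clearance, the 12 mg/kg daily maintenance dose yields an AUC 0-24 hours of about 800 mg.h/L, which is consistent with the target attainment reported in the study.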
Murakoso et al. [61] systematically reviewed fluconazole pharmacokinetic data and renal function in infants and adults. Total body clearance normalized by body-surface-area (BSA) (CL BSA) was 1/3 to 1/4 of the adult value, but CL BSA rapidly increased during the neonatal and infantile periods and attained near-adult values at a postmenstrual age of 60 weeks. A significant correlation was observed between CL BSA and postmenstrual age in infants: CL BSA (ml/min/1.73 m2) = 0.26*postmenstrual age (weeks) - 4.9 (r=0.68, P-value < 0.001). In addition, the developmental time course of glomerular filtration-rate normalized to BSA (GFR BSA) fitted well with a sigmoidal model with a maximum GFR BSA of 149 ml/min/1.73 m2, a postmenstrual age associated with 50% of GFR BSA,max (PMA 50) of 54 weeks, and a Hill-coefficient of 3.7. The following correlation was found between fluconazole clearance and glomerular filtration-rate in infants: clearance (ml/min) = 0.34*glomerular filtration-rate (ml/min) - 0.53 (r=0.84, P-value < 0.001). Assuming that the fluconazole plasma concentration required for treating fungal infection is comparable between children and adults, fluconazole doses for paediatric patients at the above postmenstrual ages may be predicted from adult doses (such as 100 mg daily) using size-normalized clearance as a scaling factor. The predicted doses for infants were largely within the ranges recommended in prescribing information.
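These maturation relationships are simple enough to apply directly. The sketch below is a minimal implementation of the three reported equations; the regression and sigmoid parameters are taken from the text of [61], while the function names and example ages are illustrative.

```python
# Hedged sketch of the maturation relationships reported by Murakoso et al. [61].

def cl_bsa(pma_weeks: float) -> float:
    """BSA-normalized fluconazole clearance (ml/min/1.73 m2) vs postmenstrual age."""
    return 0.26 * pma_weeks - 4.9

def gfr_bsa(pma_weeks: float, gfr_max: float = 149.0,
            pma50: float = 54.0, hill: float = 3.7) -> float:
    """Sigmoidal maturation of BSA-normalized GFR (ml/min/1.73 m2)."""
    return gfr_max * pma_weeks**hill / (pma50**hill + pma_weeks**hill)

def fluconazole_cl_from_gfr(gfr_ml_min: float) -> float:
    """Fluconazole clearance (ml/min) predicted from GFR (ml/min)."""
    return 0.34 * gfr_ml_min - 0.53

for pma in (28, 40, 60):   # illustrative postmenstrual ages, weeks
    print(f"PMA {pma} wk: CL_BSA = {cl_bsa(pma):.1f}, "
          f"GFR_BSA = {gfr_bsa(pma):.0f} ml/min/1.73 m2")
```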
Momper et al. [62] characterized fluconazole population pharmacokinetics and the dosing-regimen required in 141 preterm infants suffering from invasive candidiasis, whose postmenstrual age, postnatal age, and birth-weight were 28.3 weeks (range, 23.7 to 35.1), 23 days (range, 3 to 47), and 710 gram (range, 345 to 2,680), respectively. Eighty-one percent of infants were intubated, and 67% of infants were delivered by Caesarean section. Fifty-three percent of infants were black or African American, 40% were white, 5% were American Indian or Alaska native and 1% were Asian. Infants were treated with fluconazole 6 mg/kg, intravenously infused or orally administered, twice-weekly for up to 42 days of treatment. Each infant had two pharmacokinetic plasma samples drawn after a single dose, taken around the administration time and at 3, 5, 7, or 9 hours after dosing, and one sample taken around the administration of the final dose. A one-compartment model with the first-order conditional estimation method was used to describe the fluconazole concentration data. The majority of samples (N=368, 61%) were scavenged samples. The following plasma concentrations were obtained: scavenged samples (median 4.1 µg/ml, range, 0.5 to 14.0) and timed samples (median 6.1 µg/ml, range, 0.3 to 13.2). The multivariable process started with the elemental components of postmenstrual age (postnatal and gestational ages) rather than postmenstrual age itself. Sequential removal was performed in reverse order of the magnitude of the objective function value change seen with each covariate in the univariable screening process. Attempts to remove gestational age (clearance), postnatal age (clearance), and serum creatinine (clearance) each resulted in increases of > 10 in the objective function value. Finally, a model using serum creatinine (clearance) and postmenstrual age (clearance), as a function of gestational and postnatal ages, was assessed and performed better than the model with serum creatinine (clearance), gestational age (clearance), and postnatal age (clearance), with an objective function value reduction of 37.3 despite having one fewer covariate. The model-estimated absolute oral fluconazole bioavailability was 100%, which is in agreement with prior adult data. No significant relationships were observed between fluconazole clearance or distribution volume and sex, race, ethnicity, intubation, or mode of infant delivery. Population pharmacokinetic parameters were: clearance (L/h/kg^0.75)=0.0127*(serum creatinine concentration/0.8)^-0.41*(postmenstrual age/28)^2.05; distribution volume (L/kg)=1.00; absorption rate constant (h-1)=0.96; bioavailability=100% (where serum creatinine concentration is in mg/dl and postmenstrual age is in weeks). Interindividual variability was estimated as 23% for clearance, 13% for distribution volume, and 25% for bioavailability. A percentage of 98.7% of bootstrap runs resulted in ≥ 3 significant digits, and the medians of the bootstrap fixed-effect parameter estimates were within 1% of the population estimates from the original data set for all parameters. Using Monte Carlo simulations, fluconazole exposure from a dose of 6 mg/kg twice-weekly was assessed. Trough concentration was determined during an 8-week fluconazole course, and the predose concentration was compared to a minimum target of 2 µg/ml. This threshold was exceeded in 80% of simulated infants at week 1, and in 59% of simulated infants at week 4 of fluconazole prophylaxis.
These results are consistent with 95.7% of the first measured concentrations being > 2 µg/ml, and with 89.9% of the overall fluconazole concentrations being > 2 µg/ml. Table 1 summarizes the fluconazole pharmacokinetic model parameters.
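The covariate model for clearance reported by Momper et al. [62] can be evaluated directly for an individual infant. The sketch below is a minimal illustration of that equation; the example infant (weight, serum creatinine, postmenstrual age) is hypothetical.

```python
# Hedged sketch of the individual-clearance covariate model from Momper
# et al. [62]: CL (L/h per kg^0.75) = 0.0127 * (SCr/0.8)^-0.41 * (PMA/28)^2.05.

def fluconazole_clearance(weight_kg: float, scr_mg_dl: float, pma_weeks: float) -> float:
    """Individual fluconazole clearance in L/h, allometrically scaled by weight."""
    cl_per_kg075 = 0.0127 * (scr_mg_dl / 0.8) ** -0.41 * (pma_weeks / 28.0) ** 2.05
    return cl_per_kg075 * weight_kg ** 0.75

# Hypothetical preterm infant: 0.8 kg, SCr 0.8 mg/dl, PMA 28 weeks.
cl = fluconazole_clearance(0.8, 0.8, 28.0)
print(f"CL = {cl:.4f} L/h")   # ~0.011 L/h for this illustrative infant
```

With a distribution volume of 1 L/kg, such a clearance implies the very long elimination half-lives reported for preterm infants, which is why twice-weekly prophylactic dosing can maintain troughs above 2 µg/ml.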
Fluconazole samples used in the Wade et al. [63] study were obtained from two studies enrolling concurrently within the Paediatric Pharmacology Research Unit. Fifty-five infants were enrolled, with mean postmenstrual age, postnatal age and birth-weight of 26 weeks (range, 23 to 40), 16 days (range, 1 to 88) and 1,020 gram (range, 451 to 7,120), respectively. Infant race was Caucasian (50%), Black (40%), and other (10%); 9% were Hispanic. Twenty-three infants (41.8%) received prophylaxis from birth, 11 infants (20.0%) received prophylaxis with broad-antibiotic exposure, 8 infants (14.5%) received prophylaxis for necrotizing enterocolitis, 7 infants (12.7%) received treatment for fungal sepsis, 2 infants (3.6%) received treatment for urinary infection, and 4 infants (7.2%) were empirically treated for fungal sepsis. The primary study (study 1) was an open-label fluconazole pharmacokinetic study conducted at 8 institutions. Infants were stratified by postmenstrual age (weeks): 23 to 25, 26 to 29, 30 to 33, and ≥ 34, and by postnatal age: < 14 and 14 to 119 days. The second study (study 2) was an open-label study of a panel of antimicrobial drugs. For both studies, fluconazole dosing was determined by routine clinical practice and was intravenously infused at doses ranging from 3 to 12 mg/kg twice-daily. The following information was collected for covariate-analysis: gestational and postnatal ages, weight, urine output (ml/24 hours), serum creatinine concentration (SCRT), date of positive Candida cultures, daily-assessment, and respiratory support. Infants in study 1 were randomly assigned to one of two sampling schedules (schedule A included preinfusion, end of infusion, and 1, 6 to 8, and 20 hours post-infusion; schedule B included preinfusion, end of infusion, and 3, 10 to 12, 24, and 48 hours post-infusion). Monte Carlo simulation replicates of the original data were used to explore the impact of postnatal and gestational ages on pharmacokinetic parameters. A target exposure of AUC 0-24 hours = 800 mg.h/L ensures that exposure exceeds the pharmacodynamic target of an AUC/MIC value > 50 for Candida species with an MIC of 8 µg/ml at the clinical and laboratory standards institute sensitivity breakpoint. Table 2 summarizes fluconazole population pharmacokinetic parameters in these infants.
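The pharmacodynamic arithmetic behind these targets is straightforward: the minimum AUC 0-24 hours needed is the AUC/MIC target multiplied by the MIC, so the 800 mg.h/L target gives a two-fold margin over the minimum at the 8 µg/ml breakpoint. The sketch below is a minimal illustration of this check; the function name is illustrative.

```python
# Hedged sketch: minimum AUC(0-24h) needed to satisfy an AUC/MIC target,
# as used in the pharmacodynamic reasoning of Wade et al. [63].

def required_auc(target_auc_mic_ratio: float, mic_ug_ml: float) -> float:
    """Minimum AUC(0-24h), in mg.h/L, needed to meet an AUC/MIC target."""
    return target_auc_mic_ratio * mic_ug_ml

for mic in (2.0, 4.0, 8.0):   # illustrative MIC values, ug/ml
    print(f"MIC {mic} ug/ml -> AUC(0-24h) must exceed "
          f"{required_auc(50.0, mic):.0f} mg.h/L")
```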
Fluconazole pharmacokinetics in children
Watt et al. [64] determined fluconazole pharmacokinetics in 21 children with mean postmenstrual age, postnatal age, and body-weight of 40 weeks, 22 days, and 3.4 kg, respectively, receiving extracorporeal membrane oxygenation (ECMO) support, and in 19 children without ECMO support, with mean postmenstrual age, postnatal age, and body-weight of 39 weeks, 13 days, and 3.2 kg, respectively. Fluconazole samples were obtained from three prospective trials. Study 1 was a single-centre open-label pharmacokinetic study of 20 children; the antifungal was administered at a dose of 25 mg/kg once-weekly for prophylaxis of fungal infection. Study 2 was a single-centre pharmacokinetic study in 12 critically ill children aged < 1 year, one of whom had ECMO support, in which a fluconazole loading dose was administered. Study 3 was a multicentre pharmacokinetic study in 8 infants with postmenstrual and postnatal ages of 23 to 42 weeks and < 120 days, respectively, who were treated with fluconazole for prevention or treatment of candidiasis. Interindividual random effects for clearance and distribution volume, and both diagonal and block Omega matrices for covariance, were explored, and an exponential model for interindividual variance was used. Body-weight was incorporated into the base model before evaluation of other covariates, due to multicollinearity with other clinical covariates. Both linear and allometric scaling of weight were assessed for total body clearance. For distribution volume and intercompartmental clearance parameters, size-based scaling parameters were incorporated using a linear relationship with body-weight. The following covariates were evaluated: ECMO support, volume of blood required to prime the ECMO circuit, ratio of blood prime volume to the estimated native blood volume of the child, hemofiltration, use of conventional venovenous haemodialysis, serum creatinine concentration, albumin, AST, and ALT levels, post-neonatal age, sex, and race. Fluconazole exhibited time-dependent fungistatic activity with a prolonged post-antifungal effect, and efficacy was most associated with an AUC/MIC ratio > 50. For treatment, the target minimum AUC 0-24 hours of 400 µg.h/ml was obtained in 90% of children. This value achieved the target AUC/MIC ratio, assuming an MIC of 8 µg/ml, which is the clinical and laboratory standards institute sensitivity breakpoint for all Candida species. Monte Carlo simulations, using parameter estimates from the final model, were used to explore the dose-exposure relationship. Children were stratified by the presence or absence of ECMO support. Peak and trough concentrations were used to calculate the AUC for each of 14 simulated dosing intervals, using the linear-up log-down trapezoidal approach and the equation for an intermittent infusion. Table 3 shows the final population pharmacokinetic parameters, and Table 4 shows Bayesian estimates of overall distribution volume and clearance by age group in children with or without ECMO support.
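The linear-up log-down trapezoidal rule mentioned above combines linear trapezoids over rising segments of the concentration-time curve with logarithmic trapezoids over falling segments. The sketch below is a minimal implementation of that rule; the example concentration profile is illustrative, not data from [64].

```python
import math

# Hedged sketch of the linear-up log-down trapezoidal AUC rule used in the
# Monte Carlo simulations of Watt et al. [64].

def auc_lin_up_log_down(times_h, concs):
    """AUC (conc * h) over the sampled interval."""
    auc = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times_h, concs), zip(times_h[1:], concs[1:])):
        dt = t1 - t0
        if c1 >= c0 or c0 <= 0 or c1 <= 0:
            auc += dt * (c0 + c1) / 2.0                 # linear trapezoid (rising)
        else:
            auc += dt * (c0 - c1) / math.log(c0 / c1)   # log trapezoid (falling)
    return auc

# Illustrative peak/trough profile (ug/ml) over one 24-h dosing interval.
print(f"AUC ~ {auc_lin_up_log_down([0, 1, 24], [0.5, 20.0, 8.0]):.0f} ug.h/ml")
```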
Cristofoletti et al. [3] observed that fluconazole total body clearance is higher in children than in adults; thus it is recommended that a higher relative dose on a mg/kg basis should be administered to children in order to achieve systemic exposure similar to adults. These authors validated the adult whole-body physiologically based pharmacokinetic and absorption models; the paediatric counterpart was developed by changing the system component of the model to reflect the specific anatomy, physiology, and biochemistry of the paediatric group under study. This approach requires some fundamental assumptions: (1) the drug undergoes the same metabolic pathway in adults and paediatrics; (2) the model structure is similar in both populations; and (3) unless otherwise stated, variability in terms of anatomy, physiology, and biochemistry is considered similar. Once a preliminary first-in-paediatric dose is defined using modelling and simulation tools, a subsequent confirmatory pharmacokinetic study is necessary to assess whether systemic exposures, and consequently therapeutic responses, in adults and children would indeed be similar. A relative bioavailability study enrolling healthy adult volunteers was conducted to compare the paediatric formulation with the approved adult drug product. There is often little or no consideration given to differences in gastrointestinal physiology between adult and paediatric patients. Fluconazole is cleared primarily by the kidney, with approximately 80% of the administered dose appearing in the urine as unchanged drug. The nonrenal clearance value after intravenous fluconazole administration was estimated from total body clearance (i.e. nonrenal clearance = total body clearance - renal clearance). Table 5 shows fluconazole properties taken from the literature or estimated according to the method described in the table, and the absorption of 2 fluconazole doses in children and adults.
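Matching paediatric exposure to adult exposure, as described above, reduces to scaling the dose by the clearance ratio, since steady-state AUC = dose/clearance. The sketch below is a minimal illustration of this logic only; the clearance values used are illustrative assumptions, not parameters from [3].

```python
# Hedged sketch: exposure-matched paediatric dosing. If AUC = dose / CL,
# then the child dose that reproduces an adult AUC scales with the
# child-to-adult clearance ratio (per kg). Values below are illustrative.

def exposure_matched_dose(adult_dose_mg_kg: float,
                          child_cl_l_h_kg: float,
                          adult_cl_l_h_kg: float) -> float:
    """Child dose (mg/kg) giving the same steady-state AUC as the adult dose."""
    return adult_dose_mg_kg * (child_cl_l_h_kg / adult_cl_l_h_kg)

# Example: a child clearing the drug twice as fast per kg needs twice the mg/kg dose.
print(f"{exposure_matched_dose(3.0, 0.030, 0.015):.1f} mg/kg")  # -> 6.0 mg/kg
```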
Lee et al. [65] evaluated fluconazole pharmacokinetics in 26 children, aged 5 to 15 years, with normal renal function, who received treatment for cancer. Fluconazole was intravenously infused at doses of 2, 4, or 8 mg/kg for 7 days. Fluconazole showed linear first-order kinetics over the dosage range tested and during multiple dosing. After the first dose, mean total body clearance, distribution volume and elimination half-life were 22.8 ± 2.3 ml/min, 0.87 ± 0.06 L/kg, and 16.8 ± 1.1 hours, respectively. Similarly, after the last dose, total body clearance, distribution volume, and elimination half-life were 19.4 ± 1.3 ml/min, 0.84 ± 0.04 L/kg, and 18.1 ± 1.2 hours, respectively. Following a dose of 8 mg/kg, peak and trough serum concentrations were 9.5 ± 0.4 and 2.7 ± 0.5 µg/ml, respectively, and AUC 0-∞ was 186 ± 16 µg.h/ml. Fluconazole renal clearance was 65 ± 5% of the total body clearance, demonstrating the predominantly renal excretion of this drug.
Mechanisms of fungal-resistance to fluconazole
Candida lusitaniae is usually susceptible to echinocandins. β-1,3-glucan synthase, encoded by FKS genes, is the target of echinocandins. A few missense mutations in Candida lusitaniae FKS1 hot-spot 1 (HS1) have been reported. Asner et al. [66] reported the rapid emergence of antifungal-resistance in Candida lusitaniae isolated during therapy with amphotericin B, caspofungin, and azoles for treatment of persistent Candidaemia in an immunocompromised child with severe enterocolitis and visceral adenoviral disease. As documented by restriction fragment length polymorphism and random amplified polymorphic DNA analysis, the five Candida lusitaniae isolates examined were related to each other. From antifungal susceptibility and molecular analysis, 5 different profiles were obtained. These profiles included the following: profile 1 (caspofungin MIC=0.5 µg/ml, fluconazole MIC=0.25 µg/ml), determined while the child was being treated with liposomal amphotericin B for 3 months; profile 2 (fluconazole MIC=0.25 µg/ml, caspofungin MIC=4 µg/ml), while the child was being treated with caspofungin for 2 weeks; profile 3 (caspofungin MIC=0.5 µg/ml, fluconazole MIC=32 µg/ml), while the child was being treated with azoles and caspofungin initially, followed by azoles alone for one week; profile 4 (caspofungin MIC=8 µg/ml, fluconazole MIC=8 µg/ml), while the child was being treated with both drugs for 3 weeks; and profile 5 (amphotericin B MIC=0.125 µg/ml, caspofungin MIC=8 µg/ml), while the child was being treated with amphotericin B and fluconazole for 2 weeks. Caspofungin-resistance was associated with resistance not only to micafungin and anidulafungin but also to amphotericin B. Analysis of caspofungin-resistance revealed 3 novel FKS1 mutations in caspofungin-resistant isolates (S638Y in profile 2, S631Y in profile 4, and S638P in profile 5). While S638Y and S638P are within HS1, S631Y is in close proximity to this domain but was confirmed to confer candin-resistance using a site-directed mutagenesis approach. Fluconazole-resistance could be linked with overexpression of the major facilitator gene 7 (MFS7) in Candida lusitaniae profiles 2 and 4 and was associated with resistance to 5-fluorocytosine. While candin- or azole-resistance followed monotherapy, multidrug antifungal-resistance emerged during combined therapy.
Marchaim et al. [68] reported an increased occurrence of secondary fluconazole-resistance, analysed its risk-factors, and described the management of fluconazole-refractory vaginitis. Twenty-five women with vaginitis caused by fluconazole-resistant Candida albicans (MIC ≥ 2 µg/ml) were enrolled. The study cohort consisted mainly of married, insured, white women aged > 12 years with formal education and average or above-average socioeconomic status. Median fluconazole MIC was 8 µg/ml (range, 2 to 128). Risk-factors for mycological failure included increased fluconazole consumption (P-value=0.03) in 16 of 25 women (72.7%) exposed to low-dose weekly maintenance therapy. All women were successfully treated, although treatment was difficult and often prolonged. Vaginitis caused by fluconazole-resistant Candida albicans was previously considered rare. All women had fluconazole-consumption in the previous 6 months. Management of fluconazole-refractory disease is extremely difficult, with limited options, and new therapeutic modalities are needed.
Krcmery and Barnes [69] observed that non-albicans Candida causes 35 to 65% of all Candidaemias in hospitalized patients, occurs more frequently in diseased patients, and appears in children with a frequency of 1 to 35%. The proportion of non-albicans Candida species has risen over the last two decades, from 10 to 40% of Candidaemias to 35 to 65%. The most common non-albicans Candida were: Candida parapsilosis (20 to 40% of all Candidaemias), Candida tropicalis (10 to 30%), Candida krusei (10 to 35%), and Candida glabrata (5 to 40%). At least two other species were emerging: Candida lusitaniae and Candida guilliermondii, causing infection in 2 to 8% and 1 to 5%, respectively. Other non-albicans Candida species, such as Candida rugosa, Candida kefyr, Candida stellatoidea, Candida norvegensis, and Candida famata, are rare. The mortality-rate due to non-albicans Candida species is similar to that caused by Candida albicans, and ranges from 15 to 35%. However, there are differences in both overall and attributable mortality-rates among species: the lowest mortality-rate is associated with Candida parapsilosis, the highest with Candida tropicalis and Candida glabrata (40 to 70%). There are several specific risk-factors for particular non-albicans Candida species: Candida parapsilosis is related to foreign body insertion and to infants with hyper-alimentation; Candida krusei to azole prophylaxis and, along with Candida tropicalis, to neutropenia and bone marrow transplant recipients; Candida glabrata to azole prophylaxis, surgery, and urinary or vascular catheters; Candida lusitaniae and Candida guilliermondii to previous amphotericin B or nystatin use; and Candida rugosa to burns. Antifungal susceptibility varies significantly, in contrast to Candida albicans: some non-albicans Candida species are inherently or secondarily fluconazole-resistant, including 75% of Candida krusei isolates, 35% of Candida glabrata, 10 to 25% of Candida tropicalis, and Candida lusitaniae. Therefore, "species-directed" therapy should be administered for fungemia according to the species identified, particularly for resistant or tolerant Candida species (Candida lusitaniae and Candida guilliermondii). In-vitro susceptibility testing should be performed for most species of non-albicans Candida, in addition to removal of any foreign body, to optimize management. Several authors observed that prior antifungal consumption, in particular exposure to fluconazole, increased the risk of infection. These findings suggest the need for a closer look at fluconazole therapy as a possible risk for the development of fungal infections and to optimize therapy [68,[70][71][72][73].
Discussion
Fluconazole is a potent antifungal agent. All susceptible fungi are capable of deaminating flucytosine to 5-fluorouracil, a potent antimetabolite that is used in cancer chemotherapy. Fluorouracil is metabolized first to 5-fluorouracil-ribose monophosphate by the enzyme uracil phosphoribosyl transferase. 5-Fluorouracil-ribose monophosphate is then either incorporated into RNA (via synthesis of 5-fluorouridine triphosphate) or metabolized to 5-fluoro-2'-deoxyuridine-5'-monophosphate, a potent inhibitor of thymidylate synthase, ultimately inhibiting DNA synthesis. The selective action of flucytosine is due to the lack of cytosine deaminase in mammalian cells, which prevents metabolism of fluorouracil. Fluconazole is active against several Candida species, Blastomyces dermatitidis, Histoplasma capsulatum, Coccidioides species, Paracoccidioides brasiliensis, and ringworm fungi (dermatophytes). Fluconazole is also active against Aspergillus species, Scedosporium apiospermum (Pseudallescheria boydii), Fusarium, and Sporothrix schenckii, but these fungi are intermediate in susceptibility. Fluconazole is almost completely absorbed from the gastrointestinal-tract (mainly from the small intestine) [3][4][5][6], and it is formulated for oral dosing [2]. This drug diffuses into all body tissues, where it reaches effective concentrations, and the tissue-plasma partition coefficient is 2.03 [3]. Penetration into cerebrospinal fluid is good [38,39], fluconazole successfully cured meningitis caused by fungemia [40,41], and it is the drug of choice for treatment of coccidioidal meningitis [2]. Fluconazole also migrates into breast-milk in significant amounts; the estimated relative infant dose is 17% of the maternal dose, and the drug half-life is 30 hours in breast-milk, thus longer than that found in healthy volunteers [42]. Fluconazole was found to be effective and safe in infants and children [7][8][9][10][11][12][13]; however, it causes birth-defects when it is administered at high dose (400 mg daily) in pregnant women [43][44][45][46][47]. This antifungal was successfully used for prophylaxis, it decreased Candida albicans and non-Candida albicans infection and colonization [14][15][16][17][18], and it was confirmed to be a suitable option for fungal infection treatment [19][20][21][22][23][24][25][26][27][28][29] in both infants and children. Fluconazole is metabolized by CYP3A enzymes; while CYP3A4 is the major enzyme in adults, CYP3A7 is the major form in infants [57,58].
Fluconazole interacts with drugs which are metabolically cleared by CYP3A enzymes, and enhances or inhibits drug effects, metabolism, and pharmacokinetics [30][31][32][33][34][35][36][37]. Fluconazole therapy was found to be effective and safe for systemic forms of candidiasis in infants and children [48][49][50][51][52][53][54][55][56]. Fluconazole pharmacokinetics have been extensively studied in infants [59][60][61][62][63] and children [3,64]. In infants, the half-life is 40 to 60 hours, whereas it is < 20 hours in children. Such a difference is attributable to reduced renal function in infants, as this drug is mainly eliminated by glomerular filtration, and renal function increases with infant maturation. In children, renal and non-renal clearances are 1.1 and 0.3 L/h, respectively, indicating the prevalence of the renal elimination pathway [3]. Distribution volume is high, about 1 L/kg, indicating that fluconazole distributes into all body tissues, and the tissue-plasma partition coefficient is 2.03 [3]. Different Candida species, including non-albicans Candida, may become fluconazole-resistant [66][67][68][69][70], and the resistance-rate is species-dependent [69]. Such resistance causes serious adverse-effects in patients, including death [69], and the mortality-rate due to resistant Candida albicans and non-albicans Candida species ranges from 15 to 35% [69]. Fluconazole-resistance may be caused by gene mutation; this has been observed in Candida lusitaniae, and the mutated gene causing resistance is FKS [66]. Another manifestation of resistance is an augmented fluconazole MIC in various Candida species, and this is particularly true for Candida krusei, whose MIC=64 µg/ml [67]. Some authors observed that azole consumption determines fluconazole-resistance in various Candida species; thus a closer look at fluconazole therapy reduces the resistance-rate [68,[70][71][72][73] and optimizes therapy with this drug [69]. Rogers and Krysan [2] reviewed mechanisms of fungal-resistance. In Candida albicans, azole-resistance can be due in part to accumulation of mutations in ERG11, the gene encoding the azole target 14-α-sterol demethylase. Increased azole efflux, caused by overexpression of ABC and/or major facilitator superfamily transporters, imparts azole-resistance in Candida albicans and Candida glabrata. Overexpression of these genes is due to activating mutations in genes encoding their transcriptional regulators. Mutation of the C5,6-sterol desaturase gene ERG3 also can increase azole-resistance in some species [74]; such mutations prevent formation of the toxic product 14α-methyl-3,6-diol from 14α-methylfecosterol, and the resulting accumulation of 14α-methylfecosterol produces functional membranes and overcomes the azole effect. Increased production of 14-α-sterol demethylase, due to overexpression of ERG11, occurs owing to activating mutations in the gene encoding its transcriptional regulator Upc2. Primary azole-resistance has been described in some isolates of Aspergillus fumigatus with increased azole export and decreased ergosterol content, but the clinical significance is unknown. Decreased fluconazole-susceptibility has been described in Cryptococcus neoformans isolated from patients with AIDS failing prolonged therapy.
In conclusion, fluconazole is a potent antifungal agent. All susceptible fungi are capable of deaminating flucytosine to 5-fluorouracil, a potent antimetabolite that is used in cancer chemotherapy. Fluorouracil is metabolized first to 5-fluorouracil-ribose monophosphate by the enzyme uracil phosphoribosyl transferase. 5-Fluorouracil-ribose monophosphate is then either incorporated into RNA (via synthesis of 5-fluorouridine triphosphate) or metabolized to 5-fluoro-2'-deoxyuridine-5'-monophosphate, a potent inhibitor of thymidylate synthase, ultimately inhibiting DNA synthesis. Fluconazole is successfully used in the prophylaxis and treatment of fungal infections. Fluconazole is almost completely absorbed from the gastrointestinal-tract (mainly from the small intestine), and it is formulated for oral dosing. This drug diffuses into all body tissues, including cerebrospinal fluid and breast-milk, and it is successfully used for treating fungal meningitis. Fluconazole was found to be effective and safe in infants and children; however, it causes birth-defects when it is administered at high dose (400 mg daily) in pregnancy. Fluconazole is metabolized by CYP3A enzymes and interacts with drugs which are metabolically cleared by CYP3A enzymes, thus enhancing or inhibiting their effects, metabolism, and pharmacokinetics. Fluconazole pharmacokinetics have been extensively studied in infants and children; its half-life is 40 to 60 hours in infants, whereas it is < 20 hours in children. Such a difference is attributable to the reduced renal function of infants compared to that of children, as this drug is mainly eliminated by glomerular filtration, and renal function increases with infant maturation. In children, renal and non-renal clearances are 1.1 and 0.3 L/h, respectively, indicating the prevalence of the renal route. Distribution volume is high, about 1 L/kg, indicating that fluconazole distributes into all body tissues, and the tissue-plasma partition coefficient is 2.03. Different Candida species, including non-albicans Candida, may become fluconazole-resistant; resistance arises through gene mutation and is reflected in increased MIC values for fluconazole. Azole consumption promotes fluconazole-resistance; this must be taken into consideration in order to avoid infection-risks and optimize therapy, keeping in mind that fluconazole-resistance causes serious adverse-effects, including increased mortality.
"year": 2020,
"sha1": "0d0ab9b2c2af9197a61bc3f790660a382c260ee3",
"oa_license": "CCBY",
"oa_url": "https://www.oatext.com/pdf/CMI-5-213.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "763d2ba01620dd31fa8d53145f8d1b2f0eb0e1b3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Identification of selected genes associated with the SARS-CoV-2: a therapeutic approach and disease severity
The ongoing pandemic of COVID-19 takes its sole origin from the Wuhan Huanan seafood market, China. The first case was recorded as viral pneumonia, and the outbreak later became a worldwide pandemic (officially declared by WHO on March 11, 2020). SARS-CoV-2 is an extremely infectious and transferrable virus that causes severe conditions such as respiratory syndrome and high blood pressure and weakens the immune system. The coronavirus falls under the Coronaviridae family and the Betacoronavirus genus. Affected individuals first encounter fever, followed by severe complications like SARS, ARDS, and many others. SARS-CoV and MERS-CoV enter host cells by the endosomal pathway, and about 16 non-structural proteins are involved in assembling the viral RNA synthesis complex. They possess a positive-sense single-stranded RNA, and a few major genes are mainly associated with the development of ARDS, SARS, and other respiratory problems. Susceptibility genes such as ACE2, IL-2, IL-7, IL-10, TNF, and VEGF are associated with COVID-19. This highlights the identification of the above-mentioned genes, which can be used as potential biomarkers for early diagnosis and targeted drug delivery for treating the SARS-CoV-2 neurological symptoms and reducing inflammation in the brain.
Background
Coronavirus was first likely to emerge in Wuhan city, Hubei Province, China. It is suspected that its transmission was from an animal host and then spread to humans (Zhu et al. 2020). SARS-CoV-2 shares 79% sequence identity with SARS-CoV and 50% with MERS-CoV; it even has a high genetic similarity with the bat CoV RaTG13, but bats are not confirmed as the primary source of the virus. Common symptoms of SARS-CoV-2 infection are fever, cough, and sore throat (Poon et al. 2005). This virus is highly contagious and can transmit rapidly. Under the International Health Regulations (IHR, 2005), the outbreak was declared a PHEIC on January 30, 2020, because it had spread among 18 countries (Cascella et al. 2020). CoV belongs to the order Nidovirales, the family Coronaviridae, and the subfamily Orthocoronavirinae; it is further divided into four genera: alpha (α), beta (β), gamma (γ), and delta (δ) (Fig. 1) (Fehr and Perlman 2015). The purpose of this article is to explain the prevalence, risk factors, life cycle, structure, genetic aspects, and the significant genes involved in and associated with SARS-CoV-2.
Prevalence

The ongoing COVID-19 pandemic started to emerge in December 2019. Its severity and adverse effects increased gradually, so WHO declared it a worldwide pandemic on March 11, 2020 (Al-Tawfiq et al. 2020). The most common manifestation among affected individuals is fever, followed by several associated complications such as diarrhea and severe body pain, which add to the severity of the infection. By March 3, 2020, 73 countries, territories, or areas worldwide had encountered this infection, affecting about 90,870 individuals. Studies suggest that the pandemic may be related to bats, but it is traced to the Wuhan Huanan seafood market (Ge et al. 2020). According to WHO, as of December 20, 2020 (3.05 pm), the total number of confirmed COVID cases globally rose to 75,098,369, and the total number of deaths worldwide reached 1,680,339. Figure 2 shows that the USA ranks first among the COVID confirmed cases across the world, contributing 23.05% of all confirmed cases with a mortality rate of 18.51%, followed by India (13.35% and 8.65%), Brazil (9.53% and 11.04%), Russia (3.79% and 3.02%), and France (3.22% and 3.57%). Some of the least affected countries are Denmark (0.17% of confirmed cases and a mortality rate of about 0.06%), Malaysia (0.12% and 0.025%), Norway (0.056% and 0.024%), Finland (0.043% and 0.029%), and Zimbabwe (0.016% and 0.018%). Figure 3 illustrates the risk factors that can cause death in COVID-19: older age with hypertension and diabetes mellitus; the majority of those affected were males rather than females (Wolff 2020). Disease severity may progress due to various lifestyle factors such as smoking, obesity, prolonged hospital stays, and poor hygiene (Chou et al. 2020; Rod et al. 2020).
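Reading the percentages above as each country's share of the global totals reported for December 20, 2020, the absolute counts can be recovered directly. A minimal sketch of that arithmetic, assuming the quoted "mortality rate" denotes a country's share of global deaths rather than a case-fatality ratio (that interpretation is an assumption, not stated in the source):

```python
GLOBAL_CASES = 75_098_369   # WHO totals quoted for December 20, 2020
GLOBAL_DEATHS = 1_680_339

# (share of global confirmed cases %, share of global deaths %) as quoted in the text
shares = {
    "USA":    (23.05, 18.51),
    "India":  (13.35, 8.65),
    "Brazil": (9.53, 11.04),
    "Russia": (3.79, 3.02),
    "France": (3.22, 3.57),
}

for country, (case_pct, death_pct) in shares.items():
    cases = GLOBAL_CASES * case_pct / 100
    deaths = GLOBAL_DEATHS * death_pct / 100
    print(f"{country}: ~{cases:,.0f} cases, ~{deaths:,.0f} deaths")
# e.g., USA: ~17,310,174 cases, ~311,031 deaths
```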
Structure of coronavirus
Coronaviruses were so named because of their halo (corona) appearance under an electron microscope, and they belong to the RNA virus family (Chan et al. 2013). They are non-segmented, positive-sense RNA viruses with a genome of ~30 kb; a cap structure at the 5′ end and a poly-A tail at the 3′ end allow the genome to serve as mRNA for translation of the replicase polyproteins (Perlman and Netland 2009).
Spike glycoprotein
Coronavirus S proteins form a large group of multifunctional class I viral transmembrane proteins of 1160-1400 amino acids (Li et al. 2019). The S proteins are the crucial immunodominant proteins of CoVs and can induce host immune responses (Li et al. 2003).
Membrane protein
The M protein is present in ample amounts inside virion particles and gives the viral envelope its proper shape (Ziebuhr 2005).
Envelope protein
The Envelope protein is the smallest of the main structural proteins (Brian and Baric 2005). It also plays a crucial role in pathogenesis, assembly, and viral egress (Fischer et al. 1998).
Nucleocapsid protein
The N protein is also multifunctional; among other roles, it forms a complex with the viral genome and enhances assembly through its interaction with the M protein (Zúñiga et al. 2007; Frieman and Baric 2008).
Life cycle and process of SARS-CoV-2
The life cycle of the novel coronavirus begins with the arrival of the virion at the cell being invaded. Cell entry is facilitated by the glycoprotein spikes in the SARS-CoV-2 structure, which bind to host cell receptors in a process called host cell recognition. The spikes also provide the ability to withstand the new host cell environment and to escape the human immune system (Kirchdoerfer et al. 2018; Perlman and Netland 2009). Cellular proteases such as the cathepsins, the human airway trypsin-like protease (HAT), and TMPRSS2 facilitate the entry of these virions by cleaving the spike proteins and inducing further conformational changes (Glowacka et al. 2011; Bertram et al. 2011) (Fig. 5).
SARS-CoV-2 requires ACE2 as a critical receptor for cell entry, whereas MERS-CoV requires dipeptidyl peptidase 4 (DPP4) (Wang et al. 2013; Raj et al. 2013). After entry into the cell, the RNA is uncoated, and two replicase polyproteins are obtained by translation of the replicase gene on the RNA strand. The individual replicase enzymes are then obtained by further processing with viral proteinases. These proteins produce full-length negative-sense RNA, which serves as a template for shorter mRNAs (Song et al. 2019). The genomic RNA and viral proteins are then assembled into virions in the Golgi bodies and endoplasmic reticulum, and vesicles transport these particles, which are released outside the cell (Shereen et al. 2020).
Genetic aspects of COVID-19
COVID-19 possesses a positive-sense ssRNA genome linked with a nucleoprotein inside a capsid consisting of matrix protein. Its genome (26.4-31.7 kb) is the largest among the RNA viruses known to date (Mousavizadeh and Ghasemi 2020). Previous studies propose that ARDS is likely to develop in all such infections, including MERS-CoV, SARS-CoV, and SARS-CoV-2 (Ding et al. 2003). In numerous patients, genes such as TNF, ACE2, IL-10, and VEGF are considered to be involved in the progression of ARDS (Meyer and Christie 2013). Although SARS-CoV-2 belongs to the SARS family, it differs slightly from SARS-CoV in its Envelope (E), Membrane (M), Nucleocapsid, and Spike proteins, as given in Table 1 (Rehman et al. 2020). Genetic studies revealed that DNA polymorphisms in TMPRSS2/ACE2 correlate with genetic vulnerability to SARS-CoV-2, so interpreting these studies will be valuable for developing vaccines (Hou et al. 2020). Interactions between genes and environmental factors such as smoking and other lifestyle activities are also considered to contribute to the high susceptibility to COVID-19.

Fig. 4 Structure of human SARS-CoV-2 virus
In COVID-19 infection, SARS-CoV is suspected of playing a significant role in genetic predisposition because it shares about 80 percent genetic identity with SARS-CoV-2 (Darbeheshti and Rezaei 2020).
Methodology
The genes discussed in this study were identified from the past 25 years of literature in Web of Science, PubMed, and several other databases. All literature was selected based on title and abstract, and two independent authors reviewed the published papers according to their content. The articles were separated into three groups: the first covered the SARS-CoV-2 context, the second covered the individual genes, and the third covered genetic polymorphisms, addressing gene expression and its association with the disease. The process of collecting the relevant articles for this review is shown in Fig. 2. The selected genes carry mutations in both intronic and exonic regions, and their expression has been identified in various locations. The following is a list of the selected genes associated with SARS-CoV-2.
Major genes associated with COVID-19
Various SNPs and genes are associated with COVID-19. In this review, we emphasize the current scenario, recent advancements, and enduring challenges regarding the susceptibility of four significant gene groups [ACE2, the interleukins (IL-2, 7, 10), TNF, and VEGF] associated with COVID-19, which are progressively involved in the development of ARDS, SARS, and other respiratory problems according to their function (Table 2).
Interleukin (IL-2, 7, 10) gene

IL-2 encodes a secreted cytokine produced by activated CD4+ and CD8+ T lymphocytes that is essential for T and B lymphocyte proliferation (www.ncbi.nlm.nih.gov). IL-2 is a central mediator of growth and development for B cells, T cells, and cytotoxic cells, including natural killer and lymphokine-activated killer cells (Kasprzak and Olejniczak 2008). SARS-CoV-2 infection can be a potent inducer of proinflammatory cytokines and chemokines, including IL-7. Circulating cytokines and chemokines have been found to be associated with COVID-19 disease severity (www.ncbi.nlm.nih.gov). IL-7 plays a central role in the homeostasis of the immune system and helps increase healthspan by modulating the immune system (Nguyen et al. 2017). IL-10 encodes a cytokine produced primarily by monocytes and, to a lesser extent, by lymphocytes, and it has pleiotropic effects in immunoregulation and inflammation (www.ncbi.nlm.nih.gov). IL-10 is one of the key anti-inflammatory cytokines; it plays a crucial role as a negative regulator of immune responses to microbial antigens. Immune cells produce IL-10 in response to proinflammatory signals, and it also functions to restrain excessive inflammation during infection (Iyer and Cheng xxxx). Cytokine storm syndrome has been examined in severely ill COVID-19 patients, and the levels of interleukins (IL-2, IL-7, IL-10) were reported to be high in critically ill patients (Zwirner and Domaica 2010). By supporting the proliferation of T, B, and NK cells, the IL-2 cytokine helps prevent autoimmune diseases (rheumatoid arthritis (RA), type 1 diabetes, multiple sclerosis) and maintains cell tolerance without initiating an autoimmune response, as shown in Fig. 6. In COVID-19 patients, levels of the cytokine IL-2 or its receptor (IL-2R) are elevated and increase the condition's severity (Costela-Ruiz et al. 2020).
IL-7 plays a significant role in lymphocyte (WBC) differentiation and activates T cells, which negatively regulate transforming growth factor-beta (TGF-β) in COVID-19 patients, as shown in Fig. 7. TGF-β signaling in turn elevates IL-7, which directly increases severity (Costela-Ruiz et al. 2020).
In COVID-19 patients, resistance to the virus is eliminated by inhibition of IL-10, which also blocks IL-10 signaling, as shown in Fig. 8. Elderly patients were found to be severely affected in COVID-19 by hyperinflammation, which causes a reduction in T cell receptors (Costela-Ruiz et al. 2020).
ACE2 (angiotensin-converting enzyme-2) gene
The protein encoded by this gene belongs to the angiotensin-converting enzyme family of dipeptidyl carboxypeptidases. It has considerable similarity to human angiotensin-converting enzyme 1 and serves as the receptor for the spike protein of SARS-CoV and SARS-CoV-2, the causative agent of COVID-19. The secreted protein catalyzes the cleavage of angiotensin I into angiotensin 1-9 and converts angiotensin II into the vasodilator angiotensin 1-7. ACE2 is known to be expressed in various organs of the human body (www.ncbi.nlm.nih.gov). It has been reported that ACE2 is the fundamental host cell receptor of 2019-nCoV and plays a vital role in the entry of the virus into the cell to cause the eventual infection. Single-cell transcriptomes from public data and data generated in-house were used to identify and confirm the composition and proportion of ACE2-expressing cells in the oral cavity. The results demonstrated that ACE2 is expressed on the mucosa of the oral cavity. ACE2 is the essential receptor for SARS-CoV-2 in vivo; infection and the S proteins of the SARS-CoV-2 virus diminish ACE2 expression.
TNF (tumor necrosis factor) gene
This gene is a member of the TNF ligand superfamily and encodes a proinflammatory cytokine whose secretion is mainly associated with macrophages; its locus is on human chromosome 6p21.3 (El-Tahan et al. 2016). TNF family members bind related TNF receptors and produce similar pleiotropic effects; many pathological processes are correlated with this type of gene, including cell proliferation, cell death, immune regulation, and inflammatory responses (Boraska et al. 2010). TNF-α levels were found to be higher in aged or older patients than in others, together with exhaustion of T cell counts, demonstrating that TNF-α is a kind of negative regulator of T cell proliferation (Diao et al. 2019a). Plasma levels of TNF are higher in SARS-CoV-2-infected patients, and their concentrations track the severity of infection, with higher concentrations seen in ICU patients than in non-ICU patients (Diao et al. 2019a). Macrophages, the primary synthesizers of TNF, are present to a greater extent in infected individuals; thus, elevated levels of the proinflammatory cytokine TNF are observed, and in some patients this develops into a cytokine storm (Soy et al. 2020). Accordingly, TNF-blocking agents such as etanercept and adalimumab are effectively employed to treat inflammatory diseases, and these treatments have now been suggested as urgently needed options for COVID-19 patients (Channappanavar et al. 2016).
VEGF (vascular endothelial growth factor) gene
These genes are distinguished by eight conserved cysteines and share homodimeric structures and functions. They are proteins with vascular permeability activity and are further subdivided into VEGF-A, B, C, D, E, PlGF, and Trimeresurus flavoviridis svVEGF (Shibuya 2011). VEGF plays a prime role in the growth, development, and maintenance of a healthy circulatory system, thereby ensuring normal angiogenesis (Ruggiero et al. 2011). VEGFs bind VEGFR and play a leading role in endothelial cell activation. Alveolar immune regulation is maintained by the integrity of the endothelial barrier in lung tissue, which is crucial in COVID-affected patients. In SARS-CoV-2-affected individuals, serum levels of VEGF are found to be elevated; however, there is not much difference in VEGF levels between ICU and non-ICU patients. VEGF is mainly associated with ALI and ARDS and is believed to be a prime factor in their cause; since levels of this gene are elevated in COVID-19-infected persons, it may lead to acute lung and respiratory syndromes in affected individuals (Turkia). VEGF also plays a decisive part in brain inflammation (which results in neurological defects) and has been identified as a promising therapeutic target for suppressing the inflammation caused by COVID infection (Yin et al. 2020; Rodríguez-Puertas 2020). Figure 9 demonstrates that in the SARS-CoV-2-infected brain (age > 30), depletion of ACE2 decreases angiotensin 1-9/1-7 levels in cerebrovascular endothelial cells and enhances angiotensin II type 1 receptor signaling. This also enhances the synthesis of VEGF, which pushes inflammation in the central nervous system (CNS). Meanwhile, inflammatory cell recruitment is worsened by VEGF synthesis, pathological angiogenesis reinforces the proinflammatory state, and angiotensin II levels rise correspondingly. The side effects of this inflammation are nausea, anosmia, vomiting, headache, hemorrhagic stroke, disturbances of consciousness, altered mental state, acute necrotizing encephalopathy, seizures, etc. (Yin et al. 2020; Rodríguez-Puertas 2020) (Fig. 10).
Impact of gene expression
Each of the selected genes is differentially expressed in SARS-CoV-2 conditions and increases disease severity. Expression of the ACE2 gene affects the function of brain areas; IL-2, IL-7, and IL-10 gene expression negatively regulates transforming growth factor-β; alteration of the TNF gene depletes T lymphocytes; and VEGF gene expression causes edema through the extravasation of immune cells (Table 3).
SARS-CoV-2, which falls under the family Coronaviridae, is highly contagious and has a high rate of human-to-human transmission; the main routes of transmission are inhalation of droplets and direct contact with contaminated surfaces. COVID-19 poses a high risk to the population and to healthcare workers worldwide. Many therapeutic options and vaccine development efforts have been initiated, and a few countries have started testing vaccines against SARS-CoV-2 infection. The four genes mentioned above contribute to COVID-19 susceptibility; although other factors also have influence, genetic factors play a crucial role in disease severity. It is essential to emphasize the current issue with COVID-19, in which various genes show a positive correlation. A genome-wide association study of 1,980 persons infected with COVID-19 and other respiratory conditions analyzed 8,582,968 SNPs and concluded that the 3p21.31 gene cluster confers genetic susceptibility in patients affected by the virus (Zhu et al. 2020). Hence, this assessment highlights the significant genes in terms of genetic changes and expression levels; ACE2 is the entry receptor for the coronavirus, and IL-2, 7, 10, TNF, and VEGF are involved in the cytokine storm inflammatory response. These four genes are under study in coronavirus biology, so they may be used as potential biomarkers for early diagnosis and for targeted drug delivery in COVID-19.
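For scale, testing 8,582,968 SNPs, as in the genome-wide association study cited above, implies a very stringent per-SNP significance threshold. A minimal sketch of the Bonferroni correction (the α = 0.05 family-wise error rate is the conventional choice, not a value stated in the source):

```python
n_snps = 8_582_968   # SNPs analyzed in the cited GWAS
alpha = 0.05         # conventional family-wise error rate (assumption)

# Bonferroni-corrected per-SNP p-value threshold
threshold = alpha / n_snps
print(f"Per-SNP significance threshold: {threshold:.2e}")  # ~5.83e-09
```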
Conclusion
Fig. 9 The severity of SARS-CoV-2 due to the interleukin-10 cytokine

This study encompassed the abnormal changes in genes caused by SARS-CoV-2 and pointed out the significant genes affected by this novel virus, with their complete metabolic pathways. Cytokines such as IL-2, IL-7, and IL-10 were found to play an essential role in maximizing the seriousness of SARS-CoV-2. The ACE2 enzyme and VEGF affect the brain and cause inflammation in the central nervous system. The purpose of this review is to help bring down the seriousness of SARS-CoV-2. Targeting the cytokines is a promising route to possible results, and for treating the neurological symptoms of SARS-CoV-2, VEGF is a probable therapeutic target for reducing inflammation in the brain. Therefore, this review could contribute not only to the therapeutic basis for this virus but also to the clinical monitoring, identification, and management of SARS-CoV-2 infection in the future. | 2023-02-24T15:14:17.284Z | 2021-04-23T00:00:00.000 | {
"year": 2021,
"sha1": "61cd36db00957c83d29d8e94a0d1f4e3ade75280",
"oa_license": "CCBY",
"oa_url": "https://bnrc.springeropen.com/counter/pdf/10.1186/s42269-021-00540-y",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "61cd36db00957c83d29d8e94a0d1f4e3ade75280",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
247061475 | pes2o/s2orc | v3-fos-license | Touch and Go: Membrane Contact Sites Between Lipid Droplets and Other Organelles
Lipid droplets (LDs) have emerged not just as storage sites for lipids but as central regulators of metabolism and organelle quality control. These critical functions are achieved, in part, at membrane contact sites (MCS) between LDs and other organelles. MCS are sites of transfer of cellular constituents to or from LDs for energy mobilization in response to nutrient limitations, as well as LD biogenesis, expansion and autophagy. Here, we describe recent findings on the mechanisms underlying the formation and function of MCS between LDs and mitochondria, ER and lysosomes/vacuoles and the role of the cytoskeleton in promoting LD MCS through its function in LD movement and distribution in response to environmental cues.
INTRODUCTION
Lipid droplets (LDs) have an established function in storing lipids, which are used for energy production, membrane biogenesis and synthesis of signaling molecules. LDs also function in storage of signaling proteins, their precursors and hydrophobic vitamins, and for sequestering toxic lipids, which is critical to reduce lipotoxicity and oxidative stress (Welte and Gould, 2017;Jarc and Petan, 2019;Geltinger et al., 2020;Roberts and Olzmann, 2020;Renne and Hariri, 2021). Finally, recent studies support a role for LDs in ER protein quality control (Garcia et al., 2018;Roberts and Olzmann, 2020).
The physical properties of LDs are distinct from those of other organelles. They consist of neutral lipids, primarily triacylglycerol (TAG) and sterol esters (SE), surrounded by a phospholipid monolayer. Although proteins are associated with LDs, conventional transport proteins that are integrated into lipid bilayers do not take part in transfer of lipids and other constituents from LDs to other organelles. Instead, specialized proteins, such as lipases that associate with the LD boundary membrane, release lipids and vitamin A from LDs (Schreiber et al., 2012;O'Byrne and Blaner, 2013;Grumet et al., 2016;Olzmann and Carvalho, 2019). Moreover, transfer of LD components to other organelles as well as communication between LDs and other subcellular compartments occurs at membrane contact sites (MCS) between LDs and other organelles.
MCS are sites of close apposition between two organelles. While these contacts may be homotypic (between identical organelles) or heterotypic (between different organelles), the focal point for this review article is heterotypic interactions between LDs and mitochondria, ER, lysosomes (the vacuole in yeast) and the role of the cytoskeleton in promoting contact site formation at LDs. LD MCS are not as well understood as other MCS. Nonetheless, LD MCS are enriched in proteins that mediate specific functions at those sites and are produced and stabilized by tethering proteins. Moreover, in yeast the distance between LDs and other organelles at MCS has been determined by electron microscopy to be <30 nm (Perktold et al., 2007;Binns et al., 2006), which is in the range of that observed in other MCS, typically 10-80 nm (Scorrano et al., 2019;Vance, 2020).
Although the structural components of many LD MCS have not been identified, the function of many LD MCS is well established. The endoplasmic reticulum (ER) constitutes the major site for the biogenesis of LDs and lipids that are incorporated into nascent LDs. Therefore, LD-ER contact sites are essential for LD formation, growth and budding from the ER (Olzmann and Carvalho, 2019;Choudhary and Schneiter, 2021). Recent studies revealed that LDs mediate removal of unfolded or damaged proteins from the ER, and that this occurs at LD-ER contact sites (Vevea et al., 2015;. At mitochondria, LDs deliver fatty acids, which are produced from neutral lipids that are stored in LDs and oxidized for energy production (Finn and Dice, 2006;Rambold et al., 2015;Wang et al., 2021). Toxic lipids or proteins that are sequestered in LDs can be delivered to lysosomes (the vacuole in yeast) by multiple pathways, including transfer events at LD-lysosome contact sites and piecemeal or wholesale uptake of LDs into the lysosome/vacuolar compartment (Tsuji et al., 2017;Schulze et al., 2020;Liao et al., 2021). Finally, contacts between LDs and the cytoskeleton contribute to LD MCS formation through effects on LD movement and positional control (Pfisterer et al., 2017;Valm et al., 2017;Kilwein and Welte, 2019). Here, we review recent findings on the structure and function of LD MCS in yeast and mammalian cells, and how these membrane contacts respond to cellular or environmental cues.
LD INTERACTIONS WITH MITOCHONDRIA
Mitochondria are the metabolic centers of the cell. Fatty acids (FAs) that are stored as TAG and other lipids in LDs are used for energy production by β-oxidation in mitochondria. Conversely, mitochondria are the source of ATP and other components that contribute to growth or expansion of LDs. Close contacts between LDs and mitochondria were described in 1959 (Palade, 1959) and have been detected in many cell types (Novikoff et al., 1980;Stemberger et al., 1984). They are the sites for transfer of constituents between mitochondria and LD for LD consumption and expansion and are prominent in tissues with high energy demands such as heart (Kuramoto et al., 2012), skeletal muscle (Shaw et al., 2008), brown adipose tissue (Yu et al., 2015) and liver (Shiozaki et al., 2011;Ma et al., 2021). Although these contact sites have been evident for decades, recent studies have revealed important details of their function and structure.
LD-Mitochondria MCS Function in Transfer of Fatty Acids From LDs to Mitochondria
During periods of nutrient deprivation, cells reprogram their metabolism from glycolysis to oxidation of FAs for ATP production. During this process, FAs that are stored in TAG in LDs are transferred from LDs to mitochondria (Finn and Dice, 2006). Emerging evidence supports a role for LD-mitochondria MCS in this FA transfer event. First, starvation of cultured mammalian cells results in an increase in contact site formation between LDs and mitochondria (Herms et al., 2015;Rambold et al., 2015;Nguyen et al., 2017;Valm et al., 2017). Live-cell imaging of fluorescent FAs revealed that FAs move from LDs into mitochondria when nutrients are limiting. This process requires close association of mitochondria with LDs. It is also dependent on release of FA from TAG stored in LDs: depletion of an LD-associated neutral lipase, adipose triglyceride lipase (ATGL), or drug-induced inhibition of lipase activity reduces the mitochondrial accumulation of fluorescent FAs (Herms et al., 2015;Rambold et al., 2015;Valm et al., 2017).
Several proteins have been implicated in formation of these LD-mitochondria MCS (Figure 1). The SNARE proteins SNAP23 and VAMP4 localize to LDs in mouse fibroblasts (Boström et al., 2005), and SNAP23 has been detected on LDs and mitochondria in skeletal muscle (Strauss et al., 2016). More importantly, deletion of SNAP23 produces a decrease in both LD-mitochondria MCS and β-oxidation of radiolabeled FAs in mouse fibroblasts (Jägerström et al., 2009). A proximity labeling study revealed that ACSL1, a long-chain acyl-CoA synthetase that directs FAs to mitochondria for β-oxidation, interacts with SNAP23 and VAMP4 in hepatocytes (Young et al., 2018). In addition, glucose deprivation, a condition that stimulates FA oxidation, promotes co-immunoprecipitation of SNAP23, VAMP4 and ACSL1 in hepatocytes (Young et al., 2018). These findings support the notion that increased association of LD and mitochondria contributes to elevated FA oxidation and indicate a role for SNAP23, VAMP4 and ACSL1 in establishing physical and functional interactions between LDs and mitochondria during this process.
Other studies support a role for the vacuolar protein sorting 13D (VPS13D) protein in FA transfer from LD to mitochondria at MCS between these organelles (Wang et al., 2021). VPS13D is a VPS13 family protein (Velayos-Baeza et al., 2004;Wang et al., 2021) that localizes to LD-mitochondria contact sites in response to oleic acid stimulation and starvation in cultured cells (Wang et al., 2021). Structure-function analysis revealed that the N-terminal region of VPS13D is responsible for mitochondrial targeting and that two amphipathic helices in the C-terminal region of the protein target VPS13D to the LDs. Moreover, VPS13D has a putative lipid transfer domain (LTD) at its N terminus that binds to FAs and is required for VPS13D function in FA transfer from LD to mitochondria. Finally, VPS13D recruits a subunit of the ESCRT (the endosome sorting complex required for transport), a complex that produces changes in membrane curvature (Vietri et al., 2020), to LD-mitochondria MCS. Specifically, the VAB (VPS13 adaptor binding) domain of VPS13D interacts with the ESCRT protein TSG101 and is required for recruitment of TSG101 to LD-mitochondria MCS. Moreover, localization of the VAB domain and TSG101 to this MCS results in the formation of a constricted or tubular structure at the surface of LDs (Wang et al., 2021). Finally, pulse-chase assays of FA transfer from LD to mitochondria revealed that the deletion of VPS13D or TSG101 results in a significant reduction of FA transfer (Wang et al., 2021). Collectively, these findings support a model for VPS13D in energy mobilization by FA oxidation in cells exposed to nutrient limitation. According to this model, VPS13D is recruited to LD-mitochondria junctions in response to starvation, where it contributes to FA transfer from LDs to mitochondria 1) as a lipid transfer protein and 2) by recruiting ESCRT components to LD-mitochondria MCS and facilitating ESCRT-dependent membrane remodeling at those sites.
Finally, the perilipin family protein perilipin 1 (PLIN1) has been implicated in LD-mitochondria contact site formation in brown adipose tissue through interactions with the mitochondrial outer membrane fusion GTPase, mitofusin 2, MFN2 (Boutant et al., 2017). MFN2 and its homolog MFN1 mediate the fusion of mitochondrial outer membranes. In addition, MFN2 is involved in mitochondria-ER contact sites (Giacomello et al., 2020). Nonetheless, depletion or knockout of MFN2 in brown adipose tissue results in fewer LD-mitochondria MCS, altered lipid metabolism and reduced FA oxidation by mitochondria (Boutant et al., 2017). In addition, co-immunoprecipitation studies show that MFN2 directly interacts with PLIN1, and this interaction is enhanced by a treatment with an adrenergic agonist. Finally, PLIN1 expression increases in mice subjected to cold treatment (Yu et al., 2015). These observations suggest that increased mitochondria-LD contacts mediated by MFN2-PLIN1 facilitate the coupling of TAG hydrolysis with FA oxidation upon exposure of brown adipose tissue to cold (Boutant et al., 2017).
LD-Mitochondria Contact Site Function in LD Expansion
Contact sites between LD and mitochondria can also function in expansion of LD under conditions that promote lipid storage. In brown adipose tissue, a subpopulation of mitochondria is closely associated with large LDs. Benador et al. (2018) developed a method to separate LD-associated mitochondria from LD-free mitochondria and found that these two populations of mitochondria are physically and functionally distinct. LD-associated mitochondria exhibit 1) elevated TCA cycle, ATP synthetic and pyruvate oxidation activities, 2) reduced β-oxidation activity, and 3) increased incorporation of free FAs into TAG in ATP synthase-dependent processes. Thus, contact site formation between LD and mitochondria is associated with lipid storage and generation of energy for this process by oxidation of glucose, not FAs. In contrast, LD-free mitochondria display higher FA oxidation. These observations support the idea that LD-associated mitochondria promote LD expansion and lipid storage by providing ATP for acyl-CoA synthesis during TAG production (Benador et al., 2018).
The LD protein perilipin 5 (PLIN5) has been implicated in LD-mitochondria interactions during LD expansion. PLIN5 is highly expressed in oxidative tissues, such as skeletal and cardiac muscle, brown adipose tissue and liver (Wolins et al., 2006), and is upregulated in response to exercise in muscle tissue (Tarnopolsky et al., 2007). Moreover, PLIN5 overexpression increases the number of LDs and the incorporation of radiolabeled lipids into TAG in brown adipose tissue and in cultured liver cells (Wang et al., 2011b;Benador et al., 2018). On the other hand, deletion of PLIN5 in mice results in a loss of LDs, and cultured cardiomyocytes from Plin5-null mice exhibit more FA oxidation activity compared to cardiomyocytes from wildtype mice (Kuramoto et al., 2012). Other studies indicate that PLIN5 function in LD expansion may be due to its function in LD-mitochondria MCS. PLIN5 can localize to the mitochondrial surface independent of LD-mitochondria MCS, and localizes to LD-mitochondria interfaces by super-resolution imaging (Gemmink et al., 2018). Moreover, overexpression of PLIN5 in CHO cells induces the recruitment of mitochondria to LD, and this recruitment depends on the presence of 20 amino acids at the C-terminal of the protein (Wang et al., 2011b). This observation supports the notion that PLIN5 is part of a tethering complex that promotes LD expansion at LD-mitochondria MCS.
Interestingly, in hepatocyte-specific Plin5 null mice, the decreased LD-mitochondria interactions resulted in reduced fatty acid oxidation and reduced fatty acid storage into TAGs (Keenan et al., 2019). Therefore, it is possible that even in tissues where PLIN5 is highly expressed, it can promote different aspects of LD-mitochondria interactions. Moreover, PLIN5 has been detected at mitochondria and in the cytoplasm independently of LD (Bosma et al., 2012;Gemmink et al., 2018), suggesting that it may also regulate lipid metabolism. Indeed PLIN5 also regulates the lipolytic activity of ATGL (Granneman et al., 2011;Wang et al., 2011a). These findings raise the possibility that PLIN5 affects TAG production via its regulatory activities on lipolysis independently from its mitochondrial tethering activity.
Mitoguardin 2 (MIGA2) is a mitochondrial outer membrane protein that promotes mitochondrial fusion and modulates body fat in mice by regulating mitochondrial phospholipid metabolism (Zhang et al., 2016). MIGA2 has also been implicated in LD-mitochondria MCS formation in differentiating white adipocytes (Freyre et al., 2019). Overexpression of MIGA2 in adipocytes leads to increased LD-mitochondria MCS formation (Freyre et al., 2019). Structure-function analysis of MIGA2 revealed a direct role for the protein in these MCS: its N-terminal transmembrane domains bind to mitochondria and its C-terminal amphipathic region is exposed to the cytosol and binds directly to LDs (Freyre et al., 2019). Finally, pre-adipocytes lacking MIGA2 exhibit reduced adipocyte differentiation, decreased LD abundance, and diminished TAG synthesis. Consistent with this, radiolabeled glucose is not converted into TAGs in MIGA2-knockout preadipocytes (Freyre et al., 2019). Collectively, these data suggest that MIGA2 is a tether that links LDs to mitochondria and raise the possibility that MIGA2 affects LD expansion through effects on de novo lipogenesis at MCS in adipocytes.
LD-ER CONTACT SITES
LDs form at and bud from the ER in all eukaryotes. LD biogenesis sites are the most complex and best characterized LD MCS. These MCS develop at specialized domains within the ER membrane, are enriched in specific lipids and proteins, and have a well-defined function in LD formation, directional growth and budding. These LD-ER MCS have activities found in other MCS including transfer of lipids and proteins between organelles. However, unlike other MCS in which a pre-existing organelle makes contacts with and is tethered to another organelle, LD-ER MCS develop within the ER membrane during LD biogenesis. While other MCS involve transitory interactions between two physically separate structures, the ER-LD MCS is not so simple. LDs and ER have different membrane and protein composition and different functional characteristics, but the distinction between these two compartments is less stark than, for example, that between ER and mitochondria. There is evidence from electron microscopy (Kassan et al., 2013) and fluorescence imaging (Jacquier et al., 2011;Valm et al., 2017) that in yeast, LDs and ER maintain long-term continuity. Fluorescence and biochemical studies in fly (Wilfling et al., 2013) and mammalian (Zehmer et al., 2009) cells have supported this model, although there are differences among cell types (Hugenroth and Bohnert, 2020).
Here, we describe formation of LD-ER contact sites, their function in LD biogenesis and the environmental cues that modulate these processes.
Formation of LD-ER MCS at Sites of LD Biogenesis
In light of the critical function of LDs in lipid storage and homeostasis, it is not surprising that LD biogenesis is regulated in response to changes in nutrient availability. Indeed, LD biogenesis is induced by nutrient limitations including the transition from mid-log to stationary phase in yeast, or nitrogen starvation (Jacob, 1987;Kurat et al., 2006;Li et al., 2015). It is also induced by supplementation with oleic acid (Callies et al., 1993;Fujimoto et al., 2006). Moreover, LD biogenesis is required for the survival of nutrient-limited cells (Sandager et al., 2002;Garbarino et al., 2009). One critical step in LD-ER contact site formation during LD synthesis is coalescence of neutral lipids (NL) to form a lens-shaped structure between the leaflets of the ER lipid bilayer. When the NL TAG reaches a threshold concentration (3-5 mol%), it undergoes a phase separation within the ER membrane leading to formation of the TAG lens (Khandelia et al., 2010;Duelund et al., 2013). In yeast, where these structures were first identified, NL lenses are ca. 50 nm in diameter (Choudhary et al., 2015).
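The ~50 nm lens diameter quoted above permits a rough estimate of how many TAG molecules a nascent lens holds. A back-of-envelope sketch, treating the lens as a sphere and assuming a TAG molecular volume of ~1.6 nm³ (both the spherical geometry and the molecular volume are illustrative assumptions, not values from the source):

```python
import math

lens_diameter_nm = 50.0   # from the text (Choudhary et al., 2015)
tag_volume_nm3 = 1.6      # assumption: approximate volume of one TAG molecule

radius_nm = lens_diameter_nm / 2
lens_volume_nm3 = (4 / 3) * math.pi * radius_nm**3   # ~6.5e4 nm^3 for a sphere

n_tag = lens_volume_nm3 / tag_volume_nm3
print(f"~{n_tag:,.0f} TAG molecules per nascent lens")  # on the order of 4 x 10^4
```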
The major molecular components and processes in LD-ER biogenesis are illustrated in Figure 2. Lens formation is induced by and requires synthesis of TAG and sterol esters (SE). In yeast, TAG is generated by acylation of the precursor diacylglycerol (DAG) by the diacylglycerol acyltransferases Dga1 and Lro1 (Lecithin cholesterol acyl transferase Related Open reading frame 1). SE are generated from sterols by the acyl-CoA:sterol acyltransferases Are1 and Are2. Indeed, inhibition of NL synthesis by deletion of all SE and TAG biosynthetic enzymes (DGA1, LRO1, ARE1 and ARE2) blocks LD biogenesis (Sandager et al., 2002). Similarly, inhibition of DAG synthesis from phosphatidic acid by deletion of lipin (Pah1, phosphatidic acid phosphohydrolase 1, in yeast) results in reduced LD abundance (Adeyo et al., 2011).
The seipin protein complex determines the site of lens formation, mediates MCS formation between LDs and ER at those sites, and promotes TAG incorporation into lenses and nascent LDs. Seipin is encoded by the BSCL2 (Berardinelli-Seip Congenital Lipodystrophy 2) gene in humans and SEI1/FLD1 (Seipin/Few LDs) gene in yeast. It is an integral ER membrane protein that localizes to LD-ER contact sites (Szymanski et al., 2007;Fei et al., 2008;Salo et al., 2016;Wang et al., 2016). Seipin contains a highly conserved ER lumen domain, short N- and C-terminal cytosolic domains and two transmembrane domains (Lundin et al., 2006). The luminal domain contains a hydrophobic helix (HH) near the ER bilayer and a β-sandwich fold (Sui et al., 2018;Yan et al., 2018). The β-sandwich fold binds anionic phospholipids such as phosphatidic acid (Yan et al., 2018) and is similar in structure to β-sandwich domains in the sterol-binding Niemann-Pick C2 (NPC2) proteins. Recent cryo-EM studies revealed that seipin oligomerizes to form a ring-like structure containing 10-12 subunits and that luminal HHs in that ring-like structure bind to TAG, which promotes TAG cluster formation at low concentrations (Prasanna et al., 2021;Zoni et al., 2021). Interestingly, yeast seipin lacks the HH domain found in human or Drosophila seipins. However, yeast seipin binds to Ldb16 (low dye binding 16), which contains HH-like regions and supports HH function in the yeast seipin complex (Klug et al., 2021).
Seipin functions in LD-ER MCS and LD formation through its interactions not just with lipids but with proteins including Nem1 (nuclear envelope morphology 1) and LDAF1 (LD activator factor 1), also known as Tmem159 and promethin in mammals, and Ldo45 (LD organization 45 kD protein) in yeast. Seipin-Nem1 interactions promote NL biosynthesis at sites of lens formation. Both proteins localize to and co-localize at punctate structures at sites of lens formation and do so independent of NL biosynthesis or the presence of LDs (Choudhary et al., 2020). Nem1 activates DAG production, and functions with seipin to recruit TAG biosynthetic enzymes (Dga1 and Lro1) at LD-ER MCS during lens initiation and growth (Choudhary et al., 2020).
Interaction of seipin with LDAF1 is also critical for the TAG phase transition during initiation of lens formation. Although small lens-like structures can form in the ER membrane in the absence of seipin (Salo et al., 2016;Wang et al., 2016), recent studies support the model that seipin and LDAF1 stimulate lens formation by lowering the critical concentration of TAG for phase conversion within membranes. Specifically, deletion of LDAF1 inhibits LD formation during early stages of that process at all TAG concentrations tested, indicating that LDAF1 is required for initiation of LD biogenesis. Notably, LDAF1 is released from seipin and recruited to the surface of nascent LDs as they mature (Chung et al., 2019). Consistent with this, molecular simulation studies revealed that binding of seipin to TAG promotes its association with LDAF1, which stabilizes nascent lens structures (Prasanna et al., 2021;Zoni et al., 2021). Finally, targeting of LDAF1 to the plasma membrane (PM) results in formation of PM-ER MCS, as well as recruitment of seipin and LD biogenesis at that site. Thus, seipin and LDAF1 can drive lens formation and LD biogenesis in vivo (Chung et al., 2019).
Generation of Lipid and Protein Asymmetry at LD-ER MCS During LD Growth and Budding
LD-ER interactions at sites of LD biogenesis are disrupted when nascent LDs bud from the ER into the cytosol. Budding of LDs from the ER and the size of LDs that are released from ER are influenced by membrane curvature and surface tension at the LD-ER MCS. Phospholipids that promote negative membrane curvature, such as DAG or phosphatidylethanolamine (PE), stabilize the LD-ER contact site and favor retention of LDs in the ER. In contrast, lysolipids, which promote positive membrane curvature, destabilize LD-ER MCS and favor LD budding and generation of small LDs (Ben M'barek et al., 2017).
Fat storage-inducing transmembrane protein 2 (FITM2) is an evolutionarily conserved ER-localized transmembrane protein that is required for budding of LDs from ER membranes (Choudhary et al., 2015). Studies in yeast indicate that FITM2 proteins promote this process by regulating the levels of DAG. Although DAG is a precursor for TAG and therefore required for LD biogenesis, DAG can inhibit budding of nascent LDs from LD-ER MCS by promoting negative membrane curvature at those contact sites. Therefore, its levels must be tightly regulated during LD biogenesis. Indeed, lysolipids promote positive membrane curvature and budding of LDs from ER in the absence of FITM2 in yeast. This suggests that the increase in membrane curvature produced by lysolipids reduces the defects in LD biogenesis caused by high DAG levels in the absence of FITM2. The FITM2 proteins of yeast (Yft2 and Scs3) are recruited to sites of LD biogenesis by binding to seipin and Nem1. Moreover, deletion of both FITM2 proteins in yeast results in increased DAG, and this defect is rescued by deletion of NEM1. Since Nem1 promotes DAG production, FITM2 proteins may modulate DAG levels through effects on Nem1.

FIGURE 2 | LD biogenesis at LD-ER MCS. TAG accumulates between leaflets of the ER bilayer during lens formation. Seipins, Nem1, and LDAF1 localize to and are required for LD-ER MCS formation at sites of LD biogenesis. Other LD biogenesis proteins including FITM2 and Pex30 are recruited to LD-ER MCS, and LDAF1 is later transferred from LD-ER MCS to the surface of LDs during LD budding from the ER membrane. Finally, LDs are separated from the ER and released to the cytosol during LD scission.
Interactions between seipin and Pex30 (Peroxisome-related 30) have been implicated in modulation of the phospholipid composition at LD-ER MCS during lens formation. This process is downstream of the recruitment of FITM2 proteins to seipin-Nem1 sites (Choudhary et al., 2020). Pex30 is an ER membrane protein with established functions in control of peroxisome size, shape and formation (Vizeacoumar et al., 2003;Joshi et al., 2016). Interestingly, Pex30 is associated with seipin complexes at LD-ER contact sites during LD formation. Moreover, deletion of Pex30 results in abnormal LD morphology, and deletion of seipin and Pex30 results in inhibition of LD biogenesis, abnormal ER morphology, and growth defects (Wang et al., 2018). Notably, the defect in LD biogenesis in sei1Δ pex30Δ double mutants is rescued by deletion of Pct1 (phosphocholine cytidylyltransferase 1), the rate-limiting enzyme in the phosphatidylcholine (PC) biosynthesis Kennedy pathway. PC is the most abundant phospholipid in the LD membrane. Thus, Pex30 may contribute to LD biogenesis by modulating phospholipid composition in the LD-ER contact site and on the surface of the nascent LD during LD biogenesis (Wang et al., 2018). Interestingly, Pex30 contains membrane-shaping reticulon-like regions (Joshi et al., 2016) and may also contribute to deforming the membrane at LD-ER MCSs and budding of the nascent LDs from the ER membrane.
Role for ERAD in Removal of Surplus LD Proteins From the ER Membrane
The ER-associated degradation pathway (ERAD) was originally identified as a pathway for degradation of unfolded or damaged proteins in ER membranes. In ERAD, unfolded proteins are ubiquitinated, recognized and extracted by the AAA-ATPase Cdc48 in yeast (p97/VCP in mammals), and degraded by proteasomes (Christianson and Ye, 2014). Recent studies support a novel role for ERAD in degrading LD proteins within the ER membrane.
In mammals, diacylglycerol acyltransferase 2 (DGAT2), an enzyme that catalyzes the conversion of DAG to TAG, is degraded by ERAD with the aid of the ubiquitin ligases gp78 and Hrd1 (Choi et al., 2014;Luo et al., 2018). In yeast, a subset of LD proteins, Pgc1 (phosphatidyl glycerol phospholipase C), Dga1, and Yeh1 (yeast steryl ester hydrolase), are substrates for the ERAD ubiquitin ligase Doa10 and degraded by ERAD. The HH domain of Pgc1 has been implicated as a degron for ERAD: it is both necessary and sufficient for Doa10-dependent degradation (Ruggiano et al., 2016). Interestingly, degradation of Pgc1 by ERAD is accelerated in the absence of yeast FITM2 (Yap et al., 2020). Moreover, the regions for ERAD degradation and for targeting of proteins to LDs overlap (Ruggiano et al., 2016). These findings raise the possibility that proteins that are not incorporated into LDs are degraded in the ER by ERAD.
LD-ER Contact Sites and ER Proteostasis
As described above, resident LD proteins are recruited to nascent LDs at LD-ER MCS. Recent evidence indicates that unfolded ER proteins, which accumulate in ER under conditions of ER stress and compromise ER and cellular function and fitness, are removed from the ER in LDs by transport from ER to LDs at LD-ER MCS. In contrast to the ERAD system which relieves ER stress by removing individual unfolded proteins from the organelle, this LD-based ER proteostasis mechanism enables high-throughput removal of unfolded ER proteins (Figure 3) (Vevea et al., 2015).
Early studies revealed that ER proteins are recovered in isolated LDs. Although these proteins were first interpreted as contaminants in LD preparations, several lines of evidence indicate that ER proteins are recruited to LDs by ER stress. Specifically, treatment of yeast with a reducing agent, dithiothreitol, which inhibits oxidative folding in the ER, results in recruitment of 1) proteins that contain disulfide linkages and undergo oxidative folding in the ER, 2) protein disulfide isomerase (PDI) proteins, multifunctional ER redox chaperones, and 3) other ER chaperones to LDs. Similarly, treatment with tunicamycin, an agent that induces protein misfolding by inhibiting protein glycosylation in the ER, results in recruitment of proteins that are glycosylated in ER and the ER chaperones described above to LDs. Imaging studies revealed that ER proteins that are recovered with LDs also co-localize with LDs in living yeast exposed to ER stress. These imaging studies also provide documentation of 1) association of LDs with protein aggregates in the ER membrane, 2) colocalization of those protein aggregates with LDs as they bud from ER membranes and move away from the ER, and 3) localization of LDs and their associated ER protein aggregates in the vacuole (yeast lysosome). Equally important, LD function in ER protein quality control is a physiologically relevant stress response. Indeed, LD biogenesis or abundance is up-regulated in response to ER stressors in yeast (Fei et al., 2009;Vevea et al., 2015), in mammalian cells (Lee et al., 2012) and in mouse liver (Yamamoto et al., 2010;Zhang et al., 2011). Furthermore, inhibition of LD biogenesis dramatically reduces cellular growth and survival in yeast challenged by ER stressors. Overall, these studies support a model for LD function in ER protein quality control whereby unfolded proteins are transferred from ER membranes to nascent LDs at LD-ER MCS, removed from ER by LDs as they bud from the ER and degraded in response to ER stress.
LD-LYSOSOME/VACUOLE MCS
The lysosome (vacuole in yeast) plays a major role in catabolism, recycling of cellular waste, excretion of waste products and cellular signaling. Contact site formation between LDs and lysosomes/vacuoles plays direct and indirect roles in LD autophagy (lipophagy). Lipophagy, in turn, is essential for the mobilization of LD-bound lipids for energy production in response to nutrient limitations and other stressors, and for degradation of excess or toxic lipids or unfolded proteins that are stored and sequestered in LDs during ER stress. Lipophagy is also critical for delivery of sterols and other lipids in LD to the vacuolar membrane in the stationary phase in yeast (Tsuji et al., 2017;Garcia et al., 2018;Jarc and Petan, 2019).
LD-lysosome/vacuole MCS have been implicated in three forms of lipophagy. In LD macroautophagy, which is the primary form of lipophagy in mammalian systems, LDs are encapsulated within autophagosomes and delivered to the lumen of the lysosome by fusion of autophagosomes with the lysosomal membrane (Singh et al., 2009). In LD microautophagy or microlipophagy (µLP), which is predominantly understood in yeast, LDs make direct contact with the lysosome/vacuole, and partial or wholesale uptake of LDs into the lysosome/vacuole occurs at sites of invagination in the lysosome/vacuole membrane (Garcia et al., 2018;Schulze et al., 2020). Finally, in chaperone-mediated autophagy (CMA), specific LD proteins are targeted to the lysosome by chaperones and translocated across the lysosomal membrane by the lysosome-associated membrane protein type 2A (LAMP2A) (Kaushik and Cuervo, 2015). All three forms of autophagy are induced by nutrient limitation and other environmental cues. Below, we review the two forms of lipophagy that occur by direct contact between LDs and the lysosome/vacuole at MCS between the organelles: LD microlipophagy (Figures 4I-III) and CMA (Figure 4IV).
LD-Vacuole MCS During LD Microautophagy in Yeast
Microlipophagy (µLP) was first identified in yeast, and has emerged as the primary mechanism for lipophagy in yeast. µLP can be induced by stressors including nitrogen or glucose limitation, entry into stationary phase, lipid imbalance, and ER stress. Although these conditions all induce µLP, two forms of µLP occur at distinct LD-vacuole MCS and require distinct factors that modulate vacuolar membrane dynamics, invagination and scission (van Zutphen et al., 2014;Wang et al., 2014;Vevea et al., 2015;Oku et al., 2017;Seo et al., 2017;Tsuji et al., 2017;Liao et al., 2021). Below, we describe these two mechanisms of µLP at LD-vacuole MCS in yeast and the role of specific proteins and lipids in that process.
LD-Vacuole MCS at Lo Microdomains During µLP in Yeast
In µLP induced by entry into stationary phase or nitrogen starvation, LDs make contacts with the vacuole at liquid-ordered (Lo) microdomains in the vacuolar membrane (Wang et al., 2014;Tsuji et al., 2017) (Figure 4II). Lo microdomains are lipid raft-like regions that are enriched in sterols and have a distinct protein and lipid composition compared to the bulk of the vacuolar membrane, which has been referred to as a liquid-disordered (Ld) domain. Transfer of sterols from LDs to vacuoles at LD-vacuole MCS during Lo microdomain formation in stationary-phase yeast cells, together with intravacuolar transfer of sterols to Lo microdomains by Niemann-Pick proteins, mediates formation of these microdomains under multiple stress conditions (Tsuji et al., 2017;Liao et al., 2021). These microdomains form in response to various stresses including entry into stationary phase, nitrogen or glucose starvation, osmotic stress, cycloheximide (CHX)-mediated translation inhibition, weak acids, heat, and ER stress induced by lipid imbalance, DTT, or TM (Toulmay and Prinz, 2013;Liao et al., 2021). Thus, Lo microdomain formation is a general stress response (Figures 4I, II).
The mechanisms underlying LD MCS formation at Lo microdomains and the vacuolar membrane dynamics and invagination at those MCS during release of LDs into the vacuolar lumen are not well understood. However, Ivy1 can bind to Ypt7, the Rab7 GTPase of yeast, and requires Ypt7 for localization to invaginations in the vacuolar membrane in response to nutrient limitation (Lazar et al., 2002;Numrich et al., 2015). Moreover, as described below, Rab7 has been implicated in LD-lysosome MCS formation in mammalian cells (Schroeder et al., 2015). Thus, Ivy1 may contribute to µLP through effects on MCS formation between LDs and Lo microdomains on the vacuolar membrane. Interestingly, Ivy1 is also a phospholipid-binding protein that contains a putative I-BAR domain, which binds to and stabilizes membranes with negative membrane curvature (Itoh et al., 2016). Therefore, Ivy1 may contribute to the invagination of the vacuolar membrane at contact sites between LDs and vacuolar membrane Lo microdomains (Figure 4IV).
Lo Microdomain-Independent, ESCRT-Dependent µLP in Yeast
µLP is induced by the diauxic shift from glycolysis to respiration-driven metabolism during late log phase in yeast (Oku et al., 2017). Moreover, in response to ER stress, LDs that contain unfolded ER proteins are targeted for degradation by µLP (Vevea et al., 2015). Although many stressors induce Lo microdomain formation in the vacuolar membrane, LDs do not form MCS with the vacuole at Lo microdomains during µLP induced by ER stressors or the diauxic shift in yeast. Rather, under these conditions, LD-vacuole MCS form at Ld domains in the vacuolar membrane that contain Vph1, which is excluded from Lo microdomains (Vevea et al., 2015;Oku et al., 2017). In addition, ESCRT complex proteins are upregulated and recruited to sites of membrane scission at these LD-vacuole MCS, and are required for Lo microdomain-independent µLP in yeast (Vevea et al., 2015;Oku et al., 2017) (Figure 4I). The mechanisms underlying LD-vacuole MCS formation during ER stress-induced µLP are not well understood. However, recent studies indicate that ER stressors induce vacuolar fragmentation in yeast. Moreover, LDs develop persistent interactions with clusters of fragmented vacuoles during Lo microdomain-independent µLP, which supports MCS between LDs and one or more vacuoles during this process. The fragmented vacuoles fuse to form a cup-shaped structure surrounding LDs, and then engulf the LDs. ER stress-induced µLP is blocked by inhibition of this vacuolar fusion. Overall, these studies show that vacuolar fragmentation, clustering and fusion around LDs occur during stress-induced µLP, but ongoing studies are needed to determine more of the components and regulators of the MCS involved in µLP. Additionally, it has been discovered that the deletion of Rab7, a protein implicated in LD-lysosome MCS, results in accumulation of enlarged, clustered lysosomal compartments (MVBs) in mammalian cells (Schroeder et al., 2015), so it is possible that the clustering and fusion of degradative compartments is a conserved component of the µLP pathway.
LD-Lysosome MCS During Microlipophagy (µLP) in Mammalian Cells
LD degradation by macroautophagy has been studied extensively in mammalian cells. However, LD microautophagy (µLP) also occurs in mammalian cells, as revealed in recent studies of nutrient limitation in hepatocytes (Schulze et al., 2020). These studies documented formation of MCS between LDs and lysosomes, and uptake of LD segments or of intact LDs into lysosomes at invaginations in the lysosome membrane. Specifically, live-cell visualization of pH-sensing mRFP1-GFP targeted to the LD marker protein PLIN2 revealed persistent (>60 s) interactions between LDs and lysosomes and uptake of LDs into the acidic lumen of the lysosome under nutrient-limited conditions (Figure 4III). Interestingly, nutrient limitation resulted in an increase in the frequency of persistent LD-lysosome contacts. Moreover, silencing of canonical macroautophagy or CMA components has no effect on persistent LD-lysosome contacts, and EM studies revealed that MCS formation between LDs and lysosomes occurs in the absence of double-membrane, autophagosome-like structures. These findings provide the first evidence that LD degradation in response to nutrient limitations can occur by µLP in mammalian cells (Schulze et al., 2020). The mechanism underlying µLP in mammalian cells is not well understood. However, emerging evidence supports a role for Rab7, a small GTPase and important regulator of endocytic trafficking, in LD-lysosome MCS formation in hepatocytes (Schroeder et al., 2015). Specifically, nutrient limitations result in recruitment of Rab7 to LDs, and an increase in MCS between LDs and degradative compartments including lysosomes, MVBs and late endosomes. Moreover, depletion of Rab7, or inactivating mutation of Rab7, inhibits interactions of LDs and degradative compartments and results in an accumulation of enlarged, clustered MVBs and an overall inhibition of starvation-induced LD degradation. This raises the interesting possibility that Rab7 mediates contact site formation between LDs and lysosomes directly, or by promoting MCS formation between LDs and late endosomes/MVBs (amphisomes), and that late endosomes/MVBs at these MCS mature to form lysosomes (Schroeder et al., 2015). Interestingly, Rab7 has also been implicated in LD activities that may affect LD MCS through effects on vacuolar fusion or LD motility.
LD-Lysosome MCS During CMA in Mammalian Cells
Although CMA typically targets soluble cytosolic proteins, the LD-associated perilipin proteins PLIN2 and PLIN3 are degraded by CMA at LD-lysosome MCS in cultured mammalian cells (Kaushik and Cuervo, 2015). PLIN2 functions in LD biogenesis, stability and trafficking and serves as a scaffold that regulates association of LDs with the macroautophagy machinery (Tsai et al., 2017). PLIN3 also regulates macroautophagy in a TORC1 (target of rapamycin complex 1)-dependent manner (Garcia-Macia et al., 2021). Starvation-induced CMA of PLIN2 and PLIN3 is mediated by the 70-kD heat shock protein, hsc70, which binds to the pentapeptide motifs LDRLQ on PLIN2 and SLKVQ on PLIN3, promotes phosphorylation of PLIN2 by 5′ AMP-activated protein kinase (AMPK), and delivers PLIN2 and PLIN3 to the lysosome-associated membrane protein 2A (LAMP2A) (Kaushik and Cuervo, 2015), the lysosomal membrane protein that facilitates translocation of CMA substrates from the lysosomal surface to the lumen (Chiang et al., 1989; Salvador et al., 2000; Bandyopadhyay and Cuervo, 2008). Deletion of the pentapeptide CMA recognition motif on PLIN2 results in an increase in PLIN2 levels and a decrease in association of LDs with lysosomes (Schweiger and Zechner, 2015). These findings are consistent with the model that hsc70 binds to LD-associated PLIN2 and that CMA of PLIN2 occurs at MCS between the LD and the lysosome (Figure 4IV).
CMA of PLIN2 and PLIN3 is triggered by stressors including nutrient limitation, oxidative and lipogenic stresses, and hypoxia (Cuervo et al., 1995;Kiffin et al., 2004;Dohi et al., 2012;Rodriguez-Navarro et al., 2012;Kaushik and Cuervo, 2015), and contributes to stressor-stimulated release of lipids from LDs. Specifically, removal of PLIN2 and PLIN3 from the LD surface promotes association of LDs with 1) cytosolic lipases (e.g., ATGL) that catalyze release of FA from TAG and 2) the LD macroautophagy machinery. In turn, this promotes the release of lipids from LDs after degradation by the lysosome (Kaushik and Cuervo, 2015). These findings support a function of LD-lysosome MCS, and a role for CMA in regulation of lipid homeostasis.
CYTOSKELETAL MODULATION OF LD-ORGANELLE INTERACTIONS
As described above, environmental cues including nutrient availability and exposure to stressors induce MCS formation between LDs and organelles including mitochondria, ER and lysosomes. The cytoskeleton plays a fundamental role in this process by controlling the position and movement of LDs and the organelles that interact with LDs. For example, in response to nutrient limitation, LDs change from a clustered to a dispersed distribution, which allows LDs to make contact with mitochondria for up-regulation of lipid metabolism (Herms et al., 2015; Nguyen et al., 2017; Kong et al., 2020). Although multiple mechanisms have been identified for cytoskeletal control of organelle motility, the best characterized mechanism relies on motor molecule-driven, polarized movement of organelles along actin or microtubule tracks. Here, we summarize cytoskeletal function in LD interactions and contact site formation with other organelles.
Evidence of Cytoskeleton-Directed LD Distribution and Motility
Cytoskeletal components and motors have been found on LDs in a variety of organisms, including fungi, plants, and mammals. Proteomic analysis of LDs revealed actin, tubulin, and motor proteins on LDs (Turró et al., 2006;Weibel et al., 2012;Brocard et al., 2017;Pfisterer et al., 2017;Yu et al., 2017;Zhi et al., 2017;Bersuker et al., 2018). In particular, a high-confidence LD proteome generated from proximity labeling confirmed that actin, tubulin, and a kinesin family protein, KIF16B, are recovered with isolated LDs (Bersuker et al., 2018). Additionally, immunofluorescence staining in rat adrenocortical cells and adipocytes showed that beta-actin is present on the LD surface (Fong et al., 2001).
The actin and microtubule cytoskeletal networks and their associated motor proteins are involved in LD morphology and distribution within the cell. For example, destabilization of the actin cytoskeleton by treatment with either cytochalasin D (CytD) or latrunculin-A decreases the size of LDs in J774 macrophages (Weibel et al., 2012). Destabilizing microtubules by nocodazole treatment also decreases LD size (Boström et al., 2005;van Zutphen et al., 2014;Gu et al., 2019). Presumably, this change in size results from a change in the balance between the addition and removal of LD cargo, which occurs at specific MCS. Consistent with this idea, the position and dynamics of LDs are also dependent on the cytoskeleton. Destabilization of actin filaments prevents LD movement from the vegetal pole to the animal pole in zebrafish embryos (Dutta and Kumar Sinha, 2015). Post-translational modifications of tubulin affect LD motility and distribution. For example, during nutrient deficiency, detyrosinated tubulins accumulate and form networks that promote LD dispersion in Vero cells (Herms et al., 2015). In contrast, acetylated tubulins immobilize LDs in hepatic cells (Groebner et al., 2019).
Although these studies reveal that morphology and distribution of LDs depend on the cytoskeleton, it is not always clear whether the effects observed upon global destabilization of microtubule or actin cytoskeletons are due to direct effects on LD-cytoskeleton interactions. However, the effects of disrupting motor proteins, which drive motility on cytoskeletal tracks, are less ambiguous. Both the actin-based motor myosin and the microtubule-based motors kinesin and dynein drive LD distribution and motility ( Figure 5) (Gross et al., 2000;Andersson et al., 2006;Shubeita et al., 2008;Knoblach and Rachubinski, 2015;Pfisterer et al., 2017;Rai et al., 2017;Gu et al., 2019;Veerabagu et al., 2020).
In some cases, specific motor proteins that drive LD motility have been identified. In budding yeast, anterograde movement of LDs from mother cells to buds relies on a type V myosin, Myo2p (Figure 5) (Knoblach and Rachubinski, 2015). During zebrafish development, inhibiting Myosin-1 with pentachloropseudilin alters the dynamics and distribution of LDs (Gupta et al., 2017). Knockdown of non-muscle myosin IIa (NMIIa) enlarges LDs and promotes their clustering in human osteosarcoma U2OS cells (Pfisterer et al., 2017). Post-translational modification of motor proteins also alters motor-LD interactions. For example, ERK-mediated phosphorylation of dynein increases its affinity for LDs. Motor knock-down studies are somewhat more specific than drug-induced cytoskeletal disruption, but still may be subject to pleiotropic effects because motor proteins are shared by multiple cargos. A more specific approach is to target the cargo adaptor proteins that bridge LDs and cytoskeletal/motor proteins, although these adaptors are less well understood. One recently identified adaptor is the LD protein perilipin 3 (PLIN3), which interacts with the dynein intermediate chain subunit, Dync1i1, in AML12 mouse hepatic cells (Figure 5) (Gu et al., 2019). Identifying more of these LD-specific cargo adaptor proteins will allow in-depth characterization of the biological function of LD-cytoskeletal interactions.
Functional Consequences of LD-Cytoskeleton Interactions
MCSs between LDs and other organelles play an important role in exchanging metabolites. Therefore, any change in the distribution or dynamics of those sites can affect their function. Indeed, not only the size, but also the lipid composition of LDs in J774 macrophages is changed by actin destabilization (Weibel et al., 2012). Destabilization of the actin cytoskeleton reduces the dissociation of LDs from peroxisomes in Arabidopsis (Cui et al., 2016). Microtubules are required for LD autophagy (Boström et al., 2005; van Zutphen et al., 2014; Gu et al., 2019). Nocodazole-treated COS-7 cells have fewer contact sites between LDs and mitochondria or peroxisomes, as well as fewer ternary contacts between LDs, peroxisomes, and Golgi (Valm et al., 2017).
LD-mitochondria interactions are crucial for mobilizing the energy stored in LDs (Rambold et al., 2015). When nutrients are depleted, Vero cells exhibit dispersion of LDs, and a concomitant increase in LD-mitochondria contacts, consistent with the need for lipid exchange and fatty acid metabolism. Starvation-induced LD-mitochondria contacts include both relatively short-lived interactions ("touch and go") and more stable connections (Herms et al., 2015). Microtubules are required for formation of these contacts (Valm et al., 2017), and the dispersion of LDs from the perinuclear area to the cell periphery specifically depends on detyrosinated microtubules. Detyrosination is promoted by the activation of the energy sensor, AMP-activated protein kinase (AMPK) (Herms et al., 2015). AMPK also phosphorylates PLIN3, which may induce conformational changes of PLIN3 to facilitate LD dispersion (Zhu et al., 2019). Given the link between PLIN3 and dynein and microtubules (Gu et al., 2019), the LD dispersion caused by phosphorylated PLIN3 may be due to the altered interaction between PLIN3 and dynein. Taken together, LD-mitochondria interactions are elevated upon starvation, and this response requires the microtubule network to shuttle LDs to mitochondria and facilitate lipid metabolism in this system. In another well-characterized system, microtubule-based motor proteins on the surface of LDs stimulate lipid transfer to the ER and therefore facilitate lipoprotein assembly in liver cells (Rai et al., 2017). In rat hepatocytes, LDs are actively transported by the motor molecule kinesin-1 on microtubules to the cell periphery, which promotes MCS formation between LDs and smooth endoplasmic reticulum (sER) (Barak et al., 2013; Rai et al., 2017; Kumar et al., 2019). Kinesin-1 is recruited to LDs by directly binding to phosphatidic acid (PA) (Kumar et al., 2019). However, this binding is dependent on the metabolic state of the cells. In nutrient-rich conditions, the GTPase ADP ribosylation factor 1 (ARF1) recruits PA-producing phospholipase-D1 (PLD1) to LDs, which results in elevation of PA levels on LDs and increased association of kinesin-1 with LDs (Wilfling et al., 2014; Rai et al., 2017; Kumar et al., 2019). These LDs are then actively transported to the cell periphery to form MCS with sER, which facilitates TAG production in sER and very low density lipoprotein (VLDL) assembly (Thiam et al., 2013; Rai et al., 2017; Kumar et al., 2019). In contrast, in the fasted state, insulin levels decrease, which downregulates the recruitment of ARF1 to the LDs. This diminishes microtubule-dependent LD movement and the formation of LD-sER MCSs at the periphery, resulting in reduced TAG levels (Kumar et al., 2019).

FIGURE 5 | LD-cytoskeleton interaction. Lipid droplets are transported on cytoskeletal fibers (actin filaments or microtubules) by cytoskeleton-associated motor molecules (myosins, kinesins, and dyneins). In most cases the adaptors linking motors to the LD surface are unknown.
Dynein and microtubules are also involved in LD biogenesis in the alcohol-induced liver damage model (Gu et al., 2019). High-alcohol diets induce accumulation of LDs and elevate the levels of perilipins in liver cells, including the dynein-interacting protein PLIN3. Moreover, immunofluorescence staining revealed that Dync1i1 colocalizes with LDs, and PLIN3 and LDs are partially colocalized with microtubules. Depolymerizing microtubules with nocodazole or knocking down PLIN3 inhibits LD biogenesis from LD-ER contact sites, which reduces the size and distribution of LDs in AML12 cells.
The examples discussed above illustrate the importance of cytoskeletal function in regulating interactions between LDs and other organelles. Cellular modulation of the number and dynamics of these MCS is vital for LD biogenesis, lipid secretion and lipoprotein assembly.
CONCLUSION AND FUTURE DIRECTIONS
MCS that form between LDs and organelles including mitochondria, ER and lysosomes/vacuoles function in LD biogenesis and in transfer of lipids, FAs, unfolded proteins and surplus or toxic proteins to or from LDs. Moreover, emerging evidence supports a role for the cytoskeleton in formation of MCS between LDs and other organelles by controlling the position and movement of LDs in response to environmental cues. However, fundamental questions regarding LD MCS remain unanswered. While many tethers that link LDs to mitochondria under conditions of nutrient limitation have been identified, the mechanisms that regulate LD-mitochondria MCS formation and loss are not well understood. Although LD-ER contact sites have an established function in LD biogenesis, the mechanism underlying scission of nascent LDs from ER membranes at LD-ER MCS is not known. The finding that LDs function in ER proteostasis through transfer of unfolded proteins from ER to LDs at LD-ER MCS revealed a novel function for LDs. However, it is not clear whether this process is linked to LD biogenesis. Indeed, if mature LDs can associate with the ER to remove unfolded proteins and mitigate ER stress, the proteins that serve as tethers at those LD-ER MCS and the mechanisms that promote those MCS remain unknown. Moreover, the proteins that tether LDs to lysosomes/vacuoles; how liquid ordered (L o ) and disordered (L d ) domains in the vacuolar membrane contribute to MCS and vacuolar membrane dynamics at those sites; and the mechanism underlying Rab7 function in LD-lysosome/vacuole MCS in mammalian cells and yeast during µLP are all open questions. Finally, while it is clear that cytoskeleton-dependent LD motility is critical for association of LDs with other organelles in response to nutritional cues, the cytoskeleton may contribute to MCS by other mechanisms including force generation for membrane deformation or scission or for transfer of constituents to and from LDs. | 2022-02-24T14:17:24.988Z | 2022-02-24T00:00:00.000 | {
"year": 2022,
"sha1": "1a259e9243c3ce07983ca1ae5d663af8323060d4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "1a259e9243c3ce07983ca1ae5d663af8323060d4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
105025357 | pes2o/s2orc | v3-fos-license | An efficient one-pot conversion of carboxylic acids into benzimidazoles via an HBTU-promoted methodology
Benzimidazole is a privileged and routinely used pharmacophore in the drug discovery process. Herein, we report a mild, acid-free and one-pot synthesis of indole, alkyl and alpha-amino benzimidazoles through a novel HBTU-promoted methodology. An extensive library of indole-carboxylic acids, alkyl carboxylic acids and N-protected alpha-amino acids has been converted into the corresponding benzimidazoles in 80–99% yield. Since alpha-aminobenzimidazoles are highly useful synthons as chiral ligands for chemical catalysis, as well as for drug discovery endeavors, our reported method provides direct access to this scaffold in a simple, one-pot operation from commercially available carboxylic acids.
Introduction
Heterocyclic structures have been extensively utilized during the process of drug development. [1][2][3] The presence of heterocycles modulates physicochemical properties and the pKa profile of therapeutic leads. Additionally, nitrogen substitution enables a useful functional handle for further derivatization. In this vein, the benzimidazole core (1, Fig. 1) has become a highly sought-after and privileged pharmacophore in drug discovery. [4][5][6][7][8] Benzimidazole's structural similarity to purine makes it a key structural motif in drug design. 6 Hence, this important pharmacophore is commonly encountered in drugs used for the treatment of cancer, infectious diseases, hypertension and other illnesses (Fig. 2). Benzimidazole is also actively used in drug leads that exhibit a broad range of pharmacological activities, including anticancer, antibacterial, and anti-inflammatory activities. [4][5][6][7][8] Due to the high prevalence of benzimidazoles within medicinal organic molecules, there has been considerable interest in developing efficient approaches for their synthesis. 9-14 One of the common synthetic approaches employed to access benzimidazoles involves a condensation–dehydration sequence of an o-aryldiamine with an aryl-aldehyde substrate under mild conditions. A second approach involves reaction of an o-aryldiamine with carboxylic acid derivatives under forcing conditions, in the presence of a mineral acid or acetic acid, and at refluxing temperatures. Other recent methods for the preparation of benzimidazole derivatives include several examples of transition metal-catalysed C-N coupling of N-(2-haloaryl)amidines with 1,2-phenylenediamines, and the intramolecular oxidative C-N couplings of arylamidines with N-substituted 1,2-phenylenediamines via TEMPO-air promoted oxidative coupling. [12][13][14] The most commonly used method (Phillips' method) 9d involves the condensation of an o-aryldiamine with carboxylic acids or their derivatives, including heating the reagents together in the presence of aqueous hydrochloric acid (Scheme 1). Due to the harsh nature of the reaction conditions, the substrate scope for this method is very limited, as sensitive functional groups are less likely to survive such conditions. Additionally, the limited availability of substrates for other available methods often impedes their application during medicinal chemistry efforts.
In recent years, alpha-aminobenzimidazoles (3 and 4) have emerged as potent drug leads for infections, cancer and autoimmune diseases, 15-17 as well as metal-binding motifs. 5 Due to the limited synthetic utility of existing methods, we envisioned developing a mild, functional group tolerant method for accessing a diverse class of benzimidazole synthons (Scheme 2). Such a methodology would greatly benefit our research group and others who are interested in developing benzimidazole-containing peptides as drug leads, 16 and chiral benzimidazoles as ligands. The reported study explores a simple, yet reliable approach to access a structurally diverse library of benzimidazoles that includes amino acid derived alpha-aminobenzimidazoles, alkyl benzimidazoles and indole benzimidazoles.
Results and discussion
We began our exploration by synthesizing the 'amide' intermediate via a standard O-(benzotriazol-1-yl)-N,N,N′,N′-tetramethyluronium hexafluorophosphate (HBTU) assisted coupling approach. Once we had the amide intermediate, we isolated it and proceeded to screen mild conditions to perform a dehydrative cyclization of the amide into the benzimidazole. Our initial attempts to perform the dehydrative cyclization under mild basic or acidic conditions (Table 1, entries 2-4) failed to yield the desired product. We then realized that carbodiimides are known for their ability to promote oxidation or dehydration. 20,21 Based on this knowledge, we selected several commonly used carbodiimide-based coupling agents, including N,N′-diisopropylcarbodiimide (DIC), 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDCI), O-(1H-6-chlorobenzotriazol-1-yl)-1,1,3,3-tetramethyluronium hexafluorophosphate (HCTU) and HBTU to study the dehydrative cyclization (Table 1, entries 5-9). For this investigation, a Boc-valine derived amide (13) was used as a model substrate. We discovered that under optimized conditions, HBTU (1 equiv.) gave the best conversion of the amide 13 into the corresponding benzimidazole 19c (Table 1, entry 7). Additionally, we found that a catalytic amount of HBTU (0.3 equiv.) is ineffective in providing the desired product in high yield. Although DIC, EDCI and HCTU are useful carbodiimide agents for amide formation, these agents did not afford the desired product. This is perhaps due to the reduced reactivity of these coupling agents. Finally, we also found that (benzotriazol-1-yloxy)tripyrrolidinophosphonium hexafluorophosphate (PyBOP), a more reactive coupling agent compared to DIC and EDCI, indeed exhibited desirable reactivity towards the dehydrative cyclization.

Scheme 1 A most commonly used synthetic method of benzimidazoles (Phillips' method).
Scheme 2 Proposed one-pot, two-step synthesis of Boc-protected amino acid derived benzimidazole (12).

Phosphonium-based coupling agents are useful activating reagents for amide formation and cyclization of thioureas. 22 Since the amide precursor is synthesized via an HBTU-activated process, we attempted a one-pot strategy to form the amide followed by benzimidazole formation by using 2 equivalents of HBTU. This approach worked remarkably well as a one-pot process. Initial formation of the aryl-amide occurred with high efficiency within 4 hours at room temperature, and a subsequent one-pot, HBTU-promoted cyclization at refluxing temperature yielded the desired product in less than 3 hours. The two-step, one-pot synthesis worked extremely well in PhMe, yielding 19c in 96% isolated yield (Scheme 3). To further confirm the role of HBTU in this process, we performed a series of control reactions to rule out the role of other components present in the reaction mixture from the amidation step. We observed that none of the components in the reaction mixture that are part of the amidation reaction, nor byproducts formed during the amide formation, promoted the conversion of the amide into the benzimidazole (Table 1, entries 10 and 11). These observations led us to conclude that HBTU is an effective promoter of benzimidazole synthesis, and it is presumably behaving like an activating agent.
To investigate the versatility of solvents, we performed the benzimidazole synthesis in 1,4-dioxane, DMF or PhMe. The coupling was performed in one of these solvents, and subsequently the crude reaction mixture was subjected to cyclization at refluxing temperature in the same solvent. Based on the high yields obtained, all three solvents were suitable for this operation, providing great flexibility in the choice of solvent. Since the solubility of substrates (i.e. peptides and indoles) in toluene tends to be a limitation for synthesis, we have demonstrated that either DMF or 1,4-dioxane can be employed to overcome this limitation.
In comparison to reported methods for benzimidazole synthesis, our approach highlights several important improvements and advantages. First, the benzimidazole synthesis is a high-yielding, one-pot process, whereas current methods require isolation of the aryl-amide prior to dehydrative cyclization. Second, there is no need to perform the cyclization in the presence of an acid as a co-solvent, which greatly broadens the substrate scope and the functional group tolerability, including various protecting groups found in amino acids and peptides. Third, the reaction works extremely well in three different solvents, enabling synthetic flexibility for substrates with limited solubility.
We then investigated the substrate scope using a small library of commercially available carboxylic acids (Scheme 4). Boc-Asp-OMe (14b) was successfully converted to the beta-benzimidazole derivative (15b) in 92% yield, providing a unique non-protein amino acid that is useful for medicinal chemistry. Additionally, four different aliphatic carboxylic acids (14c-14f) were converted into the corresponding benzimidazoles in high yield. As peptide substrates are of prime interest to us and others in the field, a Cbz-protected dipeptide (14g) was successfully transformed into the C-terminal benzimidazole derivative (15g) in good yield as well. This demonstrates the utility of the reported method for the synthesis of peptide-based benzimidazoles for drug discovery.
During our investigation, we realized that indole-2-carboxylic acid is a privileged substrate and the corresponding benzimidazole is widely used in drug discovery efforts. 18,19 We have successfully synthesized various indole-2-benzimidazoles (17a-17f), where the aryl ring of the benzimidazole is substituted with different functional groups (Scheme 5). We envisioned that having halogen substitution on the aryl ring provides a useful chemical handle for further structure diversification via Pd-catalysed cross-coupling reactions. We also noticed that electron-rich 1,2-diaminobenzene derivatives gave higher yields than electron-deficient ones. This may be due to the change in nucleophilicity of the diamine.
To further validate our method, we proceeded to synthesize an extensive library of alpha-amino acid derived benzimidazoles. There are two reasons for this endeavor: (i) we wanted to access a structurally diverse collection of alpha-amino benzimidazoles from commercially available amino acids with suitable protecting groups; and (ii) we envisioned accessing alpha-amino acid precursors for the synthesis of peptide-based benzimidazoles. This library includes thirteen Boc-protected amino acids (18a-18n, Table 2) and three Cbz-protected amino acids (22-24, Fig. 3). We reacted all Boc-amino acids with 1,2-diaminobenzene under the optimized, one-pot reaction conditions, and isolated the desired benzimidazoles (19a-19n) in excellent yield. We also learned that many side-chain protecting groups, including benzyl ether, benzyl thioether, and benzyl esters, are stable to the reaction conditions. Additionally, the side chain of Boc-Asn-OH (18l) required no protecting group to generate the Boc-Asn derived benzimidazole 19l. It is also interesting to note that compound 19d is structurally similar to veliparib, a poly(ADP-ribose) polymerase (PARP) inhibitor that is in clinical trials. 23 One interesting observation was made when Boc-Gln-OH (18m) was reacted to form the corresponding benzimidazole. The absence of a protecting group on the side chain yielded an interesting tricyclic structure (19m). Based on literature precedent, 24 we propose that the side-chain amide underwent a transamidation reaction with the benzimidazole nitrogen, generating the unique tricyclic product 19m. Since compound 19m has an amine handle and a conformationally distinct tricyclic structure, it could be a useful synthon for medicinal chemistry efforts.
During the synthesis of the various alpha-aminobenzimidazoles (Table 2), we investigated the extent of epimerization of the alpha-chiral centre. Current synthetic methods limit access to amino acid derived benzimidazoles in high enantiopurity and high yield. Although our approach is superior in terms of mild reaction conditions and high yield, we were unable to obtain these benzimidazoles in high enantiopurity. Our initial analysis showed that the amino acid derived products are obtained as a racemic mixture or with poor ee. We believe that the basic components (i.e. DIPEA, 1,2-diaminobenzene) used during the coupling are perhaps epimerizing the relatively acidic alpha-proton of the benzimidazoles. Alternatively, the epimerization may be occurring during the dehydrative cyclization process, where the pKa of the alpha-proton is significantly lowered. Our initial efforts to overcome this limitation by replacing the base with a milder base (i.e. N-methylmorpholine or pyridine) failed to improve the enantiopurity of the benzimidazoles. We are continuing to investigate various conditions to overcome the problem of epimerization.
We have also utilized the reported method for the synthesis of two halogenated analogues of Boc-tyrosine derived benzimidazoles (20 and 21), and three N-Cbz protected amino acid derived benzimidazoles (22-24). Using these substrates, we demonstrated that both Boc and Cbz carbamates are tolerated. Additionally, having a halogen handle on the aryl ring of the benzimidazole provides a new avenue to diversify the benzimidazole core for medicinal chemistry purposes.
In addition to being an effective coupling agent, we believe that HBTU (26) plays a very important role in the cyclization process. Herein, we propose a plausible mechanistic pathway, which may explain the HBTU-promoted formation of the benzimidazole (Scheme 6). The intermediate aryl-amide (25) is relatively stable, and for it to undergo dehydration, it needs to be activated. We propose that HBTU (26) helps in the activation of the amide, where the oxygen atom of the amide reacts with the carbodiimide motif first. Following the attack of the amide oxygen, a molecule of 1-hydroxybenzotriazole (HOBt, 29) is lost from HBTU. In the subsequent step, the second aryl-amine motif (28) reacts to expel a molecule of tetramethylurea (30) and forms the desired benzimidazole (2). Based on LC-MS and 1H NMR analyses, we confirmed the formation of the key byproducts, HOBt (29) and tetramethylurea (30), during the conversion of the amide substrate into the benzimidazole. This finding provides experimental support for this mechanistic proposal.
In a recent report, amino acid derived thiazoles have been shown to be potent modulators of P-glycoprotein, which contributes to drug resistance in cancer cells. 25 Since the amino acid benzimidazoles we have generated, including 20, 21, 23 and 24, are isosteres of the reported thiazole derivatives, we plan to evaluate these compounds for potential P-glycoprotein binding affinity and reversal of anticancer drug resistance. The proposed synthetic approach provides convenient entry to prepare and evaluate potentially bioactive P-glycoprotein modulators.
Experimental section
General procedure for the conversion of carboxylic acids into benzimidazoles

To a solution of a commercially available carboxylic acid (1.0 equiv.) in 30 mL of toluene or DMF was added N-ethyldiisopropylamine (1.9 equiv.) and the solution was stirred for 10 minutes at room temperature. To the stirring solution, HBTU (2 equiv.) was added and the reaction mixture was stirred for another 10 min. To the reaction mixture, o-phenylenediamine (1 equiv.) was added and the mixture was stirred for 4 hours. Thereafter, the reaction mixture was heated under reflux for 3 hours. The reaction was cooled to room temperature, after which the solvent was removed in vacuo in the case of toluene; for DMF, the reaction mixture was diluted with water and the desired product was extracted using ethyl acetate (EtOAc). The organic layer was dried over anhydrous sodium sulphate, filtered and concentrated in vacuo. The crude product was purified by column chromatography using hexanes/EtOAc of increasing polarity up to a 1:1 mixture. The fractions containing the desired product were concentrated and recrystallized from hexanes/EtOAc (1:1) to yield the desired product. All compounds were fully characterized using 1H NMR, 13C NMR, LC-MS and IR (see ESI† for details).
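Because the procedure is specified in equivalents, the actual masses scale with the acid chosen; a small helper like the following makes that arithmetic explicit. This is our own convenience sketch — the molecular weights and the example scale are illustrative inputs typed in by the user, not values from the paper.

```python
# Convenience sketch: scale the general benzimidazole procedure to an
# arbitrary carboxylic acid. Equivalents follow the text (acid 1.0,
# DIPEA 1.9, HBTU 2.0, o-phenylenediamine 1.0); molecular weights below
# are approximate and purely illustrative.

EQUIVALENTS = {
    "carboxylic acid": 1.0,
    "DIPEA": 1.9,
    "HBTU": 2.0,
    "o-phenylenediamine": 1.0,
}

def reagent_masses(acid_mmol, mol_weights):
    """Return mg of each reagent for acid_mmol of the limiting acid.
    mol_weights: g/mol for each reagent name in EQUIVALENTS."""
    return {
        name: acid_mmol * eq * mol_weights[name]  # mmol * g/mol = mg
        for name, eq in EQUIVALENTS.items()
    }

# Example: 1.0 mmol of Boc-Val-OH (MW values are approximate).
mw = {
    "carboxylic acid": 217.3,      # Boc-Val-OH
    "DIPEA": 129.2,
    "HBTU": 379.2,
    "o-phenylenediamine": 108.1,
}
for name, mg in reagent_masses(1.0, mw).items():
    print(f"{name}: {mg:.1f} mg")
```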
Conclusions
We have reported a convenient and mild methodology to readily convert commercially available carboxylic acids, including indole-carboxylic acids, aliphatic carboxylic acids and alpha-amino acids, into the corresponding benzimidazoles. Due to the increased interest in such synthons in drug discovery, our approach provides access to structurally diverse benzimidazoles in a one-pot operation. The methodology is high yielding, acid-free and tolerates various common functional groups. As our research group and several other groups are highly interested in structurally unique benzimidazoles, our methodology is a considerable addition to the fields of organic chemistry and medicinal chemistry.
Conflicts of interest
The authors declare no conflict of interest. | 2019-04-10T13:12:13.350Z | 2018-10-15T00:00:00.000 | {
"year": 2018,
"sha1": "1cc94c8b0eded294cb07bea82390e92ddf81dea0",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/ra/c8ra07773h",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "651195868e57b8aada367a8ba0eddb5882796d86",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
252596237 | pes2o/s2orc | v3-fos-license | The Mittag-Leffler condition descents via pure monomorphisms
This note aims to clarify the proof given by Raynaud and Gruson that the Mittag-Leffler property descends via pure ring monomorphisms of commutative rings. A consequence of that is that projectivity descends via such ring homomorphisms; a revision of the proof also allows one to prove that the property of being pure-projective likewise descends via pure monomorphisms between commutative rings.
In their fundamental paper [6], Raynaud and Gruson introduced the class of Mittag-Leffler modules. They proved how useful such a notion was, showing a number of striking results. One of them was the descent of projectivity via pure ring monomorphisms of commutative rings [6, Théorème II.3.1.3] (or universally injective maps, as they are named in [6]).
It seems there has been some misunderstanding in the literature because, as noted by Gruson in the paper [3], statement [6, Proposition II.2.5.2] is wrong. The descent of projectivity via pure monomorphisms is stated in [6, Examples II.3.1.4], which are presented as a consequence of the wrong statement, and no correction for that is given in [3]. However, to conclude the descent of projectivity via pure monomorphisms only [6, Proposition II.2.5.1, Théorème II.3.1.3] are needed, and these results are perfectly correct in the original paper.
The descent of projectivity means that if R → T is a pure ring monomorphism of commutative rings, and M is a flat R-module, then M is projective as an R-module if and only if M ⊗ R T is a projective T-module. This result was reproved in [5] in the case where the ring homomorphism R → T is faithfully flat. In [1], it was reproved for the case of pure monomorphisms. In both papers, it was also observed that results of Brewer and Rutter [2, Theorem 2] allow one to state the result in the following way: Let R → T be a pure ring monomorphism of commutative rings, and let M be an R-module. Then M is projective as an R-module if and only if M ⊗ R T is a projective T-module.
In a recent preprint, Herbera, Prihoda and Wiegand have observed that suitable modifications of the original arguments due to Raynaud and Gruson also allow one to show that pure projectivity descends via pure monomorphisms of commutative rings [4]. The proof by Raynaud and Gruson is based on [6, Proposition II.2.5.1], which shows that the Mittag-Leffler property descends via pure monomorphisms of commutative rings. This result is reproduced in Proposition 2.3.
To make clear that the result in [4] is correct, we have written this short note. In Section 1 we introduce the characterization of Mittag-Leffler modules that allows one to prove [6, Proposition II.2.5.1]. The proof of the latter result is then included in Proposition 2.3. Finally, in the third section we include a detailed proof of the descent of pure projectivity and, as a consequence, the descent of projectivity.
We stress the fact that we are just reproducing arguments that are already in [6].
Definition 1.1. A right R-module M is said to be Mittag-Leffler if the canonical map ρ : M ⊗ R ∏ i∈I Q i → ∏ i∈I (M ⊗ R Q i ) is injective for any family {Q i } i∈I of left R-modules.
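The displayed form of this map did not survive extraction; the following LaTeX rendering restates it (the presentation, not the content, is ours):

```latex
% Canonical map defining the Mittag-Leffler property.
\[
\rho \colon M \otimes_R \prod_{i \in I} Q_i \longrightarrow
            \prod_{i \in I} \left( M \otimes_R Q_i \right),
\qquad
\rho\bigl(m \otimes (q_i)_{i \in I}\bigr) = (m \otimes q_i)_{i \in I}.
\]
% For M = R the map is the identity, and it remains bijective for
% finitely generated free M; this is the observation behind Remark 3.1.
```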
The following characterization of Mittag-Leffler modules is also due to Raynaud and Gruson. It is also reproved in [5].

Proposition 1.2. Let M be a right R-module. The following statements are equivalent:
(i) M is a Mittag-Leffler module.
(ii) Whenever (F α , u βα : F α → F β ) α≤β∈Λ is a directed system of finitely presented modules with M = lim → F α , then for any α ∈ Λ there exists β ≥ α such that, for any left R-module Q, ker (u βα ⊗ R Q) = ker (u γα ⊗ R Q) for any γ ≥ β ∈ Λ.
(iii) There exists a directed system of finitely presented modules (F α , u βα : F α → F β ) α≤β∈Λ such that M = lim → F α and satisfying the condition in (ii).

The key to prove Proposition 1.2 is the following Lemma.
Lemma 1.3. [The full statement and its push-out diagram were lost in extraction; the maps u, v, u′ and w are the sides of a push-out square, as sketched after the proof.] In particular, ker u = ker v if and only if u′ and w are monomorphisms.
Proof. The proof is easily done using the push-out property combined with element-chasing.
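Since the diagram itself did not survive extraction, the following sketch reconstructs the generic push-out square one would draw here and records the element chase; the labels are taken from the surviving text, but the shape of the square is our assumption.

```latex
% Requires \usepackage{amscd}. Generic push-out square of R-module
% homomorphisms; u' and w are the push-out completions of u and v.
\[
\begin{CD}
A @>{u}>> B \\
@V{v}VV   @VV{w}V \\
C @>>{u'}> D
\end{CD}
\]
% Element chase: commutativity gives w(u(a)) = u'(v(a)) for all a in A.
% If u' is a monomorphism and a \in \ker u, then u'(v(a)) = w(u(a)) = 0
% forces v(a) = 0, so \ker u \subseteq \ker v; the symmetric argument
% with w monic gives the reverse inclusion, whence \ker u = \ker v.
```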
Using Lemma 1.3, the characterization of Proposition 1.2 can be rewritten in the following fancy way:

Proposition 1.4. Let M be a right R-module. The following statements are equivalent:
(i) M is a Mittag-Leffler module.
(ii) Whenever (F α , u βα : F α → F β ) α≤β∈Λ is a directed system of finitely presented modules with M = lim → F α , then for any α ∈ Λ there exists β ≥ α such that, for any γ ≥ β ∈ Λ, the homomorphisms w γα and u′ βα in the push-out diagram [not recovered from the source] are pure monomorphisms.
(iii) There exists a directed system of finitely presented modules (F α , u βα : F α → F β ) α≤β∈Λ such that M = lim → F α , and satisfying that for any α ∈ Λ there exists β ≥ α such that, for any γ ≥ β ∈ Λ, the homomorphisms w γα and u′ βα in the push-out diagram [not recovered from the source] are pure monomorphisms.

[The statement of Proposition 2.1 and the first part of its proof were lost in extraction. As used below, Proposition 2.1 asserts that for a pure ring monomorphism ϕ : R → T and a homomorphism u of R-modules, u is a pure monomorphism of R-modules if and only if u ⊗ R T is a pure monomorphism of T-modules.]

To prove the converse, assume that u ⊗ R T is a pure monomorphism of T-modules. We claim it is also a pure monomorphism of R-modules. Indeed, for any R-module Q there is a commutative diagram in which the lower row is a monomorphism, and then so is the upper row.
Note also that for any R-module X, the embedding X ⊗ R ϕ : X → X ⊗ R T is a pure monomorphism of R-modules.
Finally, the commutativity of the diagram [not recovered from the source], together with the fact that X ⊗ R ϕ is a pure monomorphism of R-modules, implies that u is also a pure monomorphism of R-modules, as we wanted to prove.

[The statement of Proposition 2.3 was lost in extraction. As used below, it asserts that for a pure ring monomorphism R → T of commutative rings, an R-module M is Mittag-Leffler if and only if M ⊗ R T is a Mittag-Leffler T-module.]

Proof. Assume that M R is a Mittag-Leffler R-module. For any family of T-modules {Q i } i∈I the canonical map T ⊗ T ∏ i∈I Q i → ∏ i∈I (T ⊗ T Q i ) is an isomorphism. Hence, the composition of maps [display not recovered] is injective, and M ⊗ R T is a Mittag-Leffler T-module. Conversely, assume that M ⊗ R T is a Mittag-Leffler T-module. By Proposition 1.2, for any α ∈ Λ there exists β ≥ α such that, for any left T-module Q, ker (u βα ⊗ R T ⊗ T Q) = ker (u γα ⊗ R T ⊗ T Q) for any γ ≥ β ∈ Λ. In view of Proposition 1.4, and since tensor products preserve push-out diagrams, this is equivalent to saying that for any α there exists β ≥ α such that, for any γ, in the push-out diagram [not recovered] the maps w γα ⊗ R T and u′ βα ⊗ R T are pure monomorphisms of T-modules. By Proposition 2.1, we deduce that w γα and u′ βα are pure monomorphisms of R-modules. By Proposition 1.4, we deduce that M is a Mittag-Leffler R-module.
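The commutative diagram invoked in the converse of Proposition 2.1 above was lost in extraction; the following LaTeX sketch reconstructs the standard square one would draw here. The shape of the diagram is our reconstruction, with ϕ : R → T the pure ring monomorphism.

```latex
% Requires \usepackage{amscd}. Sketch for the converse of Prop. 2.1:
% u : M -> N with u \otimes_R T a pure monomorphism of T-modules.
\[
\begin{CD}
Q \otimes_R M @>{Q \otimes_R u}>> Q \otimes_R N \\
@VVV @VVV \\
Q \otimes_R M \otimes_R T @>>{Q \otimes_R u \otimes_R T}> Q \otimes_R N \otimes_R T
\end{CD}
\]
% The bottom row identifies with (Q \otimes_R T) \otimes_T (u \otimes_R T),
% hence is injective; the left vertical map is X \otimes_R \varphi for
% X = Q \otimes_R M, which is injective because \varphi is pure.
% Commutativity then forces the top row Q \otimes_R u to be injective
% for every R-module Q, i.e. u is a pure monomorphism of R-modules.
```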
Descent of pure projectivity
A module M is said to be pure projective if the functor Hom R (M, −) is exact on pure short exact sequences. Equivalently, M is pure projective if it is a direct summand of a direct sum of finitely presented modules.
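Spelled out, the first formulation says that Hom R (M, −) carries pure short exact sequences to short exact sequences; a compact rendering:

```latex
% M is pure-projective when, for every pure short exact sequence
% 0 -> A -> B -> C -> 0 of R-modules, the induced sequence
\[
0 \to \operatorname{Hom}_R(M,A) \to \operatorname{Hom}_R(M,B)
  \to \operatorname{Hom}_R(M,C) \to 0
\]
% is exact, i.e. every map M -> C lifts along B -> C. Equivalently,
% M is a direct summand of a direct sum of finitely presented modules.
```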
Pure-projective modules always decompose into a direct sum of countably presented pure-projective submodules. Now we include the proof that pure projectivity descends via pure monomorphisms. We reproduce this result from [4, §8].
First we recall the results that relate pure-projective modules and Mittag-Leffler modules.
Remark 3.1. The map ρ in the definition of Mittag-Leffler module is obviously bijective if M is a finitely generated free module. An easy diagram chase shows that it is also bijective if M is finitely presented. Thus finitely presented modules are Mittag-Leffler. Since the class of Mittag-Leffler modules is closed under direct summands and arbitrary direct sums, all pure-projective modules are Mittag-Leffler modules.

[The statements of Lemma 3.2 and Proposition 3.3 were lost in extraction. As used below, Lemma 3.2 provides that any countably generated submodule of M is contained in a countably generated pure submodule of M, and Proposition 3.3 states: Let R → T be a pure ring monomorphism of commutative rings, and let M be an R-module. Then M is pure projective if and only if M ⊗ R T is a pure projective T-module.]

Proof. If M R is pure projective then, clearly, M ⊗ R T is pure projective as a T-module. For the converse, assume that M ⊗ R T is a pure projective T-module. By Remark 3.1, M ⊗ R T is a Mittag-Leffler T-module, and then Proposition 2.3 implies that M is a Mittag-Leffler R-module. We need to prove that, in addition, M is pure projective. Fix a decomposition M ⊗ R T = ⊕ i∈I Q i into countably generated pure-projective T-modules, and call a pure submodule X of M adapted if the image of X ⊗ R T in M ⊗ R T equals ⊕ i∈I X Q i for some subset I X ⊆ I [the passage introducing this decomposition and the notion of adapted submodule was lost in extraction and is reconstructed from its later use]. For such X, the induced pure exact sequence 0 → X ⊗ R T → M ⊗ R T → (M/X) ⊗ R T → 0 is split exact. Therefore X ⊗ R T and (M/X) ⊗ R T, being isomorphic to direct summands of M ⊗ R T, are pure-projective as T-modules.
Step 1. Every countably generated pure submodule of M is contained in a countably generated adapted submodule of M .
Let X be a countably generated pure submodule of M . As M is Mittag-Leffler and the modules Q i are countably generated, we can construct a sequence (X n , I n ) n∈N0 such that (1) X 0 = X and, for every n ≥ 0, X n is a countably generated pure submodule of M and X n ⊆ X n+1 ; (2) for any n ≥ 0, I n is a countable subset of I and it consists of the elements i ∈ I such that the canonical projection of X n ⊗ R T in Q i is different from zero; (3) for any n ≥ 0, the image of X n+1 ⊗ R T contains ⊕ i∈In Q i . To be more specific, suppose X n , I n were defined. Since each Q i is countably generated, there exists a countable set G n ⊆ M such that the canonical image of G n R ⊗ R T in M ⊗ R T contains ⊕ i∈In Q i . By Lemma 3.2, there exists a countably generated X n+1 which is a pure submodule of M containing X n + G n R. Then I n+1 is chosen as described in (2).
Set Y = n∈N0 X n . By construction, Y is an adapted submodule of M .
Step 2. Let X be an arbitrary adapted submodule of M such that X ≠ M. Then there exists an adapted submodule X′ of M such that X ⊂ X′ and X′/X is a countably generated adapted submodule of M/X. Hence X′/X is pure-projective; therefore the pure exact sequence 0 → X → X′ → X′/X → 0 is split exact. By definition, if X is an adapted submodule of M then (M/X) ⊗ R T ≅ ⊕ i∈I′ Q i for a certain I′ ⊆ I. Hence, it makes sense to talk about adapted submodules of M/X with respect to the decomposition induced by F′ = {Q i } i∈I′ . By Step 1, there exists a submodule X′ of M containing X and such that X′/X is a countably generated adapted submodule of M/X. Therefore X′ is also an adapted submodule of M. Since X is an adapted submodule of X′, the rest of the statement is clear.
Finally, combining the first and the second steps, we deduce that there exist an ordinal κ and a continuous chain {X α } α<κ of adapted submodules of M such that (i) X 0 = 0, (ii) for any α + 1 < κ, X α+1 /X α is pure projective and a direct summand of X α+1 , and (iii) M is the union of the chain. It follows that M is pure projective, which completes the proof of Proposition 3.3.
Corollary 3.4. Let R → T be a pure ring monomorphism of commutative rings, and let M be an R-module. Then M R is projective if and only if M ⊗ R T is a projective T -module.
Proof. Since projective modules are exactly the pure-projective modules that, in addition, are flat, the statement follows from Proposition 3.3 and Corollary 2.2. | 2022-09-30T01:15:35.301Z | 2022-09-29T00:00:00.000 | {
"year": 2022,
"sha1": "2717111e8ddb3a0ddfb28cb69e4b4a3923c9a846",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2717111e8ddb3a0ddfb28cb69e4b4a3923c9a846",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
18861206 | pes2o/s2orc | v3-fos-license | Human Autoantibodies Reveal Titin as a Chromosomal Protein
Assembly of the higher-order structure of mitotic chromosomes is a prerequisite for proper chromosome condensation, segregation and integrity. Understanding the details of this process has been limited because very few proteins involved in the assembly of chromosome structure have been discovered. Using a human autoimmune scleroderma serum that identifies a chromosomal protein in human cells and Drosophila embryos, we cloned the corresponding Drosophila gene that encodes the homologue of vertebrate titin based on protein size, sequence similarity, developmental expression and subcellular localization. Titin is a giant sarcomeric protein responsible for the elasticity of striated muscle that may also function as a molecular scaffold for myofibrillar assembly. Molecular analysis and immunostaining with antibodies to multiple titin epitopes indicates that the chromosomal and muscle forms of titin may vary in their NH2 termini. The identification of titin as a chromosomal component provides a molecular basis for chromosome structure and elasticity.
Autoimmune diseases are characterized by the presence of multiple autoantibodies that react with components of nuclear, cytoplasmic, or surface origin (for review see Nakamura and Tan, 1992; Fritzler, 1997). In clinical medicine, autoantibodies have been used to establish diagnosis, estimate prognosis, follow the progression of a specific autoimmune disease, and, finally, increase our knowledge of the pathophysiology of autoimmunity. In cell biology, autoantibodies have been extremely useful as probes for the identification of novel proteins and isolation of their corresponding genes. Human autoimmune sera have been particularly useful in the study of the eukaryotic nucleus, where they have identified a wide range of nuclear antigens, including both single- and double-stranded DNA, RNA, histones, small nuclear RNA-binding proteins, transcription factors, nuclear lamins, heterochromatin-associated proteins, topoisomerase I and II, and centromere proteins (Tan, 1989, 1991; Earnshaw and Rattner, 1991; Fritzler, 1997).
Scleroderma (systemic sclerosis) is a multisystem connective tissue autoimmune disease of unknown etiology in which vascular lesions and tissue fibrosis are prominent features. Even though autoantibody production may be an epiphenomenon of autoimmune diseases, autoantibody targets in scleroderma are very specific (White, 1996). The autoantigens to which scleroderma sera typically react include topoisomerase I, centromere proteins, RNA polymerases, fibrillarin, and several other nucleolar antigens (LeRoy, 1996). However, autoantibodies of rare occurrence have been reported that react with antigens localized to metaphase chromosomes and to the centrosome (Jeppesen and Nicol, 1986;Nakamura and Tan, 1992).
Here, we report on the isolation of a Drosophila gene using a scleroderma serum that recognized an epitope on condensed mitotic chromosomes from both human cultured cells and early Drosophila embryos. Using this serum to screen a Drosophila expression library, we isolated the gene that encodes the chromosomal protein that proved to be the Drosophila homologue of vertebrate titin ( D-Titin ). Titin is a sarcomeric protein responsible for the elasticity of striated muscle and may also function as a molecular scaffold for the assembly of myofibrils (for review see Keller, 1995;Labeit and Kolmerer, 1995;Trinick, 1996;Labeit et al., 1997;Maruyama, 1997;Squire, 1997). We show that D-Titin is expressed early and continuously in striated muscle and that antibodies directed against two different, nonoverlapping domains of Drosophila TITIN label the Z-disks of Drosophila sarcomeres. The D-TITIN antibodies also stain condensed human and Drosophila mitotic chromosomes, consistent with the staining observed with the original scleroderma serum. Immunofluorescence with monoclonal and polyclonal antibodies against multiple epitopes of vertebrate titin further supports its localization to condensed mitotic human chromosomes, suggesting a role for titin not only in myofibrillar assembly and muscle elasticity, but potentially in the architecture of mitotic chromosomes.
As the name implies, titin is a giant protein. Individual filamentous titin molecules, which range in molecular mass from 2,993 to 3,700 kD, span a half-sarcomere from the Z-disk to the M-line, a distance of ∼1.2 μm in sarcomeres of relaxed skeletal muscle (Labeit and Kolmerer, 1995; Kolmerer et al., 1996; Sorimachi et al., 1997). Nearly 90% of titin's mass is comprised of Ig-like and fibronectin type III (FN3)-like repeats, which are distributed throughout most of the protein (Labeit et al., 1990; Maruyama et al., 1993; Labeit and Kolmerer, 1995). The I-band region of vertebrate titin also contains a domain rich in proline (P), glutamic acid (E), valine (V), and lysine (K) that varies from 163 to 2,200 residues, the so-called PEVK domain. The PEVK domain and the tandemly arranged Ig domains of the I-band region of titin confer elasticity to the titin filament (Linke et al., 1996; Trombitas et al., 1998). Titin has phosphorylation sites (Sebastyén et al., 1995), recognition sites for muscle-specific calpain proteases (Sorimachi et al., 1995; Kinbara et al., 1997) and a serine/threonine kinase domain near the COOH terminus (Takano-Ohmuro et al., 1992).
Titin may function as the scaffold upon which the sarcomeres are assembled into myofibrils (Keller, 1995; Trinick, 1996). Titin mRNA is expressed in myoblasts before fusion (Colley et al., 1990), and titin mRNA and protein are among the earliest molecules to localize within the developing sarcomere (Fulton and Alftine, 1997; van der Ven and Fürst, 1997). Titin binds to different proteins in each region of the sarcomere. In the Z-disk, the NH2 terminus of titin binds to the COOH-terminal region of α-actinin, an actin-binding protein that cross-links titin to actin filaments (Ohtsuka et al., 1997a,b; Sorimachi et al., 1997; Turnacioglu et al., 1997). In cardiac muscle, the NH2 terminus of titin binds to actin in the Z-disk near the Z/I-band junction (Trombitas and Granzier, 1997). The A-band region of titin provides a molecular template for the regular assemblies of thick filament proteins such as myosin, MyBP-C, and MyBP-H (C- and H-protein; Itoh et al., 1988; Fürst et al., 1989; Soteriou et al., 1993; Houmeida et al., 1995; Freiburg and Gautel, 1996). Titin may also bind to myosin II (Eilertsen et al., 1994). In the M-line, titin binds to M-protein and phosphorylated myomesin, two myosin-binding proteins that cross-link titin to myosin filaments (Eppenberger et al., 1981; Obermann et al., 1996, 1997). Functional evidence that titin acts as a myofibrillar scaffold derives from experiments where an NH2-terminal fragment of titin was fused to green fluorescent protein. This fusion protein, which localizes to the Z-disk, causes myofibrillar disassembly when overexpressed (Turnacioglu et al., 1997). The identification of titin, a gigantic protein important in both the structure and elasticity of muscle, as a chromosomal component has significant ramifications for understanding chromosome condensation and chromosome integrity during mitosis.
Staining of HEp-2 cells
Fixed HEp-2 cells (Kallestad, Chaska, MN) were blocked for 30 min at RT in PBS plus 0.1% Triton X-100 and 3% BSA (PBSTB), incubated for 1 h at room temperature in primary antibody, washed 3× for 5 min in PBSTB, incubated for 1 h with fluorescently labeled secondary antibody and washed 3× for 5 min in PBSTB. Cells were then incubated for 5 min at room temperature in 1.25 μg/ml propidium iodide (Sigma Chemical Co., St. Louis, MO) and washed twice in PBS. RNase A (100 μg/ml; Sigma Chemical Co.) was included during antibody incubations. Images were collected on a confocal microscope (Noran Instrument, Middleton, WI). Forty scleroderma sera, provided by D. Isenberg (King's College, London, UK), were tested at a range of dilutions from 1:25 to 1:5,000 in the initial screen. The chosen scleroderma serum was used at a 1:200 dilution for subsequent experiments. Vertebrate titin polyclonal and monoclonal antibodies were used at dilutions of 1:25 and 1:100, respectively (provided by S. Labeit, EMBL, Heidelberg, Germany and J. Trinick, Bristol University, Bristol, UK). All fluorescently labeled secondary antibodies were used at a dilution of 1:200 (Vector Laboratories Inc., Burlingame, CA). Several fixative procedures (acetone/MeOH, acetone, formaldehyde based), with and without prior Triton X-100 permeabilization, were tested and produced the same staining patterns.
Antibody Production
The α-LG polyclonal antiserum was made by immunizing rabbits with 450 μg of β-gal:D-TITIN fusion protein administered subcutaneously. The α-LG antiserum was affinity-purified as described (Earnshaw and Rattner, 1991) and used at a dilution of 1:4. To produce the α-KZ antiserum, an XhoI/EcoRI fragment of the most 5′ cDNA (see Fig. 2) encoding 636 residues was ligated in-frame to the pTrcHisA expression vector (Invitrogen Corp., Carlsbad, CA) and transformed into BL21 (DE3) cells. Protein was purified from inclusion bodies 3 h after induction with 0.1 mM IPTG (Rio et al., 1986). Rat polyclonal antibodies were raised (Covance Inc., Denver, PA) against 1 mg of renatured inclusion body protein. The α-KZ antiserum was used at 1:5,000 dilution for embryo immunostaining in Fig. 3 and at 1:500 for all other experiments. Equivalent dilutions of preimmune α-LG and α-KZ sera were used as controls, as well as secondary antibodies alone.
Drosophila Immunostaining and In Situ Hybridization
Embryo fixation and antibody staining were performed as described (Reuter et al., 1990). In situ hybridizations to whole-mount embryos were carried out as described (Tautz and Pfeifle, 1989), except formaldehyde was used in place of paraformaldehyde and levamisole was omitted from the staining reaction. Images were collected on an Axiophot microscope (Carl Zeiss, Inc., Thornwood, NY). Adult thoracic muscle and larval gut muscle were prepared for immunostaining and stained as described (Saide et al., 1989;Lakey et al., 1993). Texas red-phalloidin was used at a concentration of 0.1 U/ml (Molecular Probes, Inc., Eugene, OR). Images were collected on a Noran confocal microscope.
Human Scleroderma Serum Stains Mitotic Chromosomes
To identify novel nuclear components and to isolate and characterize the corresponding genes in Drosophila, we screened for human autoimmune sera that recognized nuclear components with cell cycle-dependent distribution in both human cells and early Drosophila embryos. Sera from 40 patients diagnosed with the autoimmune disease scleroderma were studied and only one serum was identified that gave chromosomal staining on both human epithelial HEp-2 cells and Drosophila 0-2 h embryos (Fig. 1 A). During interphase, when chromosomes are decondensed, low-level staining was visible throughout the nucleus, with the exception of the nucleoli. During prophase, staining with this serum colocalized with the condensing chromosomes. From metaphase through telophase, chromosomes were stained uniformly.
Cloning of Drosophila Titin
To isolate the corresponding gene in Drosophila, the human autoimmune scleroderma serum was used to screen a Drosophila genomic expression library (Goldstein et al., 1986). Out of 5 × 10⁶ plaque-forming units screened, five independent, overlapping genomic clones were isolated, each encoding several copies of a 71-amino acid repeat rich in proline, valine, glutamic acid, and lysine residues (Fig. 2 C). The largest clone (designated LG) was expressed in Escherichia coli, and the corresponding fusion protein was purified and used to immunize rabbits. α-LG affinity-purified antibodies gave the same chromosomal staining pattern on both human HEp-2 cells and Drosophila 0-2 h embryos in all stages of the cell cycle as was initially observed with the human serum (Fig. 1 B). We subsequently isolated additional exons from this Drosophila gene (see below) and used a different domain of the protein to raise a second polyclonal antiserum in rat (designated α-KZ). The α-KZ antiserum reproduced the staining pattern observed with the human autoimmune serum and the α-LG antibody, that is, nuclear staining during interphase (not shown) and staining of condensed chromosomes during mitosis (Fig. 1 C).
Attempts to clone the entire genomic region corresponding to the chromosome-associated protein gene were unsuccessful most likely because of its repetitive structure. The isolation of cDNAs was similarly difficult due in part to the repetitive structure of the gene but also because of the predicted large size of the corresponding mRNA (see below). Nonetheless, cDNAs mapping to several discrete regions of the gene, encoding a total of 1,608 amino acids, were isolated and characterized (Fig. 2). The partial cDNAs, designated KZ, NB, and JT, were named according to the libraries in which they were found. See Fig. 2 legend for the details of cloning. All of the cDNA clones and genomic phage clones 1-5 map to cytological position 62C1-2 in a region known to contain only a single gene.
Notably, every open reading frame (ORF) identified from the chromosome-associated protein gene shows significant similarity to vertebrate titins (Fig. 2 B). Using the conceptual translation of the KZ cDNA to do a BLAST search, the two proteins with greatest similarity are chicken skeletal titin (P = 2.7e−99) and human cardiac titin (P = 1.2e−80). The ORF within the unprocessed NB cDNA also shows significant similarity to vertebrate titins. An alignment between the ORFs derived from the KZ and NB cDNAs and the chicken skeletal and human cardiac titins is shown in Fig. 2 B. In the region of overlap, the ORF encoded by the KZ cDNA shows 28.6% identity/58.3% similarity to chicken skeletal titin, and 27.4% identity/56.8% similarity to human cardiac titin. The ORF encoded by the NB cDNA shows 18.4% identity/48.9% similarity to chicken skeletal titin, and 17.2% identity/47.7% similarity to human cardiac titin in the region of overlap. The sequence conservation among the ORFs from the LG and JT clones and vertebrate titins is not as great; however, in these clones, the frequency of P, E, V, and K residues (63% for LG, 56.4% for JT) strongly suggests that these ORFs correspond to the elastic PEVK domain of vertebrate titin, which is 70% P, E, V, K (Fig. 2 C). Thus, starting with a human autoimmune scleroderma serum, we have cloned a Drosophila gene encoding a nuclear protein that localizes to chromosomes and is homologous to vertebrate titins.
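The identity/similarity percentages quoted above come from pairwise alignments; as an illustration of the arithmetic only, such figures can be computed as below. The similarity groups and the toy sequences are our own assumptions and do not reproduce the published titin alignments of Fig. 2 B.

```python
# Toy computation of % identity and % similarity over an aligned region.
# Gapped positions are excluded; "similarity" here counts conservative
# substitutions within the (illustrative) amino acid groups below.

SIMILAR = [set("ILVM"), set("FYW"), set("KRH"), set("DE"),
           set("ST"), set("NQ"), set("AG")]

def identity_similarity(a, b):
    """a, b: equal-length aligned sequences with '-' for gaps."""
    ident = simil = aligned = 0
    for x, y in zip(a, b):
        if x == "-" or y == "-":
            continue                      # gaps excluded from the region
        aligned += 1
        if x == y:
            ident += 1
            simil += 1
        elif any(x in g and y in g for g in SIMILAR):
            simil += 1                    # conservative substitution
    return 100 * ident / aligned, 100 * simil / aligned

pct_id, pct_sim = identity_similarity("PEVKT-LDRLQ", "PDIKTALERLQ")
print(f"{pct_id:.1f}% identity / {pct_sim:.1f}% similarity")
```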
D-Titin in Striated Muscles and Their Precursors
To determine whether the gene that encodes the nuclear, chromosome-associated form of titin also encodes the muscle form of titin, we examined both transcript and protein accumulation in embryos, and determined the subcellular localization of the protein in muscle. Analysis of RNA expression by in situ hybridization to whole-mount Drosophila embryos revealed RNA accumulation as early as the germ band extended stage in both somatic and visceral muscle precursors (Fig. 3 A). Protein was initially detected during late stage 11 in the precursors of both somatic and visceral muscles (Fig. 3 A′), before myoblast fusion (stage 13; Hartenstein, 1993). This early accumulation of protein in Drosophila muscle precursors parallels vertebrate titin accumulation in early myoblasts (Colley et al., 1990). Expression of both RNA and protein in all visceral and somatic muscles persisted throughout embryogenesis (Fig. 3, B-H′). These muscles include the somatic or body wall muscles, the pharyngeal muscles, and the visceral musculature which surrounds the digestive system. We did not detect RNA or protein in embryonic cardiac muscle or cardiac muscle precursors.
To determine if the protein localizes to specific regions in the sarcomere, we immunostained adult thoracic muscle with antibodies directed against two different domains of the protein (α-KZ and α-LG; Fig. 2) and with the original human autoimmune scleroderma serum. The α-KZ antiserum stained the Z-disks of each sarcomere, which can be identified as the phase-dark bands on myofibrils (Fig. 4 A). A double-stained image of a myofibril stained with α-KZ and Texas red-phalloidin, which stains the filamentous actin of the I-band, supports this localization (Fig. 4 B). Double staining with the α-KZ antiserum and either the human autoimmune scleroderma serum (Fig. 4 C) or the α-LG affinity-purified antibodies (Fig. 4 D) also revealed Z-disk staining. Both the α-LG antibodies and the scleroderma serum also stained the M-line, suggesting potential cross-reactivity to other antigens. The scleroderma serum, but not the α-LG antibodies, also stained along the length of the myofibril (Fig. 4 C), suggesting the presence of additional, nontitin antibodies in the serum. Two antibodies against vertebrate titin recognized epitopes on Drosophila myofibrils: serum from a patient with myasthenia gravis, which recognizes the major immunogenic region (MIR) epitope in the I-band near the I/A-band junction (Fig. 4 E; Gautel et al., 1993), and anti-Zr5/Zr6, a polyclonal antiserum that was raised to the expressed α-actinin-binding Z-repeat motifs Zr5/Zr6 (Fig. 4 F; Sorimachi et al., 1997). The nearly complete overlap of the signals with α-KZ and the MIR serum suggests that the resolution of confocal microscopy was insufficient to allow visual separation of Z-disk staining from I/A-band staining in nonstretched Drosophila myofibrils. We also found that the α-KZ and α-LG antibodies stained the Z-disks of visceral muscles from third instar larvae (Fig. 4 G). Drosophila visceral muscle, unlike vertebrate smooth muscle, is striated.
We have named the gene isolated with the human autoimmune scleroderma serum D-Titin for Drosophila Titin. This name is based on the high level of similarity to vertebrate titins, the expression pattern of this gene during embryogenesis (Fig. 3), the localization of two different domains of the protein to the Z-disks in sarcomeres by immunofluorescence (Fig. 4), and the size of the protein on immunoblots (Fig. 5; see below).
D-TITIN Migrates in the Megadalton Size Range
Vertebrate muscle titin isoforms range in molecular mass from 2,993 to 3,700 kD (Labeit and Kolmerer, 1995; Kolmerer et al., 1996; Sorimachi et al., 1997). Because the mini-titins identified in Drosophila are smaller (500-1,200 kD; Ayme-Southgate et al., 1991; Fyrberg et al., 1992; Lakey et al., 1993), they are unlikely to span a half-sarcomere from the Z-disk to the M-line. Mutational analysis further suggests that these proteins do not provide the elasticity and proposed scaffolding functions of vertebrate titin. Our data show that NH2-terminal regions of D-TITIN localize to the Z-disk, that D-TITIN has significant homology to vertebrate titins, and that D-TITIN is expressed early and continuously in striated muscles in Drosophila. These characteristics make D-TITIN a good candidate for the Drosophila homologue of vertebrate muscle titin. If it is indeed the homologue, D-TITIN should be in the 2-4-MD size range. To test this prediction, total protein extracts from 8-24 h embryos were prepared (in muscle precursors, D-TITIN is first detected at ~7 h by immunostaining of embryos), proteins were separated on denaturing polyacrylamide gradient gels (2.5-7.5%), and were transferred to nitrocellulose filters. Immunoblots incubated with both α-LG and α-KZ detected a discrete band in the megadalton size range, consistent with vertebrate titin (Fig. 5 a, lanes 2 and 4). α-LG and α-KZ preimmune sera revealed no cross-reacting polypeptides (Fig. 5 a, lanes 3 and 5). Thus, D-TITIN is likely to be the Drosophila homologue of vertebrate sarcomeric titin based on size, sequence similarity, developmental expression and subcellular localization.

Figure 1. Human scleroderma serum identifies a chromosome-associated protein in human epithelial cells and Drosophila early embryos. (A) Chromosomal staining pattern recognized by the human autoimmune serum on HEp-2 cells (left panels) and Drosophila early embryos (right panels). HEp-2 cells and Drosophila 0-2 h embryos were double-stained with the scleroderma serum (green) and propidium iodide to detect DNA (red). The merged image is on the right (yellow in region of overlap). (B) Chromosomal staining pattern on HEp-2 cells (left panels) and Drosophila 0-2 h embryos (right panels) recognized by an affinity-purified polyclonal antibody (α-LG) raised against the PEVK-rich repeats of the Drosophila protein identified by expression cloning with the human autoimmune scleroderma serum. Top and bottom panels show metaphase and anaphase nuclei, respectively, stained with α-LG (green) and propidium iodide (red). The merged image is on the right (yellow). (C) Immunofluorescence of HEp-2 cells (left panels) and Drosophila 0-2 h embryos (right panels) using a polyclonal serum (α-KZ) raised against an NH2-terminal peptide encoded by the KZ cDNA (green) and a DNA dye, propidium iodide (red). The merged image is on the right (yellow). Bar, 5 μm.
To determine the size of chromosome-associated D-TITIN in nonmuscle cells, total protein extracts were prepared from both 0-2 h Drosophila embryos (myogenesis does not begin until several hours later) and from HeLa cells (epithelial cells). In the 0-2 h embryonic extracts, we detected a discrete high molecular mass polypeptide of identical size on immunoblots with both α-LG (Fig. 5 b, lane 2) and α-KZ (Fig. 5 b, lane 4). No cross-reacting polypeptides were detected with either α-LG or α-KZ preimmune sera (Fig. 5 b, lanes 3 and 5). Using total cell extracts from HeLa cells, we also detected a megadalton polypeptide with both α-LG and α-KZ antisera (Fig. 5 c, lanes 2 and 4), with no staining with the preimmune sera (Fig. 5 c, lanes 3 and 5).

Figure 2 (legend). The genomic phage clones were either isolated directly (phage clone 5) using the LG genomic DNA expression clone as a probe, or were isolated using DNA flanking two nearby P-element insertions. Phage clones 1-4 and 6-8 were isolated with DNA flanking the v(3)ET1 and v(3)ET2 insertions, respectively. Three different D-Titin cDNA fragments were isolated from multiple independent screens of seven available cDNA libraries. The 5′ KZ cDNA was isolated from a 9-12 h embryonic cDNA library (Zinn et al., 1988). We infer that the KZ cDNA encodes an NH2 terminus based on the presence of a putative initiator methionine codon followed by an ORF encoding 882 AA. The ORF is flanked at the 5′ end by 389 nt of noncoding sequence. The NB cDNA was isolated from a 12-24 h embryonic cDNA library (Brown and Kafatos, 1988). Within the unprocessed NB cDNA, there is a 1-kb ORF flanked at its 5′ end by a 3′ splice acceptor site and at its 3′ end by a 5′ splice donor site (Mount, 1982; Mount et al., 1992). Several small (≤312 nt) cDNAs were isolated from a 0-24 h embryonic cDNA library (Tamkun et al., 1991). The largest cDNA isolated from the Tamkun library is indicated as the JT cDNA. Multiple unsuccessful attempts were made to connect the genomic DNA from phage clone 5 to the surrounding phage containing DNA from this region. Nonetheless, all of the genomic phage clones (1-7) and all of the cDNA clones colocalize to the same site on polytene chromosomes from wild-type larvae, cytological region 62C1-2. Furthermore, genomic phage clones 1-5 and all the D-Titin cDNAs map to an interval for which only a single complementation group has been identified, based on genomic Southern mapping and in situ hybridization to polytene chromosomes from larvae carrying local deficiencies. An asterisk indicates genomic fragments that revealed somatic and visceral muscle RNA accumulation in whole-mount embryos by in situ hybridization. (B) Protein sequence alignment among the corresponding ORFs from two D-Titin cDNAs (KZ and NB), chicken skeletal titin and human cardiac titin. Identities among all three proteins are indicated by an asterisk and conserved residues shared among all three proteins are indicated by a period. (C) Sequence of the PEVK-rich ORF originally isolated with the scleroderma autoimmune serum (LG clone) and the sequence from the largest cDNA isolated from the Tamkun library (JT cDNA). 63% of the residues in the LG clone are either proline (P), glutamic acid (E), valine (V), or lysine (K). 56.4% of the residues encoded by the JT cDNA are either P, E, V, or K. The PEVK-rich domain of vertebrate titin, which provides muscle elasticity, is ~70% P, E, V, K. These sequence data are available from GenBank/EMBL/DDBJ under accession numbers AF045775, AF045776, AF045777, and AF045778.
Thus, antibodies to D-TITIN detected a very high molecular mass polypeptide in nonmuscle cells from Drosophila and in human epithelial cells. Since, by immunofluorescence, the only detectable staining with these antibodies on Drosophila 0-2 h embryos and HEp-2 cells is chromosomal, we concluded that the chromosomal form of D-TITIN migrates in the megadalton size range and is approximately as large as the muscle form.
Antibodies to Vertebrate Muscle Titin Stain Human Chromosomes
Given that Drosophila TITIN localized to condensed Drosophila mitotic chromosomes and that antibodies directed against this protein also stained human chromosomes, we were curious whether antibodies to vertebrate muscle titin also stained condensed chromosomes. We used a panel of eight antibodies directed against different epitopes of vertebrate titin to immunostain HEp-2 cells. Six of the eight antibodies directed against vertebrate titin stained the condensed chromosomes in a pattern indistinguishable from that observed with the original scleroderma serum and the antibodies to the D-TITIN protein (Fig. 6, A and B). The antibodies that gave chromosomal localization include three mouse monoclonals, two of which recognize distinct epitopes in the A-band (BD6 and CE12; Whiting et al., 1989) and one of which recognizes the PEVK domain in the I-band (9D10; data not shown; Wang et al., 1991). We also observed chromosomal staining with two rabbit polyclonal antibodies to vertebrate titin: N2A, which recognizes an I-band epitope in skeletal titin, and A168, which recognizes an M-line epitope (Linke et al., 1996). Finally, the MIR human autoimmune serum, which recognizes an I/A-band epitope, also stained condensed mitotic chromosomes of HEp-2 cells, although additional staining of the mitotic apparatus was visible with this serum (Fig. 6 A). The two vertebrate titin antibodies that did not recognize titin on HEp-2 chromosomes, anti-Zr5/Zr6 and T12, are directed against NH2-terminal regions of titin that either map to the Z-disk and bind to α-actinin (anti-Zr5/Zr6; Sorimachi et al., 1997) or map to the Z-disk/I-band junction (T12; Fürst et al., 1988). The NH2-terminal region of the D-Titin isoform encoded by the KZ cDNA does not contain regions homologous to the α-actinin-binding regions of vertebrate titin. It is likely that cDNAs encoding the NH2 terminus of the muscle D-TITIN isoform will reveal homologies to the α-actinin-binding regions since (a) the COOH-terminal sequence of α-actinin from Drosophila is highly homologous to the COOH terminus of human α-actinin (Fyrberg et al., 1990) and (b) the anti-Zr5/Zr6 antiserum stains Drosophila myofibrils but not chromosomes. Thus, titin localizes to chromosomes in both Drosophila embryos and human cells, although the chromosomal and muscle forms of titin may vary in their NH2 termini.
Proposed Function for Titin in Chromosome Structure
The uniform distribution of titin along condensed chromosomes suggests a structural role for titin in chromosome condensation, a role similar to the one titin plays as a scaffolding element in the sarcomeres (Trinick, 1994, 1996). Chromosome condensation during mitosis is essential for proper segregation and for compacting chromosomes so that they are no longer at the cleavage furrow during cytokinesis (for review see Hirano, 1995; Koshland and Strunnikov, 1996). Chromosome condensation is thought to occur by a deterministic process based on the fixed length and banding patterns of individual chromosomes in a given cell type, the invariant position of specific sequences within a chromosome, and the fixed axial diameter of mitotic chromosomes (Koshland and Strunnikov, 1996). The invariant axial diameter of condensed mitotic chromosomes suggests the involvement of a protein that functions in part as a "molecular ruler", a function that has already been ascribed to titin in muscles (Trinick, 1994, 1996). Thus, we can envision the chromosomal form of titin functioning in the assembly of the higher-order structure observed in condensed mitotic chromosomes, perhaps determining the length and/or axial diameter of condensed chromosomes. Titin is the elastic component of sarcomeres where it acts as a molecular spring that prevents sarcomere disruption when muscles are overstretched. Likewise, the chromosomal form of titin could provide elasticity to chromosomes and resistance to chromosome breakage during mitosis. The elastic properties of purified titin (Kellermayer et al., 1997; Rief et al., 1997; Tskhovrebova et al., 1997) correspond well to the recently described elastic properties of chromosomes in living cells (Houchmandzadeh et al., 1997). Studies on vertebrate myofibrils have shown that the PEVK domain and the Ig/FN3 repeats constitute a two-spring system acting in series to confer reversible extensibility to titin (Linke et al., 1996; Trombitas et al., 1998). Under physiological stretching conditions, the Ig/FN3 domains straighten and the PEVK domain reversibly unfolds. Under more extreme nonphysiological stretching conditions, the Ig and FN3 domains also unfold. However, refolding of the Ig and FN3 repeats is slow and occurs only in the absence of stretch force. Similarly, metaphase chromosomes from living cells also show two levels of extensibility (Houchmandzadeh et al., 1997). Metaphase chromosomes from cultured newt lung cells can be stretched up to 10 times their normal length and return to their native shape. Further nonphysiological extensions of chromosomes from 10- to 100-fold are irreversible. The discovery of titin on chromosomes integrates the mechanical properties of muscle titin with the elastic properties of eukaryotic chromosomes.

Figure 4 (legend, continued). ... and with the anti-Zr5/Zr6 polyclonal serum (red) that was raised against the rabbit titin Z-repeats that bind to α-actinin; the lower panel shows the merged image. (G) Third instar larval gut muscle stained with the α-KZ antiserum (green). The fluorescent staining overlaps the phase-dark bands (not shown) that correspond to the Z-disks of the gut muscles. No staining was observed with the α-KZ preimmune serum nor with any of the secondary antibodies. However, a regular pattern of accumulation on myofibrils was detected with the LG preimmune serum. Bar, 3 μm.

Figure 5. D-TITIN migrates in the megadalton size range and is detected in nonmuscle cells. Total protein extracts from (a) Drosophila 8-24 h embryos (after myogenesis), (b) Drosophila 0-2 h embryos (several hours before myogenesis), and (c) HeLa cells were separated on SDS-PAGE 2.5-7.5% gradient gels. Lanes 1, Coomassie blue-stained SDS-gels. Lanes 2-5, immunoblots incubated with α-LG, LG preimmune serum, α-KZ, and KZ preimmune serum, respectively. Lane 6 in a is a shorter exposure of an immunoblot from 8-24 h embryos incubated with the α-KZ antiserum that reveals the ladder-like array of titin degradation products. The protein size markers (cross-linked phosphorylase b; Sigma Chemical Co.) were visualized by Coomassie blue staining.
Does titin remain associated with chromosomes during interphase? Although we detected titin in the nucleus during interphase with both the Drosophila titin antibodies and the antibodies directed against vertebrate titin, the resolution of confocal microscopy does not allow us to directly ask if titin remains bound to chromosomes. However, we have looked at the accumulation of D-TITIN on salivary gland polytene chromosomes from Drosophila third instar larvae using the α-KZ antiserum. Based on gene activity and chromatin ultrastructure, polytene chromosomes are functionally similar to diploid interphase chromosomes (Tissières et al., 1974; Elgin and Boyd, 1975; Bonner and Pardue, 1976; Woodcock et al., 1976). Low level D-TITIN staining was observed throughout the chromosomes with several discrete sites of higher accumulation (data not shown). These results suggest that titin remains associated with relatively decondensed interphase chromosomes, consistent with the chromosome core structure being templated during interphase (Andreasson et al., 1997). Measurements made at different stages in the cell cycle indicate that chromosome flexibility increases during the transition from interphase to metaphase (Houchmandzadeh et al., 1997). Regulated phosphorylation of titin may control the assembly of interphase chromosomes into the higher-order structure of metaphase chromosomes and indirectly alter chromosome flexibility.
Chromosomal and Muscle Forms of Titin
The chromosomal and muscle forms of titin are unlikely to be identical. Although both forms of D-TITIN appeared to comigrate, the resolution of the gradient gels used to detect both the muscle and chromosomal forms of D-TITIN may be insufficient to resolve the respective molecular mass differences. The most 5′ D-Titin cDNA isolated in this work, which encodes an NH2 terminus, does not contain the most 5′ sequences found in vertebrate muscle titin, the so-called Z-repeats that bind to α-actinin in the Z-disk (Turnacioglu et al., 1996; Ohtsuka et al., 1997a,b; Sorimachi et al., 1997). Antibodies directed against the α-actinin-binding region of vertebrate titin did not recognize titin on human chromosomes, although antibodies directed to more COOH-terminal epitopes were reactive (Fig. 6). Furthermore, the most 5′ cDNA contains a unique 81-amino acid sequence at the NH2 terminus. Titin mRNA and protein have been detected in BHK cells (Jäckel et al., 1997). Moreover, in a subline derived from the BHK cells, the titin gene contained deletions in the Z-disk region. Titin mRNA was still detected in the mutant cell line. Altogether, these results suggest that muscle titin and chromosomal titin vary at least in their most NH2-terminal regions. Our results with D-TITIN suggest that the muscle and chromosomal forms of titin are encoded by splice variants of the same gene, and that we have not cloned the exons encoding the most NH2-terminal regions of muscle titin. Polytene chromosome in situ hybridization and genomic Southern analysis revealed that D-Titin is a single-copy gene (unpublished results). Determination of whether vertebrate chromosomal and muscle titins are also splice variants of the same gene or, instead, are encoded by two closely related genes awaits further investigation of vertebrate titin.

Figure 6. Titin localizes to condensed mitotic chromosomes. HEp-2 cells double-stained with antibodies to vertebrate titin (green) and propidium iodide (red). The merged image is on the right (yellow in region of overlap). From top to bottom: N2A, MIR, BD6, CE12, and A168. Antibodies directed against the most NH2-terminal regions of vertebrate titin, anti-Zr5/Zr6 and T12, did not detect titin on chromosomes. Antibodies directed against the I-band regions of titin, N2A and 9D10 (not shown), showed weak chromosomal staining. The MIR serum (I/A-band junction) showed stronger chromosomal staining, as well as staining of the mitotic apparatus. Titin antibodies directed against A-band epitopes (BD6 and CE12) and the M-line epitope (A168) showed very strong staining of condensed chromosomes. Bar, 5 μm.
The previously known components of condensed chromatin include DNA, histones, topoisomerase II, the SMC family of proteins (for review see Chuang et al., 1994; Earnshaw and Mackay, 1994; Hirano and Mitchison, 1994; Peterson, 1994; Hirano, 1995; Saitoh et al., 1995; Strunnikov et al., 1995; Holt and May, 1996; Koshland and Strunnikov, 1996; Warburton and Earnshaw, 1997), the condensins, three recently identified proteins that form a complex with SMC family members, and the cohesins, proteins that link condensation and sister chromatid cohesion (Guacci et al., 1997; Michaelis et al., 1997). Topo II and SMC were identified as the two most abundant chromosomal "scaffold" proteins, which, by definition, comprise an insoluble fraction purified from isolated mitotic chromosomes. These scaffold proteins are proposed to determine the characteristic shape of mitotic chromosomes. Both genetic and in vitro depletion studies confirm that topo II and SMC proteins are indeed required for chromosome condensation and subsequent chromosome segregation; whether their roles in chromosome condensation are structural or entirely enzymatic, however, remains to be determined (see previously cited reviews and Kimura and Hirano, 1997; Sutani and Yanagida, 1997). If titin is part of the chromosomal scaffold, why was titin not identified in the initial biochemical analyses of scaffold proteins? The simplest explanation is based on the size of titin. Almost all of the protein gels used to identify topo II, SMC and other scaffold proteins were 12.5% polyacrylamide gels and did not resolve proteins of molecular mass >200 kD. Indeed, in almost every published photograph of a protein gel of purified scaffold components, there is a high molecular mass component that fails to enter the gel (Adolph et al., 1977; Paulson and Laemmli, 1977; Laemmli et al., 1978; Lewis and Laemmli, 1982; Earnshaw and Laemmli, 1983). We demonstrated that chromosomally associated titin from HEp-2 cells and early Drosophila embryos migrates in the megadalton size range. Although a protein of this size can be resolved in 2.5-7.5% gradient gels, titin would not enter the 12.5% gels typically used in chromosome scaffold studies.
Titin as an Autoantigen
As an alternative to a biochemical approach, autoimmune sera have been successfully used as probes in the isolation of novel chromosomal proteins and for expression cloning of the corresponding genes (for review see Earnshaw and Rattner, 1991; Tan, 1989, 1991; Fritzler, 1997). Even though the prevalence of autoantibodies against nuclear components appears to be higher, the spectrum of autoantibodies identified in sera from patients with autoimmune diseases is much broader, and includes numerous cytoplasmic antigens as well as several extracellular matrix proteins (Fritzler, 1997). Many autoantigens are proteins very well conserved throughout evolution, ranging over species as distant as man, fish, amphibia, Drosophila, yeast, and plants (Snyder and Davis, 1988; Tan et al., 1987; Mole-Bajer et al., 1990; Brunet et al., 1993; Shibata et al., 1993; Rendon et al., 1994; Bejarano and Valdivia, 1996). The work presented here is the first to identify titin autoantibodies in scleroderma sera, the first to reveal titin as a chromosomal component and, to our knowledge, the first successful cloning of a Drosophila gene using a human autoimmune serum.
The identification of autoantibodies against titin until now has been restricted to a subset of patients with myasthenia gravis (MG) who have also developed thymus neoplasia (Aarli et al., 1990; Gautel et al., 1993). However, the stimulus for the autoimmune response to titin may be due to molecular mimicry. This immunoreactivity is directed exclusively to a single epitope of titin, the MIR epitope. This epitope is shared with neurofilaments that are overexpressed in these thymomas (Marx et al., 1996). Since titin MIR autoantibodies can be detected in 97% of sera from MG-thymoma patients, the MIR epitope of titin is a sensitive marker for evaluating the presence of thymoma in MG patients. The isolation of the D-Titin gene in Drosophila using a scleroderma autoimmune serum raises the inevitable question of whether titin may represent a new, unidentified autoantigen in scleroderma.
D-Titin represents the third Drosophila member of the titin gene family. The other two family members, both of which are referred to as mini-titins (Vibert et al., 1996), include KETTIN and PROJECTIN. KETTIN is a 500-kD family member with low homology to vertebrate titin. KETTIN has been proposed to be one of the structural components of Z-disks; however, mutations in the corresponding gene have not yet been described. PROJECTIN is a highly homologous ~1,200-kD titin family member most closely related to TWITCHIN (Ayme-Southgate et al., 1991; Fyrberg et al., 1992), a Caenorhabditis elegans mini-titin that binds to myosin filaments in body wall muscles. TWITCHIN has the Ig and FN3 repeats and a myosin light chain kinase domain near the COOH terminus, but does not have an obvious PEVK region (Benian et al., 1989; Benian et al., 1996). TWITCHIN is thought to regulate myosin activity because twitchin (unc-22) mutants cannot develop or sustain muscle contractions (Waterston et al., 1980; Moerman et al., 1988). Furthermore, mutations in twitchin can be suppressed by mutations in the myosin heavy chain gene. Lethal alleles of Drosophila PROJECTIN exist as mutations in the bent locus. Homozygous bent mutant animals die as late embryos but, unlike twitchin mutant worms, have apparently normal muscle contractions (Fyrberg et al., 1992; Ayme-Southgate et al., 1995), suggesting normal sarcomere organization. The identification of a true titin homologue in a genetically tractable organism will greatly facilitate the analysis of titin function both in sarcomeres and in chromosome structure and flexibility. Indeed, we have mapped the D-Titin gene to a cytological interval known to contain only a single gene. A thorough characterization of D-Titin mutations is currently underway. | 2014-10-01T00:00:00.000Z | 1998-04-20T00:00:00.000 | {
"year": 1998,
"sha1": "208886d27745bcc8340c15fc3146498b4596ae42",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/141/2/321/1278496/33020.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "208886d27745bcc8340c15fc3146498b4596ae42",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
234359163 | pes2o/s2orc | v3-fos-license | Brodalumab in the treatment of psoriatic arthritis – the latest reports
Dear Editor, The quest to find an optimal and effective treatment for psoriatic arthritis (PsA) is expanding research to new molecules and inhibiting further pathways in the pathogenesis of inflammation in PsA.
In this report, I would like to draw your attention to brodalumab, a fully human IgG2 monoclonal antibody, which binds to the human interleukin-17 receptor A (IL-17RA). Binding of this receptor blocks the biological activity of the pro-inflammatory cytokines IL-17A, IL-17F, the heterodimer IL-17A/F, IL-17C and IL-17E [1]. It is worth being aware that IL-17A, IL-17F and the IL-17A/F heterodimer have a multidirectional effect; they induce pro-inflammatory mediators such as IL-6, GROα and G-CSF from epithelial cells and fibroblasts, which affect the ongoing state of inflammatory tissues [2]. Brodalumab is used to treat moderate to severe plaque psoriasis (USA, Canada). Currently, brodalumab is also used to treat PsA but only in Japan [3].
It is worth emphasizing that brodalumab was effective in phase II clinical trials in PsA patients. Brodalumab has been used in a randomized, double-blind, and placebo-controlled trial. In this clinical trial, doses of 140 mg and 280 mg of brodalumab were administered once weekly for 12 weeks. Administration of brodalumab was associated with a significantly better clinical response compared to placebo (American College of Rheumatology 20 [ACR20] as the primary endpoint) [4]. The encouraging results of the second phase trials have prompted clinicians to conduct larger clinical trials. It is worth getting acquainted with the latest phase III study, the results of which were published at the end of October 2020.
What are the results of phase III trials of brodalumab in PsA? AMVISION-1 and AMVISION-2 trials

At the end of October 2020, the results of phase III clinical trials on the use of brodalumab in PsA were published [5]. These were double-blind, randomized, and placebo-controlled studies. The studies involved adult patients with active PsA who had been ill for at least 6 months. These patients did not tolerate traditional treatment or the treatment was insufficient. Additional inclusion criteria were having at least three painful and three swollen joints as well as active psoriatic lesions on the skin.
Patients (both in AMVISION-1 and AMVISION-2) were divided into three groups in a 1 : 1 : 1 ratio and received subcutaneous brodalumab 140 mg, brodalumab 210 mg and placebo, respectively [5]. This intervention was performed at week 0 and week 1, and then every two weeks until week 24. It should be noted that the primary endpoint was achievement of ACR20 at week 16 of treatment.
At week 16, it was noted that the primary endpoint was achieved by 45.8% of patients in the brodalumab 140 mg group, 47.9% of patients in the brodalumab 210 mg group and 20.9% of patients in the placebo group. It should be noted that similar results were noted at week 24 of this study. Patients receiving brodalumab achieved a greater percentage of ACR 50/70 compared to placebo. Moreover, patients receiving brodalumab had greater improvements in symptoms such as dactylitis and enthesitis. The study summary demonstrated that brodalumab had a good safety profile and the rate of serious adverse events was low [5].
Brodalumab compared with other biologic therapies for psoriasis
In the context of the effectiveness of brodalumab in PsA treatment, it is worth analyzing the previous studies on the efficacy of brodalumab in comparison to other biological therapies used in the treatment of moderate to severe psoriasis. The effectiveness of individual biological drugs was compared based on the results in the PASI scale (Psoriasis Area and Severity Index).
The most effective preparations turned out to be brodalumab and ixekizumab. Brodalumab was used at a dose of 210 mg every two weeks, and ixekizumab at a dose of 80 mg, also every two weeks. Brodalumab at a dose of 210 mg was significantly more effective than drugs such as adalimumab (40 mg every two weeks), apremilast (30 mg twice daily), brodalumab 140 mg administered every 2 weeks, etanercept (50 mg weekly), infliximab (5 mg/kg), secukinumab (300 mg) and ustekinumab (45 mg or 90 mg; the dose depended on body weight). Brodalumab was more effective in PASI 100, 90, 75 and 50 scores. Studies have shown that 210 mg brodalumab administered every two weeks is more effective in treating moderate to severe psoriasis than other typical biological therapies [6,7].
Moreover, comparative studies are needed regarding the efficacy of brodalumab and other biological therapies in treating PsA.
In conclusion, brodalumab is already used successfully in plaque psoriasis, making it a good therapeutic option for patients with both skin and joint symptoms [8]. Its action is therefore multidirectional: it improves the clinical condition of the skin, joints and tendon attachments. Certainly, there will be more clinical trials with brodalumab in the future, as it is a promising therapeutic option for patients with psoriatic arthritis. It is worth monitoring the progress of clinical trials on brodalumab, as this drug offers hope for effective treatment of both psoriasis and psoriatic arthritis. Brodalumab, as a multi-directional drug, can find a place in dermatological and rheumatological therapy. | 2021-05-12T05:17:42.258Z | 2021-04-27T00:00:00.000 | {
The author declares no conflict of interest. | 2021-05-12T05:17:42.258Z | 2021-04-27T00:00:00.000 | {
"year": 2021,
"sha1": "fc28bbb00fb25755ff24c55240913e70ac59f1c7",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-18/pdf-43878-10?filename=Brodalumab.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fc28bbb00fb25755ff24c55240913e70ac59f1c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260907592 | pes2o/s2orc | v3-fos-license | An Investigation of the Effect of Propylene Gas Flame on Emissions and Temperature Distribution of a Preheated Metal Plate
This study investigates the effect of the propylene gas flame on the emissions and temperature distribution of the metal plate during the preheating process. Experimental tests were carried out using a preheating system with a cylindrical chamber for emissions measurement and a metal plate placed near the torch head. Emissions were measured using a gas analyzer, while the temperature distribution of the metal plate was measured using an infrared thermal camera and thermocouples. The findings reveal that the emissions decrease as the equivalence ratio increases toward a ratio of 1. However, when the appropriate equivalence ratio is reached, NOx emissions will rise and then gradually fall. The peak temperature of propane fuel is higher than that of the other fuels because of its concentrated flame. Propane fuel can achieve a peak temperature of 347.65 °C, surpassing both propylene fuel (275.45 °C) and acetylene fuel (335.45 °C). Using a propylene gas flame results in a reduction in emissions of carbon monoxide and nitrogen oxides compared to a propane flame. However, acetylene fuel produces the most NOx emissions, reaching 450.79 ppm for the experimental conditions. Additionally, the temperature distribution of the preheated metal plate was more uniform with the propylene gas flame, indicating improved heat transfer. However, the peak temperature of the metal plate was slightly lower when using the propylene gas flame.
Introduction
Preheating is a technique that involves providing heat to a metal plate and is frequently used in industrial processes, including preheating metal before welding and glass manufacturing [1,2]. The use of gas flames for the heating and processing of metals is a widely adopted industrial practice. Propylene gas, in particular, is known for its high energy density, which makes it a popular choice for many industrial applications. However, the use of propylene gas flames in metal processing can have significant environmental impacts, such as the release of harmful emissions into the atmosphere. To address this issue, there is a growing need to better understand the effects of propylene gas flames on the temperature distribution and emissions when heating metal plates [3]. The ratio of the distance between the burner head and the metal plate to the nozzle diameter, the Reynolds number, and the equivalence ratio are a few factors that have a substantial impact on the heat transfer properties of flame. Out of these factors, the equivalence ratio has a very significant effect on the heat transfer of the flame [4].
Numerous studies have been carried out on the heat transfer properties of different flames using analytical and numerical simulation techniques, including computational fluid dynamics (CFD) [4-6]. Liu et al. [7] and Zhen et al. [8] examined how a premixed hydrogen-liquefied petroleum gas (LPG) flame's heat transmission properties changed with hydrogen concentration. According to their findings, relatively high hydrogen concentrations caused a rise in combustion temperature and NOx production but a decrease in CO emission. Additionally, they evaluated that the LPG-H2 and CH4-H2 mixes showed improved flame heat transfer with the addition of hydrogen. For a particular hydrogen concentration, the CH4-H2 mixture had a higher rate of heat transfer than LPG-H2.
Several studies have investigated the impact of gas flames on emissions and temperature distributions in metal processing. For instance, Kandilli et al. [9] investigated the effect of natural gas flames on the thermal and environmental performance of a metallic honeycomb monolith. The study found that the use of natural gas flames led to significant emissions of CO, NOx, and PM. Another study by Wang et al. [10] investigated the effect of propane gas flames on the thermal and environmental performance of a rotary kiln. The study found that the use of propane gas flames resulted in high emissions of CO and NOx. A study by Zulkefli et al. [11] investigated the effects of LPG flames on the emission of NOx and CO from a stainless-steel plate. The study found that the emission of NOx and CO increased with increasing flame temperature, and the emissions were more significant at the edge of the flame than in the center. Another study by Yao-Yao Wang et al. [12] investigated the impact of preheating on the surface quality and corrosion resistance of 316L stainless steel plates that were cut by laser. The authors conducted experiments where the stainless-steel plates were preheated to different temperatures before being cut with a laser. They then analyzed the surface qualities and corrosion resistances of the plates. The study found that preheating the plates to a specific temperature range resulted in improved surface quality and corrosion resistance. The study conducted by Bader A. Alfarraj et al. [13] investigated the emissions and performance of conventional liquefied petroleum gas (LPG) cookstove burners. The results showed that the emissions of carbon monoxide (CO), nitrogen oxides (NOx), and particulate matter (PM) were found to be higher than the limits set by regulatory agencies. The study also found that the performance of the burners was affected by multiple factors, including the LPG pressure, air-fuel ratio, and burner diameter.
In the context of propylene gas flames, several studies have investigated their impact on emissions and temperature distribution. For example, A.T. Hartlieb et al. [14] investigated the impact of a quartz nozzle on the structure and temperature of a propene flame. Their results indicate that the nozzle can enhance mixing and improve the homogeneity of the flame, leading to a reduction in the required flame temperature. Specifically, the use of the nozzle results in a shift towards fuel-lean combustion, which reduces the temperature in the flame front and promotes complete combustion. The findings suggest that the use of a sampling quartz nozzle could be a viable strategy for controlling the temperature and improving the efficiency of low-pressure propylene (propene) flames. Krishna C. Kalvakala et al. [15] investigated the effects of oxygen enrichment and fuel unsaturation on soot and NOx emissions in different flames, including propene. The study found that increasing the oxygen concentration in the combustion air led to a decrease in soot emissions in propene flames. However, the increase in oxygen concentration also led to an increase in NOx emissions in propene flames. Additionally, the study found that fuel unsaturation, such as in propene, led to higher soot emissions compared to saturated fuels like propane. Overall, the results suggest that the combustion of propene can lead to significant emissions of both soot and NOx, which should be considered in developing effective emission reduction strategies.
While these studies provide insights into the effects of gas flames on metal surfaces [16-20], further research is needed to investigate the specific effects of propylene gas flames on the temperature distribution and emissions of metal plates. Moreover, the impact of the heat transfer characteristics on temperature distribution and thermal efficiency during combustion, with a specific focus on NOx emissions, has been extensively studied to date. However, none of the studies has highlighted the effect of the equivalence ratio on the temperature distribution and NOx emissions.
The current study aims to address this gap by investigating the interaction between propylene gas flames and metal plates and exploring the effects of the flame on temperature distribution and associated emissions. In summary, previous research has investigated the effects of gas flames on metal surfaces, including heat transfer characteristics, emissions, and surface quality [21-25]. However, there is a need for further research to investigate the specific effects of propylene gas flames on the temperature distribution and emissions of flame in heating metal plates, which is the focus of the current study. The findings of this research could contribute to the widespread adoption and use of propylene gas flames.
Experiment Setup
The schematic designs for the exhaust gas measurement system and the experimental setup are shown in Figure 1a,b, respectively. The experimental system consists of 7 components. The feed tanks supply the air and fuel to the torch, and the airflow meters are used to manage the flow rate of the fuel mixture. The torch is employed to burn fuel inside the main chamber, and an exhaust gas chamber is added to maintain the homogeneity of the exhaust gas and enhance the measurement accuracy. The signal from the exhaust gas temperature sensor is analyzed using an exhaust gas analyzer (Horiba MEXA-7100 DEGR). The experiments were performed in a well-ventilated laboratory environment with the torch system placed on a laboratory bench. The gas pressure, flow rate, and torch-to-workpiece distance were adjusted as required. The torch was connected to a regulator, which controlled the pressure of the fuel mixture gas, and it was mounted on a stand to ensure stability during the experiments. The fuel was stored in a feed tank and delivered to the torch system through a flexible hose. The air and fuel pressure were measured using a pressure gauge installed on the regulator, and the gas flow rate was measured using a flow meter installed on the flexible hose. The gas pressure and flow rate were adjusted using the regulator to achieve the desired operating conditions. The tests were conducted under steady-state, near-room-temperature conditions of approximately 27 °C.

The experiment setup and schematic design for the preheating procedure are shown in Figure 1c,d, respectively. The metal plate's total width, length, and thickness were 0.5 m, 0.5 m, and 0.03 m, respectively. The distance (d) between the torch outlet and the metal plate was 0.06 m. The gas torch combined fuel and air to facilitate combustion. After leaving the exits of the gas torch, the mixture of fuel and air was ignited, generating a combustion flame for preheating the metal plate. The operating conditions were optimized to achieve the best performance of the torch system using LPG as fuel. The optimal gas pressure and flow rate were determined based on the statistical analysis of the data. The torch-to-metal plate distance was also optimized for optimal performance. The temperature distribution on the reverse side of the metal plate was measured using a TVS-200EX infrared camera, as shown in Figure 1d. To compensate for the lower sensitivity of the infrared camera, an additional thermocouple sensor connected to a Midi logger 840 was employed for temperature measurements. To measure the temperature distribution of a metal plate during the preheating process, 9 thermocouples were positioned on the rear of the plate in three lines. The upper line's temperature was measured using Ch1, 2, and 3, the middle line's temperature was obtained using Ch4, 5, and 6, and the lower line's temperature was measured using Ch7, 8, and 9. The calibration of the equipment is shown in Table 1.
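As a concrete illustration of how the nine-channel layout can be summarized, the sketch below arranges hypothetical channel readings into the three measurement lines described above and reports simple distribution statistics. The numbers are invented placeholders, not measurements from this experiment.

```python
# Minimal sketch: summarizing the 3x3 thermocouple grid (Ch1-Ch9)
# on the back of the plate. Readings are hypothetical values in degrees C.
import numpy as np

readings = np.array([
    [231.4, 248.9, 230.7],   # upper line:  Ch1, Ch2, Ch3
    [252.1, 275.5, 251.8],   # middle line: Ch4, Ch5, Ch6
    [229.9, 247.2, 228.5],   # lower line:  Ch7, Ch8, Ch9
])

peak = readings.max()                     # peak plate temperature
mean = readings.mean()                    # average over all nine points
spread = readings.max() - readings.min()  # crude uniformity measure
print(f"peak {peak:.1f} C, mean {mean:.1f} C, spread {spread:.1f} C")
```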
The experiment setup and schematic design for the preheating procedure are shown in Figure 1c,d, respectively. The metal plate's total width, length, and thickness were 0.5 m, 0.5 m, and 0.03 m, respectively. The distance (d) between the torch outlet and the metal plate was 0.06 m. The gas torch combined fuel and air to facilitate combustion. After leaving the exits of the gas torch, the mixture of fuel and air was ignited, generating a combustion flame for preheating the metal plate. The operating conditions were optimized to achieve the best performance of the torch system using LPG as fuel. The optimal gas pressure and flow rate were determined based on the statistical analysis of the data. The torch-to-metal plate distance was also optimized for optimal performance. The temperature distribution on the reverse side of the metal plate was measured using a TVS-200EX infrared camera, as shown in Figure 1d. To compensate for the lower sensitivity of the infrared camera, an additional thermocouple sensor connected to a Midi logger 840 was employed for temperature measurements. To measure the temperature distribution of a metal plate during the preheating process, 9 thermocouples were positioned on the rear of the plate in three lines. The upper line's temperature was measured using Ch1, 2, and 3, the middle line's temperature was obtained using Ch4, 5, and 6, and the lower line's temperature was measured using Ch7, 8, and 9. The calibration of the equipment is shown in Table 1. rate, and torch-to-workpiece distance were adjusted as required. The torch was connected to a regulator, which controlled the pressure of the fuel mixture gas, and it was mounted on a stand to ensure stability during the experiments. The fuel was stored in a feed tank and delivered to the torch system through a flexible hose. The air and fuel pressure were measured using a pressure gauge installed on the regulator, and the gas flow rate was measured using a flow meter installed on the flexible hose. The gas pressure and flow rate were adjusted using the regulator to achieve the desired operating conditions. The tests were conducted under steady-state conditions at near-room temperature conditions of approximately 27 °C.
Fuel Properties
Propylene gas, a hydrocarbon gas with the chemical formula C3H6, is a colorless and flammable gas. It has a high energy density and burns cleanly, making it a popular choice for heating, cutting, and welding. Propylene gas has a lower heating value than natural gas, but it can be used as a substitute for natural gas in many applications. Propylene gas has a high flash point and low volatility, which makes it relatively safe to handle and store.
Propane gas, with the chemical formula C3H8 and lacking the carbon double bond of propene, is a hydrocarbon gas that is commonly used as a fuel for heating and powering vehicles. It is a colorless, odorless gas that is typically stored in pressurized tanks as a liquid. Propane gas has a higher vapor pressure than propylene gas, making it easier to store and transport. Additionally, propane gas has a low flammability range and can be safely used in enclosed spaces with adequate ventilation. It produces relatively low emissions of pollutants.
Acetylene gas is a hydrocarbon with the chemical formula C2H2. It has a high energy density and burns with a high-temperature flame, making it suitable for applications that require high heat. However, acetylene gas also has high flammability, which requires special handling and storage precautions. It also has a narrow flammability range and is sensitive to shock and friction. Acetylene gas produces high emissions of pollutants.
In summary, propylene gas, propane gas, and acetylene gas are all useful hydrocarbon fuels with different fuel properties. Propane gas has the highest heating value, propylene gas is a high-energy fuel, and acetylene gas has the highest flame temperature. Detailed information on the properties of these fuels is presented in Table 2.

The air-fuel equivalence ratio is the ratio of the actual air-fuel ratio (AFR) to the stoichiometric air-fuel ratio. An equivalence ratio of 1.0 corresponds to the stoichiometric air-fuel ratio, while rich air-fuel mixtures have an equivalence ratio of <1.0 and lean mixtures have an equivalence ratio of >1.0. There is a direct relationship between the equivalence ratio and the air-fuel ratio (AFR):

$$\lambda = \frac{\mathrm{AFR}_{\mathrm{actual}}}{\mathrm{AFR}_{\mathrm{stoich}}}, \qquad \mathrm{AFR} = \frac{m_{\mathrm{air}}}{m_{\mathrm{fuel}}}$$

where $m_{\mathrm{air}}$ is the mass of air and $m_{\mathrm{fuel}}$ is the mass of fuel.
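To make the definition concrete, the short sketch below computes the stoichiometric AFR of each fuel from its complete-combustion stoichiometry and then evaluates the equivalence ratio for a pair of mass flows. The mass flows in the usage example are hypothetical values, not measurements from this study.

```python
# Minimal sketch: stoichiometric air-fuel ratio (AFR) and air-fuel
# equivalence ratio for the three fuels discussed above, assuming
# complete combustion CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O
# and air with 23.2% oxygen by mass.
M_C, M_H, M_O2 = 12.011, 1.008, 31.998   # molar masses, g/mol
O2_MASS_FRACTION_IN_AIR = 0.232

def stoich_afr(x: int, y: int) -> float:
    """Stoichiometric AFR (mass of air per mass of fuel) for CxHy."""
    mol_o2 = x + y / 4                                  # mol O2 per mol fuel
    mass_air = mol_o2 * M_O2 / O2_MASS_FRACTION_IN_AIR  # g air per mol fuel
    mass_fuel = x * M_C + y * M_H                       # g fuel per mol fuel
    return mass_air / mass_fuel

def equivalence(m_air: float, m_fuel: float, afr_stoich: float) -> float:
    """Air-fuel equivalence ratio: <1 rich, 1 stoichiometric, >1 lean."""
    return (m_air / m_fuel) / afr_stoich

fuels = {"propylene C3H6": (3, 6), "propane C3H8": (3, 8), "acetylene C2H2": (2, 2)}
for name, (x, y) in fuels.items():
    print(f"{name}: stoichiometric AFR = {stoich_afr(x, y):.2f}")

# Hypothetical example: 0.30 g/s of air burned with 0.022 g/s of propylene.
print(f"equivalence ratio = {equivalence(0.30, 0.022, stoich_afr(3, 6)):.2f}")
```

The printed stoichiometric AFRs (roughly 14.8, 15.6, and 13.2) also explain the later observation that acetylene needs less air per unit of fuel than propylene or propane.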
The Effect of the Equivalence Ratio on Exhaust Gas Emissions
The graph presented in Figure 2 provides a visual representation of the relationship between equivalence ratio and total hydrocarbon (THC) emissions. The data indicate that as the equivalence ratio increases toward 1, there is a noticeable reduction in THC emissions. This can be attributed to the increasing availability of oxygen in the combustion chamber, which facilitates the combustion process and promotes the oxidation of unburned hydrocarbon molecules. However, an interesting observation is made when the equivalence ratio surpasses 1. In this scenario, the THC emissions show a slight increase. This phenomenon can be attributed to lean combustion conditions, in which the excess air dilutes and cools the mixture, so the combustion process becomes sluggish, leading to an extended combustion time. These factors contribute to the rise in THC emissions.
Furthermore, it is noteworthy that propylene fuel exhibits higher THC emissions compared to propane and acetylene fuels. This can be attributed to the unique combustion characteristics of propylene. The flame rate of propylene is relatively higher, resulting in a shorter burning time. However, this shorter duration may lead to incomplete combustion, where some hydrocarbon molecules are not fully oxidized. As a consequence, propylene fuel emits a greater amount of THCs. These findings align with the fuel properties discussed in Section 2.2, which highlight the combustion behavior and characteristics of the different fuels.

Figure 3 depicts the effect of the equivalence ratio on CO2 emissions. The findings demonstrate that the carbon dioxide (CO2) emission decreases as the equivalence ratio values increase. The impact of the equivalence ratio on CO2 emissions is relatively minimal compared to other emissions. CO2 is primarily determined by the carbon content in the fuel rather than the equivalence ratio. However, extremely high equivalence ratios can lead to incomplete combustion and increased CO2 emissions. Aside from that, propane emits significantly more CO2 than other gases. The fuel is completely burned, leaving behind only carbon dioxide (CO2) and water. It is evident that propane fuel burns more completely than propylene and acetylene fuel because of its molecular structure and combustion characteristics. Propane gas has a relatively simple chemical structure, which makes it easier to burn completely in the presence of oxygen. Propane gas also has a narrower flammability range, enabling better control and optimization of conditions for complete combustion.
Figure 4 depicts the influence of the equivalence ratio on CO emission. The results demonstrate that lowering the equivalence ratio increases CO emissions. It is understandable that increases in CO emission with a decrease in the equivalence ratio were brought on by a drop in the oxygen concentration. Furthermore, because THC emissions rise with lambda (air/fuel) when the relative air-fuel ratio is larger than 1, the presence of unburned hydrocarbons in the reaction zone slows CO oxidation, as seen in Figure 4. Therefore, CO emissions increase during times of oxygen scarcity, as reflected in the single oxygen atom of the carbon monoxide molecule.

Furthermore, acetylene creates far less CO than propylene and propane fuel because of its unique combustion properties and the stoichiometry of its combustion reaction. The stoichiometric ratio for acetylene combustion is much lower than for propylene and propane. This means that a smaller amount of air is needed to combust a given amount of acetylene relative to the other fuels. Furthermore, the combustion reaction of acetylene is highly exothermic, meaning that it releases a large amount of heat when it reacts with oxygen. This high heat release helps to ensure that complete combustion occurs, reducing the formation of harmful byproducts like CO.
Figure 5 shows the impact of the equivalence ratio on NOx emissions. According to the figure, a drop in the equivalence ratio resulted in a sharp decrease in NOx in most of these experiments. The main contributor to this drop in NOx generation is the reduced oxygen concentration. Additionally, at fuel-lean conditions, the availability of oxygen is relatively higher compared to the fuel, resulting in lower peak flame temperatures. This leads to a reduction in the formation of NOx, as lower temperatures inhibit the reaction between nitrogen and oxygen. Conversely, under fuel-rich conditions, the excess fuel generates higher peak flame temperatures, thereby promoting the formation of NOx.

Furthermore, it is noteworthy that acetylene exhibits the highest NOx emission value. This can be attributed to the significant heat generated during acetylene combustion, resulting in an increase in chamber temperature, which in turn promotes the formation of NOx emissions. This is predictable because the N2 bond is stronger than the O2 bond and requires more energy to break. One factor that contributes to the high NOx emissions of acetylene is its combustion temperature. Acetylene has a relatively low ignition temperature and a high flame temperature, which leads to rapid combustion and high temperatures.
Moreover, acetylene has a triple bond between its carbon atoms, which makes it highly reactive, leading to the creation of an oxygen-rich flame zone, which favors the formation of NOx emissions.

Figure 6a-c depict the temperature contours from the gas torch outlets to the metal plate for propane, propylene, and acetylene, respectively. As shown in Figure 6a-c, the heat transfer rate of propylene fuel is greater than that of propane fuel. However, the central point of the propane flame has a higher temperature, indicating that the flame of the propane fuel is more focused.

Additionally, propylene fuel is considered safer due to its more uniform temperature distribution compared to propane fuel. Propane fuel has a higher flame temperature than propylene fuel, which can lead to localized hotspots during combustion. These hotspots can result in uneven heating of the material being heated, causing thermal stresses and deformation of the material, and can increase the risk of ignition or fire if the hotspots exceed the ignition temperature of the material or the surrounding environment. In contrast, propylene fuel has a lower flame temperature than propane fuel, which leads to a more uniform temperature distribution during combustion. This uniform temperature distribution reduces the risk of localized hotspots and thermal stresses on the heated material. Furthermore, a more uniform temperature distribution also means that the overall temperature of the heated material can be kept lower, which can reduce the risk of ignition or fire.

Figure 7a-c present the temperature distributions of propylene, propane, and acetylene fuels, respectively, as a function of time. These distributions provide valuable insights into the thermal behavior and characteristics of each fuel throughout the experimental duration.
Comparison of Temperature Distributions on a Metal Plate
The temperature distribution of propylene fuel, as depicted in Figure 7a, exhibits an interesting pattern. At the central point, the temperature experiences a rapid rise from its initial value, reaching 222.15 °C within 360 s. As time progresses, the temperature gradually approaches a steady-state value, indicating a more stable thermal condition. At the end of the experiment (1800 s), the maximum temperature recorded at the central point of the propylene fuel is 275.45 °C. This finding suggests that propylene fuel has a relatively fast response in terms of temperature increase and achieves a moderate maximum temperature.
In Figure 7b, the temperature distribution of propane fuel is showcased. The central point temperature of the propane fuel gradually increases from the starting temperature and reaches 211.05 °C after 540 s. However, unlike propylene fuel, the temperature profile of propane fuel does not stabilize and continues to exhibit fluctuations beyond the experimental timeframe. This indicates the potential for a further rise in temperature or variability. Impressively, the central point of the propane fuel records a maximum temperature of 347.65 °C at 1800 s, indicating a higher peak temperature compared to propylene fuel. Figure 7c illustrates the temperature distribution of acetylene fuel. Similar to propylene fuel, the central point temperature of acetylene fuel experiences a rapid initial increase. Within 480 s, the temperature rises quickly from the starting temperature to 301.45 °C. As the experiment progresses, the temperature of the acetylene fuel gradually stabilizes and approaches a steady-state value. At 1800 s, the central point of the acetylene fuel reaches a maximum temperature of 335.45 °C, indicating a relatively high peak temperature.
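The central-point readings reported above can be reduced to average heating rates. The sketch below does this; since the paper does not state the initial plate temperature, an ambient start of 25 °C is assumed, and the resulting rates are illustrative only.

```python
# Average heating rates at the plate centre, computed from the
# central-point temperatures reported above for each fuel.
# ASSUMPTION: the plates start at 25 degC ambient; the paper does not
# state the initial temperature, so these rates are illustrative only.

T_AMBIENT = 25.0  # degC (assumed)

# (time_s, temp_degC) pairs taken from the text: first reported reading
# and the maximum at the end of the experiment (1800 s).
readings = {
    "propylene": [(360, 222.15), (1800, 275.45)],
    "propane":   [(540, 211.05), (1800, 347.65)],
    "acetylene": [(480, 301.45), (1800, 335.45)],
}

for fuel, points in readings.items():
    t1, temp1 = points[0]
    t2, temp2 = points[-1]
    initial_rate = (temp1 - T_AMBIENT) / t1  # degC/s over the rise phase
    late_rate = (temp2 - temp1) / (t2 - t1)  # degC/s after the rise
    print(f"{fuel:9s} initial {initial_rate:.3f} degC/s, "
          f"late {late_rate:.3f} degC/s, peak {temp2} degC")
```

Under this assumption, acetylene shows the steepest initial rise (about 0.58 degC/s) while propane sustains the largest late-phase rate (about 0.11 degC/s), which is consistent with Figures 8 and 9 and with propane's central point ending hottest.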
The observed temperature distributions highlight the distinct characteristics of each fuel gas. Propylene fuel demonstrates a rapid but stable temperature increase, propane fuel exhibits a gradually rising temperature with potential fluctuations, and acetylene fuel showcases a rapid initial increase followed by a relatively stable temperature profile. Figures 8 and 9 compare the temperatures and temperature increase rates of the three fuels at the middle point. The graphs show that the temperature of acetylene fuel initially increases at a higher rate than that of propylene, but the temperature of propane fuel ultimately rises further than that of the other fuels. After a period of burning, the temperatures of the propylene and acetylene fuels stabilize, while the temperature of propane continues to rise.

Figure 9. A comparison of the change in temperature rates at the center points using the three fuels.
Furthermore, the peak temperatures of the propane and acetylene fuels are higher than that of the propylene fuel due to their more concentrated flames. Moreover, acetylene's molecular structure contains a triple bond, and propylene's a double bond, which allows these fuels to react readily with oxygen and create heat rapidly, whereas propane generates more heat due to its high latent heat of vaporization. On the other hand, propane has a lower flame temperature compared to acetylene. Despite this, propane's combustion process is more complete, resulting in a higher energy output per unit mass of fuel. This leads to a more rapid increase in temperature relative to propylene and acetylene.
Conclusion
In this study, we used an experimental strategy to overcome some of the shortcomings of previous experimental optimization approaches. We carefully researched the equivalence ratio, which has sensitive impacts on exhaust gases such as NOx, CO, CO2, and THC, as well as on the temperature distribution when heating a metal plate. The ideal equivalence ratio was also found: when the equivalence ratio is at its ideal value, the torch system performs better. As the equivalence ratio of the three fuels increases and approaches 1.0, the CO, CO2, and THC emissions decrease, although this is not true for NOx emissions. The combustion parameters of the three fuels are optimized at an equivalence ratio of 0.95.
Analyzing the temperature profiles of the different fuels, distinct patterns emerged. Acetylene fuel demonstrated a rapid increase in temperature, surpassing the other fuels. In contrast, the temperature increase for the other fuels was gradual and fluctuating, with the potential for further escalation. Notably, the central point of propane fuel recorded the highest temperature of 347.65 °C at 1800 s, exceeding both propylene fuel (275.45 °C) and acetylene fuel (335.45 °C). Furthermore, except at the center point, the temperature of the propylene fuel was consistently higher compared to the other analyzed fuels. This indicates that, while the flames of the propane and acetylene fuels are more concentrated, the flame of the propylene fuel spreads over a wider area.
Using a propylene gas flame can lead to a reduction in emissions of carbon monoxide and nitrogen oxides compared to the propane flame. Additionally, the temperature distribution of the preheated metal plate was more uniform with the propylene gas flame, indicating improved heat transfer. These findings highlight the potential benefits of employing propylene gas as a fuel source in preheating systems. Its adoption could enhance energy efficiency, promote environmental sustainability through reduced emissions, and facilitate the production of sustainable chemicals. Moreover, the utilization of propylene gas has the potential to optimize energy consumption in various industrial processes. | 2023-08-16T15:09:01.313Z | 2023-08-12T00:00:00.000 | {
"year": 2023,
"sha1": "b65e70fd776e41013d09baca7ef8974bf942783b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/15/16/12306/pdf?version=1691818164",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d68e26ec3c1153c69ad2a02ed2b47b102faede44",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
255813485 | pes2o/s2orc | v3-fos-license | Validation of in-house liquid direct agglutination test antigen: the potential diagnostic test in visceral Leishmaniasis endemic areas of Northwest Ethiopia
Visceral leishmaniasis in Ethiopia is a re-emerging threat to public health, with increased geographical distribution and number of cases. It is a fatal disease without early diagnosis and treatment; thus, the availability of affordable diagnostic tools is crucial. However, due to delays caused by import regulations, procurement and late delivery of imported test kits, accessibility remains a problem for the control program. Therefore, we aimed to produce and evaluate the performance of an in-house liquid (AQ) direct agglutination test (DAT) antigen. The AQ-DAT was produced at the Armauer Hansen Research Institute, using the Leishmania donovani strain MHOM/ET/67/L82. Sera from 272 participants (110 microscopically confirmed cases of VL, 76 apparently healthy individuals, and 86 patients who had infectious diseases other than VL) were tested with AQ-DAT and the standard kits, freeze-dried DAT (FD-DAT) and rK39. Taking microscopy as the gold standard, the sensitivity and specificity of the AQ-DAT were 97.3 and 98.8%, respectively. It had high degrees of agreement (k > 0.8), with a significant (P < 0.05) correlation, compared to microscopy, FD-DAT, and rK39. Although further standardization is required, the in-house AQ-DAT could improve diagnostic accessibility, minimize intermittent stock-outs and strengthen the national VL control program.
Background
Visceral leishmaniasis, also known as kala-azar, is a neglected tropical disease. East Africa carries the second-highest burden of VL in the world. This region has witnessed a re-emergence of VL, which makes it an increasing public health threat [1][2][3]. In the last few decades, VL outbreaks have claimed hundreds of lives in the region, in both previously endemic and non-endemic areas [4][5][6]. In Ethiopia, over 33% of the total landmass is known to harbor the disease. The Amhara, Tigray, Southern Nations, Nationalities, and Peoples', Oromia and Somali regions are at high risk of VL transmission in the country [7]. Visceral leishmaniasis in Ethiopia is caused by Leishmania donovani (L. donovani), and over 3.2 million people are estimated to be at risk, with up to 5000 new cases a year [7,8].
Visceral leishmaniasis is fatal without proper treatment in over 95% of cases. The main drugs available for the treatment of VL are the antimony compounds sodium stibogluconate (SSG) and meglumine antimoniate (Glucantime), liposomal amphotericin B (AmBisome), paromomycin and, now, the oral drug miltefosine. The current first-line treatment for VL in Ethiopia is a combination of an antimonial with an aminoglycoside (SSG and paromomycin), SSG or Glucantime as monotherapy, and liposomal amphotericin B (AmBisome) in special situations such as pregnancy. Liposomal amphotericin B (AmBisome), miltefosine and paromomycin (aminosidine) are second-line treatments for primary VL. The available drugs are not only prohibitively expensive for those most affected, but are also associated with severe side effects. Thus, early and accurate diagnosis is crucial for VL treatment and control. The diagnostic approaches include direct methods, namely microscopy, culture and polymerase chain reaction (PCR), and indirect methods, which detect the patient's immune response to L. donovani, the causative agent of VL in East Africa: the direct agglutination test (DAT), the rK39 immuno-chromatographic test (ICT), and the indirect fluorescence antibody test (IFAT) [9]. Most of the direct methods involve invasive and technically demanding sampling procedures and/or processing steps that need specialized training and expensive set-ups; thus, their use in most endemic settings is hardly possible. Of the indirect methods, DAT and rK39-ICT are less technically demanding and involve minimally invasive sampling procedures. They are the most widely validated tools in the Ethiopian endemic area [10][11][12][13]. Among them, DAT has been shown to be more sensitive, specific and reproducible [12,13]. The production technique of the DAT antigen is not patented, and elsewhere the use of local VL-causing strains has made it even more sensitive and specific [14,15]. Yet the national program is highly dependent on foreign aid to purchase DAT, and this leads to unnecessary delays due to import regulations and processes. As a result, improving accessibility remains a challenge. Thus, local production of an aqueous DAT antigen could increase user-friendliness and avoid or minimize stock-outs.
Sociodemographic characteristics of study participants
We used serum samples from a total of 272 study subjects (110 VL patients and 162 controls) to evaluate the reliability and validity of the DAT antigen developed in-house for the diagnosis of VL. Among these, 185 (68.0%) were male, 170 (62.5%) were within the age group of 15-30 years, and 51 (18.75%) were agricultural laborers (Table 1).
Discussion
The definitive diagnosis of VL is of crucial importance, not only because the disease is almost always fatal if left untreated, but also because delayed diagnosis has implications for transmission and reduces cure rates [8,16]. Moreover, the high cost and severe side effects associated with the available chemotherapeutic options make the value of prompt and accurate diagnosis unquestionable [3,17,18]. However, the VL endemic East African countries, including Ethiopia, lack sufficient capacity and resources for the purchase of diagnostic supplies, and thus their control programs are donor dependent. The national neglected tropical disease program of the Ethiopian Federal Ministry of Health recommends rK39-ICT at primary health care centers, and DAT and microscopy at district and tertiary hospitals, as diagnostic tools for VL [19]. Yet accessibility is limited due to delays related to import regulations and processes, late ordering, and intermittent stock-outs, even in referral setups. Thus, in this study, we produced a whole-cell DAT antigen in liquid form using the MHOM/ET/67/L82 L. donovani strain and assessed its performance against validated commercial kits: FD-DAT (ITMA-DAT/VL, Belgium) and rK39-ICT (InBios International Kalazar Detect™ Rapid test kit, The Netherlands). Our in-house AQ-DAT had a sensitivity (97.3%) comparable to FD-DAT (99.1%) and rK39-ICT (96.5%), taking microscopy as the gold standard. This is similar to a study done in Sudan, in which AQ-DAT, FD-DAT, and rK39 showed sensitivities of 99, 95.8, and 79.2%, respectively [14]. Similarly, studies conducted in Brazil and Sudan documented better sensitivity for FD-DAT (98-100%) compared to rK39-ICT (85.7-90%) [20,21]. In contrast, another study from the northeast of Sudan showed lower sensitivity for FD-DAT (84%) compared with rK39-ICT (93%) [22]. Overall, the observed differences in sensitivity among studies could be due to strain variation that affects antigen gene expression levels.

The specificities of the in-house AQ-DAT, FD-DAT and rK39-ICT were 98.8, 97.5 and 93.2%, respectively, which is in line with the findings of studies done in Sudan, which revealed 100, 100 and 97.6%, respectively [14,21]. The AQ- and FD-DAT also showed similar specificity to the findings reported from the United Kingdom, Brazil and Sudan [20,21,23]. However, the specificity of rK39-ICT in our study was higher than the finding from Brazil (82%) and lower than the findings from Ethiopia, Brazil, Sudan and the United Kingdom (99, 100, 100%, respectively) [10,20,23].
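As a quick consistency check, the headline AQ-DAT figures can be reproduced from the implied two-by-two table against microscopy. The sketch below is ours: the counts are back-calculated from the reported percentages (roughly 107/110 cases positive, 160/162 controls negative), not taken from a table in this paper.

```python
# Reconstruct sensitivity and specificity for AQ-DAT from the implied
# 2x2 confusion matrix (microscopy as the gold standard).
# ASSUMPTION: counts are back-calculated from the reported 97.3%
# sensitivity on 110 cases and 98.8% specificity on 162 controls.

tp, fn = 107, 3   # of 110 microscopy-confirmed VL cases
tn, fp = 160, 2   # of 162 controls

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)  # positive predictive value in this sample
npv = tn / (tn + fn)  # negative predictive value in this sample

print(f"sensitivity = {sensitivity:.1%}")  # 97.3%
print(f"specificity = {specificity:.1%}")  # 98.8%
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```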
In the present study, the comparison among AQ-DAT, FD-DAT and rK39-ICT found between two and ten controls to be positive. The AQ-DAT, FD-DAT and rK39-ICT showed cross-reactivity with serum samples of parasitologically confirmed cases of CL: 2/15 (1.2%), 4/15 (2.5%) and 7/15 (6.8%), respectively; rK39-ICT also reacted with 4 out of 15 schistosomiasis-positive serum samples. It is plausible to attribute the CL cross-reactivity to the genetic similarity of the CL and VL causative agents, which belong to the same genus [24][25][26]. The reaction of rK39-ICT with sera from schistosomiasis patients is worth noting when diagnosing migrant laborers from endemic areas, to prevent unwanted complications from anti-leishmanial treatment through a better diagnostic tool.
All apparently healthy control groups were negative with AQ-DAT, FD-DAT and rK39-ICT, in line with a study in Sudan [27]. In contrast, studies conducted in Brazil and Sudan reported cross-reactions with healthy controls [21,28]. Herein, we observed that AQ-DAT and FD-DAT are more specific and better able to correctly identify the study subjects than rK39-ICT. Although microscopy is taken as the gold standard [29,30] in this study, a positive response to specific anti-leishmanial treatment has also been reported in VL suspects (with unconfirmed infection) who tested positive in the DAT. Moreover, the high specificity in the present study could be due to the enrollment of control groups from a VL non-endemic area.
The reproducibility of the in-house AQ antigen was high compared to FD-DAT (k = 0.962, P < 0.001) and rK39-ICT (k = 0.895, P < 0.001). The accuracy of the rK39-ICT was comparable to a study conducted in another part of Ethiopia (90.6%, k = 0.81; P < 0.05), and the current findings demonstrate a substantial level of agreement between the FD-DAT in this study (k = 0.962, P < 0.001) and a study conducted in Ethiopia (87.7%, k = 0.75, P < 0.05) [12]. Hence, its reliability and the sustainable access to AQ-DAT might offer advantages for use in peripheral health services compared with the current rK39-ICT and FD-DAT. This study did not assess variations in the performance of the in-house liquid DAT antigen based on its stability after storage for various periods of time at different temperatures. In this study, we assessed the diagnostic performance of the AQ-DAT antigen with respect to FD-DAT and rK39 and found encouraging outcomes. The production and distribution of the in-house DAT antigen to end-users in different parts of the Ethiopian health services requires the involvement of commercial companies and/or research institutes. Currently, we plan to secure the implementation of AQ-DAT antigen production at a larger scale, as well as testing of batch-to-batch variability and stability, with the support and collaboration of a research institute (AHRI) and an academic-research institute (University of Gondar).
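For reference, the agreement statistic quoted above is Cohen's kappa, defined from the observed agreement p_o and the chance-expected agreement p_e (standard definition, not specific to this study):

```latex
\kappa \;=\; \frac{p_o - p_e}{1 - p_e},
\qquad
p_e \;=\; \sum_{k \in \{+,-\}} p_{k}^{\,\mathrm{test\,1}} \; p_{k}^{\,\mathrm{test\,2}}
```

Values of k above 0.8, such as those reported here, are conventionally interpreted as almost perfect agreement.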
Conclusion
Overall, AQ-DAT demonstrated comparable performance to FD-DAT and rK39-ICT in diagnosing VL patients. Thus, the AQ-DAT merits GCLP-standard production with wider evaluation. Doing so, we believe, would make access to VL diagnosis more equitable and enable the building of self-sustained programs.

Methods

A questionnaire was used to capture sociodemographic data and previous medical and travel history related to VL. Apparently healthy and non-VL patients with a reported history of VL and/or travel history to VL endemic area(s) were excluded from the study. Serum was separated from ~8 mL of venous blood and stored at −20 °C until transported to AHRI.
Logarithmic-stage promastigotes were harvested by centrifugation at 3000 RPM (Megafuge, Thermo Fisher Scientific Inc., USA) at 4 °C for 10 min. The harvested promastigotes were washed three times with Lock's solution under the same conditions, then treated twice with 0.6% 2-mercaptoethanol in Lock's solution and incubated at 37 °C for 1 h. The promastigotes were then washed three times with Lock's solution at the same centrifugation speed at 4 °C, followed by fixation with 3% (V/V) formaldehyde in Lock's solution overnight at 4 °C.
After overnight incubation, the sediment was washed three times with sodium-citrate-saline (0.15 M NaCl and 0.05 M sodium citrate) at 3000 RPM at 4 °C for 10 min. It was subsequently re-suspended in 100 mL of sodium-citrate-saline solution and stained with 0.2% Coomassie brilliant blue (Fisher Scientific: Janssen Pharmaceutical, Geel, Belgium). After 3 h of staining, and after checking microscopically for the absence of any aggregation and for consistency of staining, the excess stain was washed out, 2-3 times, with sodium-citrate-saline solution until the supernatant became clear. According to the final parasite cell pellet volume (PCV), the sediment was resuspended in 1.2% (V/V) formal-citrate solution (3% formaldehyde in sodium-citrate-saline) and kept on a magnetic stirrer for 1 h. The concentration of the antigen (stained promastigotes) was optimized by counting cells using a Neubauer improved hemocytometer (depth 0.02 mm; small-square area 0.0025 mm²). Typically, 1 mL of PCV is expected from one liter of culture, which is resuspended in 100 mL of 1.2% (V/V) formal-citrate solution, and the concentration was adjusted to 5 × 10^7 parasites/mL. The prepared antigen was kept at 2-8 °C until used. Positive and negative reference sera were included. Sera were measured in two-fold serial dilutions between 1:100 and 1:102,400; serial dilutions showing titers ≥ 1:3200 were considered positive for VL.
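The concentration adjustment described above is a standard counting-chamber calculation. The sketch below applies the stated chamber geometry (depth 0.02 mm, small-square area 0.0025 mm²); the counts and the counting dilution are hypothetical example inputs, not values from this study.

```python
# Promastigote concentration from a counting chamber with the geometry
# stated above: depth 0.02 mm, small-square area 0.0025 mm^2.
# The counts and dilution below are HYPOTHETICAL example inputs.

DEPTH_MM = 0.02
SQUARE_AREA_MM2 = 0.0025
VOL_PER_SQUARE_ML = DEPTH_MM * SQUARE_AREA_MM2 * 1e-3  # 1 mm^3 = 1e-3 mL

TARGET = 5e7  # parasites/mL, the working antigen concentration

counts = [31, 28, 33, 30]  # parasites per small square (example)
dilution = 100             # dilution of the stock before counting (example)

mean_count = sum(counts) / len(counts)
stock_conc = mean_count / VOL_PER_SQUARE_ML * dilution  # parasites/mL

# Factor by which the stock must be diluted to reach the working
# concentration of 5 x 10^7 parasites/mL.
adjust_factor = stock_conc / TARGET
print(f"stock = {stock_conc:.2e}/mL, "
      f"dilute {adjust_factor:.1f}-fold to reach {TARGET:.0e}/mL")
```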
Direct agglutination and rK39 tests
Sera from participants were tested with the rK39-ICT kit (Lot number WM1264, expiry date 12/2019, InBios International Kalazar Detect™, The Netherlands) and FD-DAT (Lot number AHRI-2017-1B, expiry date 01/11/2018, ITMA-DAT, Belgium) as per the suppliers' recommendations. All samples were tested with the AQ-DAT, FD-DAT and rK39-ICT tests and read by the same person. Moreover, all sera were tested at AHRI without prior knowledge of the sample results. The in-house AQ-DAT was used following the same procedure as the commercial FD-DAT. In brief, the FD-DAT antigen was reconstituted according to the manufacturer's instructions. Two-fold dilution series of the sera were made, starting at a dilution of 1:100 and going up to a maximum serum dilution of 1:102,400. Fifty μL of DAT antigen was added to each well containing 50 μL of diluted serum (1 μL patient serum in 49 μL gelatin-saline diluent containing 0.6% 2-mercaptoethanol). After 18 h of incubation at room temperature, samples with a titer ≥ 1:3200 were read as positive. Positive and negative controls (ITMA-DAT) were included on every 5th plate. For AQ-DAT, all procedures were the same; the antigen concentration was adjusted to roughly 5 × 10^7 parasites per mL by counting the promastigotes, as with the FD-DAT, using a Neubauer improved hemocytometer (depth 0.02 mm; small-square area 0.0025 mm²).
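The two-fold titration and the ≥ 1:3200 cut-off lend themselves to a short sketch of the endpoint reading. This is a simplified illustration of the logic described above, not the laboratory SOP; the well readings are hypothetical.

```python
# Two-fold DAT dilution series from 1:100 to 1:102,400 and endpoint
# classification against the >= 1:3200 positivity cut-off described above.

CUTOFF = 3200

# Reciprocal titers: 100, 200, 400, ..., 102400 (11 wells).
dilutions = [100 * 2**i for i in range(11)]

def classify(agglutination):
    """agglutination: list of bools per well, most concentrated first.
    Returns (endpoint_titer, is_positive); the endpoint is the highest
    dilution that still shows agglutination."""
    endpoint = None
    for titer, reacted in zip(dilutions, agglutination):
        if reacted:
            endpoint = titer
        else:
            break  # series is read up to the first non-reacting well
    return endpoint, (endpoint is not None and endpoint >= CUTOFF)

# Hypothetical example: agglutination seen out to 1:6400.
wells = [True] * 7 + [False] * 4
print(classify(wells))  # (6400, True) -> read as positive for VL
```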
rK39-ICT: A 20 μL serum sample was added to the absorbent pad, followed by 2-3 drops of chase buffer solution, and results were read after 10-20 min. Results were interpreted as positive when both the control and test lines appeared; negative when only the control line appeared; and invalid when no control line appeared. Invalid tests were repeated as per the manufacturer's recommendation.
Quality assurance
Blood specimens were collected using appropriate tubes and transported to the testing laboratory according to standardized procedures. During sample storage, the temperatures of all incubators, fridges, and freezers were monitored regularly using calibrated thermometers. All assays were performed according to the manufacturers' instructions, and standard operating procedures (SOPs) were strictly followed. Culture media preparation and sterility testing were conducted according to the SOPs, and the performance of the media was tested with a known reference L. donovani promastigote. The quality of all commercial test kits was evaluated using known positive and negative sera before the actual tests were performed. Laboratory tests were performed and interpreted blindly, without prior knowledge of previous results.
Funding
This project was financed by the Ethiopian Ministry of Science and Technology, the Armauer Hansen Research Institute and the University of Gondar. The funds were used for data collection, laboratory materials and processing.
Availability of data and materials
The data supporting these findings are contained within the manuscript and will be shared upon request to the corresponding author.
Ethics approval and consent to participate
The study was approved by the School of Biomedical and Laboratory Sciences, University of Gondar (SBMLS892/10) and the AHRI/ALERT Research and Ethical Review Committee (AF-10-015). Each study participant was informed about the objective of the study, and written informed consent was obtained from each participant before they were enrolled and asked to give a sample. Written informed consent was also obtained from the legal guardian/parent on behalf of each child/young person. The legal guardian/parent was informed of the right to withdraw the child/young person from the study at any time. Assent was sought from child participants under the age of 16 years in addition to their parents'/guardians' consent. All confirmed cases were linked to the appropriate department for proper treatment and followed up as per the guidelines of the UoGH. | 2023-01-15T14:49:30.460Z | 2020-04-15T00:00:00.000 | {
"year": 2020,
"sha1": "c36f3eb05842179431901cafe5cb38ad8f410821",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12866-020-01780-0",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "c36f3eb05842179431901cafe5cb38ad8f410821",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
247028854 | pes2o/s2orc | v3-fos-license | Climate change and the assemblages of school leaderships
Abstract Anthropogenic climate change and the necessary transformation of society to mitigate its consequences constitutes an unprecedented educational challenge. Responding to the climate emergency and to society’s awakening climate activism generates a complex situation for school leadership in particular. Here, we report findings from our research with climate activist students and teachers in Aotearoa New Zealand. We argue that school leadership plays a crucial role in enabling student and teacher agency and the development of effective Climate Change Education in schools. We utilise assemblage thinking, situating this within the new materialisms, to conceptualise schools and their leadership as dynamic assemblages, and we discuss teacher and student experiences as actors across such assemblages. We conclude that deterritorialisation and decoding of educational institutions and their leadership practices can promote and enable education to become a driver of the cultural transformation of society that the climate emergency mandates.
Introduction
Anthropogenic climate change is the most profound contemporary challenge for the future of humans, nonhumans, and ecosystems (Carter, 2019; IPCC, 2018; Ripple, Wolf, Newsome, Barnard & Moomaw, 2019; Schellnhuber et al., 2016; Steffen et al., 2018). Climate change compels us to re-conceptualise the relationship between humanity and the rest of the more-than-human worlds we inhabit. It injects urgency into the turn towards new materialist ontologies and the project of bridging the divide between discourse and materiality, the social and the natural sciences, and the interrogation of relationships of power and politics, with profound implications for education (Barad, 2007; Ellenzweig, 2017; Reid, 2019; Zembylas, 2017). Wide-ranging and sustained educational initiatives will be required to prepare citizens for the informed decision-making and action-taking needed to meet the challenges we face. Education is central to the generation as well as the reproduction of culture, social patterns and strata. We refer here to Nash's (1990) discussion of Bourdieu's extensive work and theories. Nash (1990) highlights Bourdieu's thesis of schools as a culturally reproductive and 'conservative force' (p. 435) in society, while recognising the autonomous powers that schools have, in principle, to 'shape consciousness, over and above the power of the family' (p. 435). And with respect to the dominant forces of society and the formation of culture, Webb, Schirato, and Danaher (2020) paraphrase Bourdieu in saying that 'something becomes "culture" because it is in someone's (or some institution's) interests for this to be so' (p. 155). Education aids the reproduction of the dominant culture.
Standing at the precipice of the apocalyptic futures evoked by the failings of predominant Western neoliberalist culture, we argue with Irwin (2020) that cultural reproduction can no longer continue (Figure 1) and that instead education must reclaim its autonomy to transform itself into a generator of sustainability culture and a driver of cultural transformation. Education can become a catalyst for positive change to move society towards potential climate-friendly futures. To do so, the generation of agency within educational institutions to change societal attitudes, behaviours and expectations is paramount. We refer here also to the book Touchstones for Deterritorializing Socioecological Learning (Cutter-Mackenzie-Knowles et al., 2019), in which the authors explore wide-ranging aspects of 'the radical re-imagining (or de-imagining as they call it) of educational theory and practice ... in the shadow of the Anthropocene' (p. 276, 278).
Educational leadership has a crucial role in shaping the culture of schools and the praxis of Climate Change Education (CCE). Educational leaders must lead this transformation of education in the context of what Shields (2013) calls a VUCA world (a world dominated by volatility, uncertainty, complexity, and ambiguity). We argue that the climate emergency and the awakening of societal climate activism constitute a context for school leadership that currently ranks somewhere between complex and chaotic on Gilbert's (2015) interpretation of the Cynefin leadership framework developed by Snowden and Boone (2007). Gilbert (2015), in her essay on leadership in collaborative complex education systems, argues that the climate emergency cannot just be a new input into an old system that 'will just be colonised to old ways of thinking' (p. 2) and that the required change has to come from 'within the system, not from top-down' (p. 9) and from the 'interactions between the system's elements – people (teachers, students, school leaders, parents, policymakers, researchers etc.)' (p. 9). Complex systems constitute a 'domain of emergence' (Snowden & Boone, 2007) in which solutions emerge through experimentation that is safe to fail. Shields (2013), citing Caron (2009), argues for transformative leadership that emphasises 'not the volatility, uncertainty, complexity, and ambiguity that are givens, but the need for vision, understanding, clarity, and agility' with 'foresight, insight, and action' (p. 5). Assemblage theory, initially developed by Deleuze and Guattari (1977, 2004) and made more readily accessible by DeLanda (2013, 2016), is emerging as a useful tool for critical analysis of the complex relationships in social and educational contexts. Here, we use assemblage theory in the thematic analysis of our research data and to derive suggestions for the direction school leadership could take to enable deeper engagement with CCE and the transformation of society towards sustainable futures. Due to the climate emergency, our future now hinges on understanding the complexities of the entanglement of society and culture with Earth system processes (Barad, 2007; O'Brien, 2016; Verlie, 2017, 2018). The dramatic climate change-induced events of 2021 remind us that Swyngedouw (2013) was likely correct and that 'the apocalypse is already here' (p. 17). However, Swyngedouw then reminds us not to despair but instead to focus on the 'emancipatory possibilities of apocalyptic life' (p. 17) and on driving the cultural transformation and the politicisation of the environment required to navigate dystopian futures.
Background: curriculum and policies in the space of CCE leadership in Aotearoa New Zealand

Educational leadership from the perspective of our research is positioned in the context of the national curriculum and the educational, cultural and political landscape of Aotearoa New Zealand. Aotearoa New Zealand has no formal constitution as a country but relies on the historic Treaty of Waitangi, Te Tiriti o Waitangi, signed in 1840 between the indigenous Māori population and the colonial British Empire, as its founding document and anchor for the bi-cultural status of the nation (Orange, 2021). Te Tiriti has profound implications for the nation's education system (Glynn, 2015) and will undoubtedly also shape the outlook for the national engagement with climate change and CCE.
The New Zealand Curriculum (NZC) (Ministry of Education, 2015) is the guiding document for education in Aotearoa. In its inception, the NZC echoed recommendations from the 1975 founding Environmental Education (EE) document, The Belgrade Charter (UNESCO, 1975). Its principles outlined the need for community engagement and future focus and offered values that supported the concept of ecological sustainability. The key elements of the NZC align well with EE and offer curriculum guidance to school leadership while allowing flexibility and autonomy within schools to reflect their individual character and community. Furthermore, the NZC suggests EE can be used as a conduit for students to become 'confident, connected and actively involved life-long learners' (Ministry of Education, 2015, p. 8). Overall, EE is well-fitted to the framework of the NZC. However, undermining this intent is the vague and voluntary nature of associated educational policies in regards to EE. For individual schools the level of engagement with EE depends largely on the school's elected governing arm, the school's Board of Trustees (Ministry of Education, 2015, p. 44), the school's leadership team, and often relies on poorly supported but motivated individual teachers within schools. It is, therefore, no surprise that climate strike youth leaders, interviewed at the University of Auckland Sustainable Development Goals workshop (Glasgow, 2019), decried the lack of leadership and governance support for EE/CCE at their schools. This lack of support has been documented globally (Kwauk, 2020). The studies reported here suggest that youth and teachers are aware of the incommensurate educational policies and practices in the face of climate change and are demanding educational transformation that will lead to genuine capacity building necessary to address climate change. Kwauk (2020) analysed issues that hold back the education sector in the times of climate change and highlights hesitancy, lack of knowledge and vision, and structural limitations of school leadership as critical roadblocks. Kwauk (2020) argues that school leadership frequently has a polarising and ambiguous stance towards the treatment of climate change and often fails to take or encourage action by staff and students due to perceived constraints by policies and accountability (p. 7). Kwauk (2020) further states that the lack of leadership leads to a lack of support at micro and macro levels, from encouraging teacher education for sustainability to directing institutions towards implementing meaningful sustainability curricula and assessment. This lack of leadership translates into a failing demand for the resourcing and the building of capacity factors for climate education at all levels and promotes a 'lukewarm stance on climate action' by the school system (p. 8).
In the 1990s, the Aotearoa New Zealand school system underwent a market-oriented reform based on neoliberal ideology and committed to an agenda of globalisation (Codd, 2005; Gordon, 1992). These reforms, according to Codd, resulted in an 'erosion of trust and a degradation of teaching as a profession' (2005, p. 193), and the education system moved closer to the 'orbit of economic policy' (Codd, 2005, p. 193). The climate emergency is anathema to neoliberal doctrines and the management culture these have evoked in education. Wilks, Turner, and Shipway (2019) argue that 'the dominance of neoliberal governance structures in school management' (p. 80) contributed to an increase in self-legitimising structures of regulation and compliance with a 'myriad of policies, procedures and processes' (p. 81), which resulted in a risk-averse, success-metric-driven administrative style. Risk aversion, as Wilks et al. (2019) argue, has been amplified by complicit media and their reinforcement of neoliberal governance. It filters down to teachers and students and generates 'disembodied learning' (p. 83) and 'disempowered students' (p. 85). We argue that this neoliberal culture has set the scene for many of the difficulties the education system faces in engaging proactively with the cultural transformation required to maintain McNeil's 'dark optimism' (Cutter-Mackenzie-Knowles et al., 2019, p. 278) in the light of the unimaginable disruptions heralded by the climate emergency.
On a positive note, CCE and the cultural transformation this entails would not be the first significant transformative process for Aotearoa New Zealand's education system that distances itself from the neoliberal drift. A successful example was the Te Kōtahitanga programme for 'culturally responsive pedagogies designed to enhance Māori student achievement based on the Effective Teaching Profile concept' (Meyer, Penetito, Hynds, Savage & Hindle, 2010, p. 2). The programme focused on cross-curricular intervention to improve Māori (indigenous people of Aotearoa New Zealand) academic achievement by reshaping mainstream schools. A considerable body of evidence confirmed Māori achievement improved at schools in the programme compared to those who were not (Ministry of Education, 2021). In order to achieve success for this transformative programme, Meyer et al. (2010) highlighted the need for 'school leadership to achieve whole-school change' (p. 2) and for 'a permanent senior teacher leadership role' (p. 6) assigned to this project. Bishop (2019) argued that for effective educational reform to occur, school leaders must support ongoing and collaborative transformational practice. Lessons learned from the implementation of the Te Kōtahitanga programme may be transferable to the cultural change that CCE mandates.
Rationale and context of our studies
During 2020 and 2021, we conducted qualitative research in two separate studies with climate activist teachers and climate strike student leaders in Aotearoa New Zealand to gather information about the lived experience of the research participants with respect to CCE at their respective schools. Our research coincided with the rise of the student strike movement that Greta Thunberg initiated (Murphy, 2021) and the selected episodes from our teacher and student interviews in this paper centre around the management of the situations surrounding the student climate strikes by the respective school leaderships. The climate strike movement motivated millions of youth worldwide and, as a result, begs deeper exploration into student engagement, agency and climate action (Bright & Eames, 2020). It is acknowledged that this situation was unprecedented for school leadership teams and that the management of student participation during the strikes was problematic for many secondary schools. It forced school leadership to take a stand and thereby revealed some of the challenges that school leadership will face in confronting the climate emergency.
Methods
The studies reported here were designed using a qualitative approach for data gathering and a post-qualitative approach for the analysis of our findings in reference to assemblage theory. In the study with teachers, the recruitment of the participants was undertaken on teacher-centric social media sites in Aotearoa New Zealand. Seventeen participants self-selected, or were selected by snowball sampling, into the study based on their self-declared stance as climate activist teachers. Three of the participants worked in teacher education or in professional development; the remaining participants were high school teachers. The participants came from a mix of rural and urban schools. Two participants were Māori. The participants were initially interviewed using open-ended, unstructured interviews that permitted rich data on their experience to be obtained, in a method inspired by the Pacifica tradition of the Talaloto (Naufahu, 2018). Subsequent data were gathered using structured surveys. In the study with climate strike leaders, fifteen student leaders from rural and urban schools were selected via social media and snowball sampling for semi-structured interviews via Zoom. The data from both studies were thematically analysed using NVivo. The narratives of our participants were augmented through triangulation with knowledge gathered about the respective schools through the schools' public websites, the researchers' knowledge of the educational landscape in Aotearoa, and geographic and publicly available demographic knowledge about the location and the communities of the respective schools. Ethics approval for the respective studies was gained from the University of Waikato, and informed consent was sought from all participants, including from the parents of the students who participated in the study. Confidentiality and anonymity were enhanced through the use of pseudonyms and the disguise of localities.
Assemblage theory as an analytical tool
Inspired by St. Pierre (2018) to apply post-qualitative elements in the analysis and the discussion of our findings, we refer to posthuman relational ontology and the theory of assemblages. Assemblage theory, based on the work of Deleuze and Guattari (1977, 2004; see also Deleuze, 1988), and later developed by DeLanda (2006, 2013, 2016), has been increasingly referenced as a methodology for critical research (Baker & McGuirk, 2017; Fox & Alldred, 2015; Bazzul & Kayumova, 2016). In our application of assemblage theory we cite DeLanda (2016) frequently due to the excellent summary DeLanda provides of the extensive writings by Deleuze and Guattari, who developed the key elements of assemblage theory. We refer to DeLanda (2016) for references to the original works by those authors. Assemblage theory conceptualises reality as a material-discursive manifold of potentialities that is morphogenetic for, and shaped by, heterogeneous and material-discursive assemblages of humans and more-than-human entities, in which humans and the environment are combined in a flat ontology and relationality is highlighted over essentialism (Fox & Alldred, 2020, p. 270). DeLanda (2016) depicts the Deleuzian agencements [assemblages] as bricolages of heterogeneous elements, most of which are assemblages in themselves. The word 'assemblage', suggestive of a passive and constructed composition, is a problematic translation of Deleuze's French term 'agencement', which evokes a sense of autonomy, agency and dynamism.
Assemblages delineate the belonging of their components through territorialisation and enforce internal cohesion and function through coding. Referring to the work of Deleuze and Guattari, DeLanda (2016) introduces the concept of these two parameters being like 'knobs' (p. 3) which can be set to different values (Figure 2).
Territorialisation, according to DeLanda (2016), refers 'not only to the determination of the spatial boundaries of a whole – as in the territory of a community, city, or nation-state – but also to the degree to which an assemblage's component parts are drawn from a homogeneous repertoire, or the degree to which an assemblage homogenises its own components' (p. 22).
Territorialisation is a measure of distinction and belonging and a delineation between 'us' and 'them', between who or what is inside or outside of the assemblage. A reduction in the territorialisation parameter of an assemblage is referred to as deterritorialisation. The term deterritorialisation is also applied to a process that takes a subject out of a territory and results in a loss of belonging or association with an assemblage. Deterritorialisation of a subject results often in a reterritorialisation into a different assemblage or territory. In the human domain, climate change could lead to a process of significant and literal deterritorialisation for affected communities with a likelihood of dystopian proportions of future climate refugees. And in this paper, we will argue for the need to deterritorialise the assemblages of education, a sentiment that is reflected by Cutter-Mackenzie-Knowles et al. (2019).
Coding of the assemblage, according to DeLanda (2016), refers to 'the role played by special expressive components in an assemblage in fixing the identity of a whole' (p. 22). The expressive components can be emerging phenomena such as the genetic coding of DNA in living cells, or intrinsic physical laws, or, can be deliberately generated in the case of human control within assemblages. Coding supports the legitimacy of the authority structure within the assemblage through 'linguistically coded rituals and regulations' (p. 22) such as written or spoken rules and procedures and lays out the rights and obligations of the assemblage's components. The more authoritarian an assemblage becomes, the more explicit and wide-reaching its coding gets, limiting the degrees of freedom of the components. The discussion by Wilks et al. (2019) of the neoliberal coding of the assemblages within the education system is an example of coding in the context of our research.
Schools are an example of assemblages (Figure 3). According to assemblage theory, assemblages are evoked and move within a manifold of potentialities (DeLanda, 2016, p. 119). This manifold is a combination of the physical and social space. Assemblages reposition themselves in this social-material manifold in a search for optimising certain parameters of their output. Traditionally, schools strive to excel in their prime function of successful social reproduction and the promotion of important cultural traits. Climate change and the resulting climate emergency as events are reshaping the social-material manifold of potentialities with far-reaching consequences. The productive processes of morphogenesis within this social-material manifold are fundamentally affected by this reshaping, with implications for the assemblages that constitute themselves within this space (Figure 4). The assemblages of schools and the assemblages of school leadership are under particular pressure to respond. The climate strike youth movement added a new dimension to the reality in which the assemblages of schools exist (Figure 5). Education has fundamental obligations due to its capacity to affect the social dynamics that will determine our future. How education responds to this challenge will be crucial. The application of assemblage theory as an analysis tool for climate change policy development by Fox and Alldred (2020) was inspirational to our approach. We here apply assemblage theory as a methodological and analytical framework for analysing schools and school leadership as assemblages. We also conceptualise the situation of teachers and students as assemblages and consider how the dynamism of the climate emergency and the reactions of school leadership generate manifestations of deterritorialisation and re-territorialisation for the affected individuals (Fox & Alldred, 2015, p. 401). Fox and Alldred (2015) state that 'power resides in the affective flows between relations in assemblages' (p. 402) and that research itself must be understood as a territorialising assemblage that 'shapes the knowledge it produces according to the particular flows of affect produced by its methodology and methods' (p. 403). It is here noted that the verb 'affect' is at times used as a noun in new materialist literature when contextualising the process of affecting as a phenomenon and a subject in its own right.
Findings and Discussion
We found that the experiences reported by students and teachers were primarily determined by the schools' leadership stance on the climate emergency. Viewed through the lens of assemblage theory and DeLanda's (2016) territorialisation and coding parameters, patterns emerged from our data. We have therefore structured our findings in thematic groups depending on where we could place the assemblages of the respective schools on a coding versus territorialisation continuum (CTC) (Figure 6).
We labelled the positions of assemblages in the four quadrants on the CTC as conservative, progressive, anarchic and dysfunctional. High levels of coding correlate with disciplined and well-organised structures. High levels of territorialisation correlate with closed, dogmatic, defensive and conservative structures. The conservative position is characterised by well defined and defended territory and a strict set of coding. Traditional grammar schools within the context of education in New Zealand befit this label. Progressive schools explore new territory but often within a well defined and codified accountability structure. In the anarchic space, a lack of coding and territorialisation maximises freedom. The anarchic space is also a creative space but a lack of structure can inhibit the effectiveness and reach of its impact. With high levels of territorialisation demands and defensiveness but a lack of functional coded structures in support, the lower right quadrant on the CT continuum signals a dysfunctional assemblage.
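As a toy illustration of how the CTC was used to sort schools, the sketch below maps a (coding, territorialisation) pair onto the four quadrant labels defined above. The 0-1 scale and the 0.5 midline are our own simplifying assumptions; in the study itself, schools were placed qualitatively.

```python
# Toy classifier for the coding-versus-territorialisation continuum (CTC).
# ASSUMPTION: both parameters are normalised to [0, 1] with 0.5 as the
# quadrant midline; the study placed schools on the CTC qualitatively.

def ctc_quadrant(coding: float, territorialisation: float) -> str:
    high_c = coding >= 0.5
    high_t = territorialisation >= 0.5
    if high_c and high_t:
        return "conservative"   # strict coding, defended territory
    if high_c and not high_t:
        return "progressive"    # codified accountability, open territory
    if not high_c and not high_t:
        return "anarchic"       # maximal freedom, little structure
    return "dysfunctional"      # territorial demands without working codes

# e.g. a school like Karl's (discussed below) would sit deep in the
# conservative quadrant:
print(ctc_quadrant(coding=0.9, territorialisation=0.9))  # conservative
```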
Internal structures within schools often reflected their outward stance. The territorialisation of internal structures into faculties emerged as a point of critique by students and teachers in our research and was seen as a hindrance to the development of effective CCE. We labelled the centre of the diagram as transformational instability. We argue that this is a place of potential for change, a saddle of sorts, from which a descent into more stable positions on the CTC can follow. We argue that transformational instability is a position in which constructive creativity can optimise opportunities for the emergence of solutions. Most school structures we found from our research fall either into the conservative or the progressive domain on the CTC.
Cases from conservative schools
Karl is a science teacher at a private urban high school in an affluent community. Karl's experience is typical for teachers working in schools managed by highly territorialising and strongly coding management styles. Karl believes that the climate emergency is humanity's number one problem by a 'long shot'. He stated, 'If the Earth becomes uninhabitable by humans, nothing else matters'. However, his belief is not reflected in the stance of Karl's school management. Karl says that 'in terms of support from management, I've always tried to be a voice for solutions to climate change. But there's a huge amount of resistance to that'. Karl tried to organise students to participate in a school climate strike event. But Karl says, 'I was told, in no uncertain terms, that I was not to advertise that that was taking place on any public forum that the parents might hear about. I was told very specifically, no way, are you to tell the kids that this is happening, or to advertise their involvement'. They should be in school, was the answer. And Karl added, 'To hear that, you know, as a school, which prides itself on preparing kids for the future, I was really disappointed by that response from the management'. In order to address the climate emergency, Karl states 'We actually need to stop nibbling at the edges because that's just not going to cut it anymore. We need significant major, systemic, international, radical change to actually address these problems'. Karl highlighted the need for CCE. 'I don't think there's enough education taking place to let people know that actually, there are some really specific things we could do that would make significant differences'. Karl also highlighted the fact that climate change is not officially in the curriculum very much and that talking about climate change puts him 'off-topic' in his classes.
Karl's experience typifies many of the experiences reported by our other participants from conservative schools. His school's leadership remains entangled deeply in the power structures that define the conservative and culturally reproductive habits of their school. The assemblage in which his school leadership operates, according to the analysis of Karl's narrative, appears to be dominated by the following relations (in no particular order): parental concerns; parental cash flow; school status; student achievement and excellence; prestige; discipline; order; economics. From the perspective of the school management, the environmental materialities of climate change are absent in this assemblage, which appears to be constituted solely from the conservative sociocultural elements in which the school operates. The leadership constitutes their stance through transactions of power relationships between the parents who send their students to the school and pay tuition fees, the pressures of maintaining discipline, an apparent disdain for real student agency, and the maximising of subject learning and credit gathering for the sake of excellence in the culturally reproductive domain in which the school operates. We argue that Karl's story is an example of a category mismatch of leadership reaction. The leadership in Karl's school reacted in what they may have thought of as best practice, regarding the context of the climate emergency as a simple context and perhaps a mere disturbance (Snowden & Boone, 2007, p. 2). Their management style had been successful in the neoliberal growth economy that defined their task as ensuring the seamless and trouble-free reproduction of students and their achievements in the image of their parents. The significance of the change in the context that the climate emergency constitutes has been missed. As Snowden and Boone (2007) state, this can lead to 'catastrophic failure' (p. 3). Karl's school leadership increased coding by making explicit directives not to talk about the climate strikes, and it increased territorialisation by forcing students to remain in the school territory, physically and conceptually. The school's territorial borders became less permeable (Figure 7). Karl chose to abide by the management directives but reported a significant amount of frustration, anger and alienation. Karl has deeply understood the gravity of the situation humanity finds itself in, yet has found himself 'stripped of power' (Jones & Davison, 2021, p. 194) by the way his school leadership operates. The cognitive dissonance between what Karl knows about climate change and being forced to comply with the restrictive stance of his school leadership appeared to be painful for Karl, with all the negative consequences for his well-being that this entails. Karl accepted this disempowerment, and he, in turn, through his compliance, disempowered his students in a typical cascade of hierarchical power relationship entanglements that begin with the school leadership. Analysing Karl's options, he could have resigned from his job, or at least offered his resignation unless management permitted him and his students to engage in the CCE space in the way he saw fit. But wider societal power structures made such a move unlikely. Karl is embedded in and constrained by the entanglements of his own life and the many, often conflicting, obligations that surround them. He has a family to support and perhaps a mortgage to pay.
But central to Karl's professional frustration is the stance of his school leadership. Karl's assemblage, in no particular order, differs significantly from that of his employer: climate emergency; science knowledge; deep concerns about the future; family; economic constraints; school culture; school leadership; colleagues; climate denial in society. Karl's dilemma is reflected in choices made by some teachers in the UK, who gave up their teaching jobs to fight the climate crisis (Staufenberg, 2019). The actions of the leadership of Karl's school caused a deterritorialisation of Karl with respect to his role at the school and a re-territorialisation within the event of the research assemblage, where he was given the time to speak about his concerns freely. Karl was set on a line of flight away from the assemblage of his institution. This is a line of flight for Karl but not a line of exploration for Karl's school. It takes Karl away from the school and distances him from his potential to generate a CCE initiative within the school.
The assemblages of the climate strikes were confronting for conservative schools' leaderships. Seven out of 15 strike leaders interviewed in our research experienced school leadership that actively discouraged or attempted to deter student participation. Flora and Josh's experiences typify the narratives of those from unsupportive secondary schools, reflecting the relationships of power and politics on both student agency and educational outcomes. Flora attended a large city school, and at 15 years old, she was environmentally aware and wanted to strike. However, 'they would punish us, even if your parents sent a permission letter. Their usual policy was one detention per period missed', she said. Josh lived in a rural community that drew students from a predominantly farming community. His school's reluctance to support the strikes was possibly due to negative perceptions held within the community concerning the link between agriculture and climate emissions. He said, 'I put up a few posters but I was promptly called into the Principal's office, being one of the only environmental students, and he told me to stop and I wasn't allowed to use the school to advertise'. Both felt their climate action isolated them from their school, ironically restricting the opportunity for them to be connected, confident and actively involved, a vision promoted in the NZC (Ministry of Education, 2015). The irony, that despite the curriculum's vision some adults attempted to define and regulate the type of agency and involvement occurring (Gordon, 2010), was apparent to many strike leaders. For many students, the strike leaders believed, the lack of school support scuttled their motivation and agency, but for Flora and Josh, their determination to raise climate awareness and enable youth voice emboldened them to defy traditional authoritarian expectations.
Paradoxically, the assertion of power shown by leadership in unsupportive schools had the potential to undermine the respect students may previously have held for the system. Lajos described learning 'a lot about power' from his strike experience, saying 'teachers or any authority making a threat of expelling or whatever, we learned, that is only possible so long as it is easier for them to do it rather than not do it'. Lajos' ethical stance, he considered, was justified by the urgency of the climate crisis and the size of the movement that, for many students, was more compelling than traditional school structures and expectations. The strike leaders perceived the need for cultural transformation to overcome the industrial cultural reproduction mode of traditional education.
The leadership assemblage of the conservative schools reported here did not include the climate emergency, climate change, or the material world. Instead, it relegated them to a disturbing context that is extrinsic to the business of the school. The disciplinary actions of the leadership deterritorialised the students and re-territorialised them into new assemblages of their own choosing, in which the power relationships of the school leadership and their ability to affect were no longer effective. For Flora, Josh and Lajos, significant learning happened outside the school context in a self-organised manner. Their respective school's leadership assemblage appeared limited to classical points relevant to socially reproductive schools. In no particular order: parental concerns; discipline; hierarchical order; school status; student achievement; risk management; timetable; routine; consistency. For the students themselves, a very different assemblage emerged. In no particular order: climate emergency; social justice; deep concerns about their own future; climate denial in society; friendships with other activists; actions; lines of flight from school.
Cases from progressive schools
There were also notable cases where school leadership engaged proactively with CCE and was welcoming and supportive of teacher and student engagement in climate activism and leadership. Tanya's experience is one such example. Tanya teaches social science at a progressive suburban public high school. She wanted to support the students in her class in the climate strikes. She sought permission from her school leadership and asked if she could organise a bus. Tanya wanted the students to be marked truant in order for them to feel their rebel experience and discuss it with their parents. She reported that the principal said, 'No, they can't be marked truant. But I agree with you. It's valuable learning. I understand that you want them to feel like rebels, but we're going to mark them as on a school trip'. The school hired a bus to take Tanya's students to the climate strike. When more students wanted to join the trip, Tanya's principal was very supportive. Tanya reported, 'So we're back to the principal. I mean, what if other students want to come and she was like, the more, the merrier. So we booked another couple of buses'. Besides the support Tanya received from her school leadership in the climate strike events, she was also able to start senior courses for students in social anthropology, focusing on citizen activism and how this can generate significant transformations in society. Tanya's school leadership appeared to realise the fundamental shift in paradigms that the climate emergency constitutes and to be creating the space and capacity for teachers and students to engage constructively towards emerging futures through the engagement at the school. The assemblage of the leadership situation with regard to the climate strikes at Tanya's school includes more-than-human material affects. In no particular order: atmosphere; greenhouse gases; emissions; climate emergency; students' concerns about their future; students' agency; action learning; buses; teacher autonomy; high trust relationship with staff; trust relationship with students; strike as valid action; citizen activism as a valuable learnable skill. The leadership at Tanya's school managed the climate strikes and engagement with social transformation in a way that supported staff and student agency and permitted 'collective intelligence' (Gilbert, 2015, p. 11) to emerge. Tanya reported on the significant learning experience her students had at the climate strikes, the social interactions between the students and the public and the emotional engagement of the students in the experience of the strikes. Tanya, the students, her colleagues, and the school's leadership effectively collectivised (Nairn, 2019) their concerns about the climate emergency, and the students' actions in cooperation with the teachers and the leadership generated hope. Tanya's narrative throughout our interview conveyed satisfaction with her employment and a sense of achievement towards transformational learning at her school. The school leadership as an assemblage encompassed the climate emergency and, in doing so, established a territory in which both students and teachers were included in their climate change concerns, constructive actions and inspired learning.
Tanya's school leadership reduced the degree of territorialisation and coding at the school in order to extend the realm of learning well beyond the school fence and into society (Figure 8). It embraced the challenges as learning opportunities and the teachers' and students' actions as potential lines of exploration that can bring back new knowledge and enrich the school's culture, instead of pushing teachers and students out of the school along lines of flight (Bazzul & Kayumova, 2016, p. 288).
Students like Marama and Madison, who attended progressive schools that were supportive of the climate strikes, reported experiencing an emerging cultural r/evolution that included the schools themselves. As a result, they reported feeling a greater sense of empowerment and agency within their schools. The strike leaders applauded the leadership of schools who dared to endorse them, stating some schools 'were outstanding' with their support. Marama recalled, 'my school was actually very pro-strike, most of our school was going anyway. I went to the first strike and it was the most life-changing experience and most empowering feeling to be there with everyone else'. The collective response-ability Marama reported characterised the strike leaders' experiences as both empowering and as heightening political awareness. A political awakening was considered imperative for effective climate action. 'I had never been involved in anything political before', said Madison, 'it widened my understanding that politics affects everything, to be an advocate and if you want change, you have to be political'. From the students' perspective, the leaderships of supportive and progressive schools appeared to have the climate emergency and the implications this entails included in their assemblages. In no specific order: greenhouse gases; emissions; climate emergency; students' concerns about their future; students' agency; action learning; teacher autonomy; trust relationship with staff; trust relationship with students; strike as valid action; citizen activism as a valuable learnable skill; political skill-building in students; social justice; intergenerational justice.
Schools as transformative spaces
As the climate crisis necessitates transformative processes, the territorialisation and coding within schools must also transform. The evidence from our research suggests that effective CCE would benefit from a new style of educational leadership that permits experimentation, student and teacher agency and community engagement to promote the transformation of society towards a sustainable future. Leadership that fostered a culture in which teachers were encouraged to engage with CCE and students were empowered to embrace student activism generated the potential to develop 'collective intelligence' (Gilbert, 2015, p. 11) in a complex realm. This leadership style effectively applied a deterritorialisation and decoding strategy that allowed students and teachers to emerge from the traditional school setting into a realm of exploration and experimentation. This experience contrasts with schools in which the leadership was antagonistic towards climate change and CCE, enacted disciplinary measures to prevent students from attending climate strikes, and even directed teachers to suppress any discussion of student climate activism. These leaders suppressed the synergy of the domain of emergence (Snowden & Boone, 2007) to the detriment of outcomes by amplifying and enforcing territorialisation and coding in the attempt to preserve order and hierarchy and in denial of the reshaping socio-material reality around them. The call for deterritorialisation and decoding of school leadership also applies to the faculty structure of schools. The climate emergency is a multi-faceted problem affecting all areas of learning and, as Bright and Eames (2020) argue, would benefit from cross-curricular approaches. Kwauk (2020) also identified the current lack of cross-sectoral coalitions in the education system as one of the roadblocks to effective CCE (p. 20), and Stevenson, Nicholls, and Whitehouse (2017) argue that cross-curricular approaches could give CCE the necessary space (p. 70). However, the traditional territorialised faculty and middle management structure of schools as well as firmly defended territories on timetables and within course structures are a significant hindrance. The voices from the climate activist teachers and students in our study confirmed this, with many participants citing the need for more cross-curricular engagement and the lack of space on packed timetables as priority concerns. Depictions of assemblages of departments within schools reveal a frequent lack of space for holistic thinking and a focus on traditional best practices, administrative workloads and a lack of capacity for cross-disciplinary collaboration. Paulene teaches at an urban public high school. She said, 'Our school is still very siloed in terms of the curriculum'. But Paulene is 'trying to find ways to sneak climate change into lessons and into every course I teach in one way or another'. Jessica also teaches at an urban public high school. She frames cross-curricular work towards generating CCE learning in terms of infiltration of ideas into multiple other contexts. 'So, I infiltrate everything I teach, really, with the environment. I reflect that actually, just gentle infiltration is really important as well. So, a little bit of titbits here and there, and every little bit counts'. 
Jessica's strategy of 'gentle infiltration' and Paulene's way of 'sneaking climate change into lessons' are examples of deterritorialising lines of flight, exploration and morphogenetic activism that shift the assemblages in schools from within. Leadership that would grasp the importance of accelerating these shifts could turn the gentle infiltration into a school-wide overt strategy by proactively deterritorialising departments and faculties.
Some schools, however, had already made deliberate attempts to generate collaborative learning teams, with members from different faculties cooperating towards co-created courses. Our research participants working in such schools cited the holistic cross-curricular learning opportunities their schools enabled as an important factor in work satisfaction and the generation of hope. Moana teaches middle school years at a public high school where the old faculty structure has been dissolved and curriculum areas are combined. Moana's school has a large component of urban Māori students, and Te Ao Māori, the holistic worldview of Aotearoa's indigenous people, is an integral part of the school's culture. Moana states: 'And basically, the program's integrated, so there's no English, Maths class. It's all sort of project-based inquiry learning'. In Moana's school, about three staff members are allocated to 50 learners, and specialist teachers come in to support the integrated learning programme. Moana says 'I really enjoy the transdisciplinary learning. So you know, incorporating, so for example, in my course, sustainability standards, and mātauranga standards, with Te Ao Māori as being the hook. Yeah!' Moana says she feels very grateful to be where she is and emphasises her ability to deliver 'firstly, the mātauranga Māori [knowledge, wisdom] around climate change'. The assemblages of Moana's school and of Moana herself, which emerged from our data, seem congruent, contributing to the vibrant positivity that radiated through Moana's interview and her hopeful perspective towards making a positive contribution to her students' developing world views in the light of the challenges we face. This is likely also a testament to the quality of leadership at her school. Moana and her school connect multiple systems of knowledge into a holistic and caring assemblage. In no particular order: Te Ao Māori; Mātauranga Māori; Western science and mathematics; climate; students' concerns about their future; students' agency; action learning; teacher autonomy; high trust relationship with staff; trust relationship with students; holistic care; Te Reo; place-based learning; land, water, air; teamwork. Many of the student strike leaders interviewed for this research expressed general concern over the historical failings of our Western neoliberalist culture and the way it has shaped education. For example, 13-year-old Lilly said, 'I was really confused why no one had done what was needed to be done, why they prioritised the economy over our future when there will be no economy if we don't have a future because you can't eat money'. This comment exemplifies the sentiments of the climate strikers who, notwithstanding their youth, innately understood the need for societal transformation.
The climate strikes of 2019 forced the assemblages of schools and their leaderships to react and revealed cultural divides within the educational landscape of Aotearoa New Zealand. The student strikes represented the concerns youth feel for their future, called for leaders to be accountable and demanded action. For decades, youth have been active and notable catalysts for change: for example, the American civil rights movement, the American Vietnam war protests, the Chinese Tiananmen Square pro-democracy protest, the Arab Spring democracy movement in the Middle East, the American indigenous water rights protest (Blakemore, 2018), and more recently, the Hong Kong protests and the climate strikes. The thinking that leads to youth activism is often alternative to, or decades ahead of, adults' perspectives. Because of their age, however, their voice is often dismissed (Barret, 2018). The assemblages in which many young people constitute their own persona benefit from being as yet unburdened by economic and family responsibilities. Their assemblages demonstrate agility and readiness to accept and respond to risks. Schools could significantly benefit in developing constructive educational capacity by including, not excluding, student climate activism.
The NZC mandates a future-focussed formal education programme and, within sustainability, acknowledges the intergenerational injustice climate change poses for youth. In 2013, Robyn Boswell, the National Director of Future Problem Solving within the Ministry of Education, advised imagining the future with 'hope not horror' (Ministry of Education, 2013). From the perspective of climate strike leaders, the climate emergency constitutes a multi-dimensional paradigm shift. The societal transformations essential for them to have 'hope, not horror' demand a reorientation of school leadership. Nearly a decade on, minimal progress appears to have been made towards actualising hope. The call for transforming educational practice and pedagogy was a recurring theme among the student strike leaders. Strike leader Marama, 18, stated, 'their idea of education is so outdated, it needs a revamp, to be honest. There are so many kids that sit in class every day and are so unhappy with how the world is'. For many of today's youth, Boswell's 'future of horror' is here.
The need for transformative change dovetails with the voices for greater recognition of mātauranga Māori, the knowledge and wisdom of Aotearoa's indigenous population. In Aotearoa New Zealand, and with consideration of the Treaty of Waitangi (Orange, 2021), the opportunities for developing new ways of teaching and learning are particularly interesting with respect to the ongoing deterritorialisation and decoding of cultural and colonial territorial barriers and the creation of a productive partnership between Western and mātauranga Māori knowledge systems. Indigenous and local knowledge systems have been identified as crucial for safeguarding our common future against oppression by the unsustainable practices of the neo-capitalist Western colonial hegemony (Fernández-Llamazares et al., 2021). It would go beyond the scope of this paper to attempt to do justice to this essential aspect of our country's grappling with our colonial past and with developing pathways to a sustainable future in a bi-cultural partnership. However, applying the ideas of DeLanda's (2013, 2016) material-discursive manifold and assemblage theory to the discourse on local and global multicultural futures would seem like a natural progression from here, with a productive environment for research within Aotearoa (Ministry for the Environment, 2007; Talwar, 2021; Tunks, 1997).
From the perspective of DeLanda's (2013, 2016) assemblage theory, school leadership should aim to deterritorialise and decode the management of their schools in order to embrace the climate emergency as a normative and upending symptom of a rapidly changing world in which schools operate. Assumptions about the role of schools as cultural reproduction systems are no longer valid. As identified by both teachers and students, the climate crisis challenges the dominant culture itself to undergo significant systemic changes. The notions of deterritorialisation and decoding of the leadership assemblage in the face of the complex situation that the climate emergency constitutes are mirrored in Boylan's (2016) call for teachers to lead from below, in Gilbert's (2015) focus on the emergence of solutions from the interactions within the system, in Snowden and Boone's (2007) leadership advice for complex and chaotic situations based on their Cynefin framework, and in Verlie's (2018) call for a new and diffractive pedagogy.
DeLanda's (2013) understanding of assemblages as being constituted within and shaping the material-discursive manifold of potentialities brings human and material agency onto the same ontological playing field, a complex, dynamic and morphogenetic space of social and material agency. DeLanda (2013) argues that lines of flight describe movements of assemblages, such as individuals or schools, or the assemblage of a school's leadership, as 'relative accelerations' (p. 94) out of rigid morphologies. In times of complex crises, where solutions need to emerge with urgency, deterritorialisation and decoding encourage these accelerations. We argue that lines of flight can then become the lines of exploration that we need to walk with abandon. DeLanda's manifold of potentialities is a manifold of possibilities, resonances and synergies, and open-minded and attentive exploration of this space can accelerate the finding of solutions. Rigid and territorialised morphologies, however, constrain this process and retain the assemblages and their components in a state of incapacity for evolutionary progress. Rigid morphologies, such as those experienced by schools reluctant to engage in the climate strikes, tend to devalue individuality and lead to the entrenchment of authoritarian and defensive structures.
The material reality of the climate emergency decentralises the human from the agential dynamics of the world, and society and education are finding themselves entangled-with and acting-with climate (Verlie, 2017, 2018). The complexities of the social domain become diffracted by the complexities of humanity's explosive expansion of the last century, its impact on the Earth systems and the consequential material dynamics of climate change that it unleashed. This diffraction (Haraway, 2018; Barad, 2007) reaches deeply into the structures and practices of our education systems and, as Verlie (2018) states, 'is therefore generative, enabling new, novel, innovative, creative or different phenomena to emerge' (p. 7). Verlie (2018) suggests a new diffractive pedagogy that 'cultivates creativity, reconfigures bodies and subjectivities, is dynamic, nonlinear, transdisciplinary, multi-modal, disruptive, unchartered, transcorporeal, interwoven, and one that troubles established categories' (p. 8). Verlie's pedagogy, in turn, suggests a diffractive style of educational leadership that fosters collective response-ability in which new learning and knowledge emerges from the intra-actions of students, teachers, community and the material world. We argue that the deterritorialisation and decoding of schools, their leadership and their internal structures can promote this emergence.
Conclusion
The climate emergency demands that schools and school leaderships interrogate what schools should now actually be doing. The traditional role of schools as institutions that guarantee the reproduction of culture is put into question, just as the role of the current neo-capitalist culture comes into focus as a fundamental contributing cause of the climate emergency and as the producer of the harmful environmental practices that got us into this trouble. While we believe that engagement with CCE should be mandated for all levels of the education system, we argue with Gilbert (2015) not for top-down regulation of CCE but for an approach to leadership that generates the space, encouragement and support for transformational interactions and learning to emerge by inspiring teachers and students to take ownership of this process. This will enable CCE to capitalise on the emergence of solutions through the collective creativity of the people within the system. Schools and their leaderships are now in need of redefining how they operate in order to transcend cultural reproduction towards active cultural transformation and a posthumanist culture that recognises itself, as Fox and Alldred (2020) state, as an 'assemblage of biological, sociocultural, and environmental elements, whose capacities to affect and be affected are contingent upon setting and its relations with other matter' (p. 272). This transition of schools into the reality of the climate emergency will require bold leadership, and we argue that deliberate deterritorialising and decoding actions by school leaderships can promote this process. Teaching the existential threat of the climate crisis is an ethical imperative (Kessel, 2020). The stakes are as high as they get.
| 2022-02-23T16:31:38.461Z | 2022-02-21T00:00:00.000 | {
"year": 2022,
"sha1": "90cf311eeba766821cb1133c9ca3f6e139bbeb9b",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/03B130C5C2A70887D0612BDF50F266D9/S0814062622000088a.pdf/div-class-title-climate-change-and-the-assemblages-of-school-leaderships-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "cf7d7f22eea963ef1791c22765295ce6775e4a89",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
23438629 | pes2o/s2orc | v3-fos-license | Hypertension in Chronic Glomerulonephritis
Chronic glomerulonephritis (GN), which includes focal segmental glomerulosclerosis and proliferative forms of GN such as IgA nephropathy, increases the risk of hypertension. Hypertension in chronic GN is primarily volume dependent, and this increase in blood volume is not related to the deterioration of renal function. Patients with chronic GN become salt sensitive as renal damage, including arteriolosclerosis, progresses, and the consequent renal ischemia causes the stimulation of the intrarenal renin-angiotensin-aldosterone system (RAAS). Overactivity of the sympathetic nervous system also contributes to hypertension in chronic GN. According to the KDIGO guideline, the available evidence indicates that the target BP should be ≤140 mmHg systolic and ≤90 mmHg diastolic in chronic kidney disease patients without albuminuria. In most patients with an albumin excretion rate of ≥30 mg/24 h (i.e., those with both micro- and macroalbuminuria), a lower target of ≤130 mmHg systolic and ≤80 mmHg diastolic is suggested. The use of agents that block the RAAS system is recommended or suggested in all patients with an albumin excretion rate of ≥30 mg/24 h. The combination of a RAAS blockade with a calcium channel blocker and a diuretic may be effective in attaining the target BP, and in reducing the amount of urinary protein excretion in patients with chronic GN.
Hypertension is a frequent finding in chronic kidney diseases. Almost all patients develop hypertension when the glomerular filtration rate (GFR) declines. Renal parenchymal hypertension develops in the setting of acute glomerulonephritis (GN), chronic GN, diabetic nephropathy, polycystic kidney disease, hypertensive nephrosclerosis, and renal microvascular disorders. Mild to moderate hypertension occurs in more than 75% of patients with acute forms of GN, such as poststreptococcal GN 1) . Patients with acute GN have hypertension primarily due to sodium retention leading to fluid overload, as evidenced by suppression of the renin-angiotensin-aldosterone (RAAS) system.
Causes of chronic GN with hypertension include IgA nephropathy (IgAN), membranous nephropathy, membranoproliferative GN, and focal segmental glomerulosclerosis 1). According to one report, hypertension was frequently found in focal segmental sclerosis, membranous nephropathy, IgAN, and membranoproliferative GN 2). Hypertension is a presenting feature in one third of patients with focal segmental sclerosis. Five years after renal biopsy, 92% of normotensive and 47% of hypertensive patients retained normal renal function. These findings suggest that the high prevalence of hypertension in chronic GN is related to declining renal function. In IgAN, 9-53% and 7-15% of patients have hypertension and malignant hypertension, respectively. It was also reported that non-dipper hypertension was observed in 93% of the hypertensive IgAN patients 3). Hypertension and the lack of a circadian rhythm can accelerate the progression of chronic GN, which can, in turn, be slowed by the treatment of hypertension.

Table 1. Pathogenesis of hypertension in chronic glomerulonephritis
1) Sodium and water retention: sodium sensitivity increases as glomerulosclerosis and tubulointerstitial damage progress
2) Excessive activity of the RAAS: renal ischemia is a potent stimulus of renin secretion
3) Increased activity of the sympathetic nervous system: the afferent signal may arise within the kidneys
Pathophysiology of hypertension in chronic GN
There are three main factors contributing to the development of hypertension in patients with chronic GN, which are similar to those in essential hypertension but more accentuated (Table 1). Sodium retention is of primary importance. Increased RAAS activity also contributes to the hypertension; renal ischemia induced by microvascular damage is a potent stimulus of the RAAS. Hypertension also results from overactivity of the sympathetic nervous system, and much evidence indicates increased sympathetic nervous activity in renal disease. Renal ischemia is probably a primary event leading to increased sympathetic nervous activity 4,5).
It was earlier reported that the blood volume was high in patients with IgAN, and that mean arterial pressure was correlated with blood volume, but not with plasma renin activity or GFR. Therefore, it has been suggested that hypertension in IgAN is primarily volume dependent, and that this increase in blood volume is not related to the deterioration of renal function 6). Among IgAN patients with mild proteinuria, hypertension was associated with glomerular sclerosis, interstitial fibrosis/tubular atrophy, interstitial infiltration, and arteriosclerosis, but was not associated with the mesangial score. Arteriosclerosis was positive in 38.6% of hypertensives compared with 3.2% of normotensives. Hypertension affected the prognosis of mild proteinuric IgAN through vascular lesions 7). It was reported that in IgAN and membranous nephropathy, the patients with arteriolar hyalinosis and hypertension are characterized by higher values of glomerular sclerosis and tubulointerstitial damage, which are both responsible for reduced capillary bed perfusion pressure leading to a reduction of interstitial blood flow and consequent hypoxia in the interstitial compartment 8). Furthermore, the sodium sensitivity index and scores for glomerular sclerosis and tubulointerstitial damage were higher in IgAN patients with normal to high-normal blood pressure (BP) or hypertension than in those with optimal BP 9). The mean pressure-natriuresis curve was steeper in the optimal group than in the normal to high-normal or hypertensive IgAN groups. The sodium sensitivity index was significantly correlated with glomerular sclerosis and tubulointerstitial damage. The increased sodium sensitivity appeared before hypertension, and sodium restriction lowered BP to the optimal range and decreased proteinuria. We reported in 234 patients with IgAN that 121 patients (52%) had systolic BP ≥130 mmHg and 74 (32%) ≥140 mmHg. Systolic BP was positively correlated with serum uric acid concentrations and with pathological findings, including glomerulomegaly and the degree of tubulointerstitial fibrosis and deposits of IgM and C3, and negatively with post-glomerular filtration rate and the slope of change in 1/serum creatinine for 2 years (Table 2). Patients with systolic BP ≥130 mmHg, compared with those <130 mmHg, were older and showed more severe clinico-pathological findings such as pre- and post-serum creatinine, amount of proteinuria, glomerulomegaly, tubulointerstitial fibrosis, and deposits of IgM and C3 10-12). It was observed in patients with IgAN that urinary angiotensinogen levels, renal tissue angiotensinogen expression and angiotensin II immunoreactivity were significantly higher, and that urinary angiotensinogen reflects the activity of the intrarenal RAAS system 13). Even though the IgAN patients did not show massive renal damage, immunoreactivity of heme oxygenase-1 and angiotensinogen was increased in these patients at this time point. These data suggest that intrarenal reactive oxygen species and RAAS activation play a pivotal role in the development of IgAN during the early stages and provide a supportive foundation for the effectiveness of RAAS blockade in IgAN 14).

Table 3. The target blood pressure in the KDIGO clinical practice guidelines in non-diabetic chronic kidney disease
1) In patients with proteinuria 30-300 or ≥300 mg/24 hr: blood pressure ≤130/80 mmHg
2) In patients with proteinuria <30 mg/24 hr: blood pressure ≤140/90 mmHg
We observed that urine angiotensinogen levels correlate with the urine protein-to-creatinine ratio and serum creatinine concentrations in patients with IgAN 11). Finally, increased urinary angiotensinogen levels induced by salt and the associated renal damage lead to the development of salt-sensitive hypertension in patients with IgAN 15). Therefore, it can be speculated that patients with chronic GN become salt sensitive as renal damage progresses, and that the consequent reduction of interstitial blood flow and hypoxia causes the stimulation of the intrarenal RAAS, which, in turn, contributes to the development of salt-sensitive hypertension.
Several studies have shown the gene polymorphisms contributing to hypertension and the relationship between BP-related genes and disease progression in patients with IgAN. Men with AGT M235T TT were found to be at an increased risk of IgAN progression compared to those with the other genotypes 16) . CD14 -159CC and ACE DD were independently associated with hypertension 17) . There was a significant difference in the therapeutic response to ARB between the DD/ID genotype and the II genotype 18) .
New aspects of anti-hypertensive therapy in patients with chronic GN
According to the KDIGO guideline 19), the available evidence indicates that in chronic kidney disease patients without albuminuria the target BP should be ≤140 mmHg systolic and ≤90 mmHg diastolic. In most patients with an albumin excretion rate of ≥30 mg/24 h (i.e., those with both micro- and macroalbuminuria), a lower target of ≤130 mmHg systolic and ≤80 mmHg diastolic is suggested (Table 3). In achieving BP control, the value of lifestyle changes and the need for multiple pharmacological agents are acknowledged. However, the JNC VIII and ESH/ESC guidelines recommend a systolic BP goal of <140 mmHg in patients with chronic kidney disease 20,21). Thus, the most appropriate targets for systolic BP to reduce cardiovascular morbidity and mortality in these patients are still uncertain.
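To make the guideline's threshold logic concrete, the decision rule just described can be written out as a short sketch (Python is used here purely for illustration; the function name and the wording of the returned targets are our own, and this is in no way clinical decision software):

def kdigo_bp_target(albumin_excretion_mg_per_24h):
    # KDIGO rule as described above: an albumin excretion rate of
    # >=30 mg/24 h (micro- or macroalbuminuria) suggests the lower
    # target of <=130/80 mmHg, with RAAS blockade recommended or
    # suggested; otherwise the target is <=140/90 mmHg.
    if albumin_excretion_mg_per_24h >= 30:
        return "target <=130/80 mmHg; RAAS blockade recommended or suggested"
    return "target <=140/90 mmHg"

# Hypothetical examples:
print(kdigo_bp_target(500))  # macroalbuminuria -> lower target
print(kdigo_bp_target(10))   # no albuminuria   -> standard target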
Intensive BP treatment was examined in several studies, such as ACCORD 22), HALT-PKD 23) and SPRINT 24). It was recently reported in the SPRINT trial that among non-diabetic patients at high risk for cardiovascular events, targeting a systolic BP of <120 mmHg, as compared with <140 mmHg, resulted in lower rates of fatal and nonfatal major cardiovascular events and of death from any cause, although significantly higher rates of some adverse events were observed in the intensive-treatment group. Among patients with polycystic kidney disease in the HALT-PKD study, as compared with standard BP control, rigorous BP control was associated with a slower increase in total kidney volume, no overall change in the estimated GFR, a greater decline in the left-ventricular-mass index, and a greater reduction in urinary albumin excretion. The ACCORD study, in contrast, showed in older patients with type 2 diabetes that intensive antihypertensive therapy did not significantly reduce the rate of a composite outcome of fatal and nonfatal major cardiovascular events. An intensive strategy reduced the new development of microalbuminuria by 16%, but had no significant effect on other microvascular end points or composites. So far, the evidence suggests that a lower target of ≤130 mmHg systolic and ≤80 mmHg diastolic may be appropriate for patients with chronic GN and both micro- and macroalbuminuria if they are not elderly and do not have high cardiovascular risk.
Which is the best anti-hypertensive drug? According to the KDOQI guideline, hypertensive people with diabetes and chronic kidney disease stages 1-4 should be treated with an ACE inhibitor or an angiotensin receptor blocker, usually in combination with a diuretic 25). An approach to diabetic nephropathy based on RAAS blockade in type II diabetes is projected to result in a lower incidence of end-stage renal disease (66%) compared with placebo. It is also suggested that using an ACE inhibitor or an angiotensin receptor blocker in normotensive patients with diabetes and albuminuria levels ≥30 mg/g is more beneficial. In the KDIGO guideline, the use of agents that block the RAAS system is recommended or suggested in all patients with an albumin excretion rate of ≥30 mg/24 h 19). The ESH/ESC hypertension guideline states that RAS blockers are more effective in reducing albuminuria than other antihypertensive agents 21). The ONTARGET study showed that both angiotensin receptor blockers and ACE inhibitors were equally effective in improving outcome (dialysis, doubling of serum creatinine, and death), and the number of events for the composite outcome was similar for both 26). Because of the reduction in renal function, the use of diuretics is particularly important in patients with chronic kidney disease, who are usually volume overloaded.
The attainment of target BP in patients with chronic kidney disease typically requires multidrug therapy, as seen in several clinical studies. It is commonly accepted that a combination of an ACE inhibitor with a calcium channel blocker is superior to placebo and equivalent or superior to ACE inhibitors alone for inhibiting the progression of diabetic kidney disease. The ACCOMPLISH study showed that the combination of an ACE inhibitor with a calcium channel blocker is superior to the combination of an ACE inhibitor with a beta blocker 27). A number of trials and meta-analyses have shown that combined ACE inhibitor/angiotensin receptor blocker therapy has a greater anti-proteinuric effect than either agent alone. However, in the dual-therapy group of the ONTARGET study, even though there was a significant reduction in proteinuria, there was an increase in adverse effects and worsening renal outcomes 26). Whether combined ACE inhibitor/angiotensin receptor blocker therapy may be beneficial for the cardio-renal outcomes in non-elderly patients with chronic GN who do not have high cardiovascular risk remains unresolved.
In summary, the combination of an ACE inhibitor/angiotensin receptor blocker with a calcium channel blocker (long-acting dihydropyridine or non-dihydropyridine) and a diuretic may be effective in attaining the target BP and in reducing the amount of urinary protein excretion in patients with chronic GN.
| 2017-08-15T22:59:09.707Z | 2015-12-01T00:00:00.000 | {
"year": 2015,
"sha1": "3f5cdab490ff6f7c374c063c8504cff789b8ef21",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4737660?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f5cdab490ff6f7c374c063c8504cff789b8ef21",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.